diff --git a/404.html b/404.html index 5c4c20fc6..5cff1efe3 100644 --- a/404.html +++ b/404.html @@ -5,11 +5,13 @@ VuePress + + - + @@ -132,6 +134,6 @@ (opens new window)

404

There's nothing here.
Take me home.
- + diff --git a/GLOSSARY.html b/GLOSSARY.html index 430c7c606..dce0bc6b9 100644 --- a/GLOSSARY.html +++ b/GLOSSARY.html @@ -6,8 +6,10 @@ Glossary | Cadence + + - + @@ -129,6 +131,6 @@ Cadence Web UI (opens new window)

# Glossary

activity
A business-level function that implements your application logic, such as calling a service or transcoding a media file. An activity usually implements a single well-defined action; it can be short or long running. An activity can be implemented as a synchronous method or fully asynchronously involving multiple processes. An activity can be retried indefinitely according to the provided exponential retry policy. If for any reason an activity is not completed within the specified timeout, an error is reported to the workflow, and the workflow decides how to handle it. There is no limit on potential activity duration.
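In the Go client, an activity is an ordinary function whose first parameter is a context.Context. A minimal sketch (the TranscodeActivity name and its body are hypothetical):

```go
package sample

import "context"

// TranscodeActivity is a hypothetical activity: a single, well-defined
// business action. Cadence retries it according to the retry policy the
// calling workflow configures, and reports a timeout error back to the
// workflow if it does not complete in time.
func TranscodeActivity(ctx context.Context, fileURL string) (string, error) {
	// ... call the transcoding service here ...
	return fileURL + ".mp4", nil
}
```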
activity task
A task that contains activity invocation information and is delivered to an activity worker through an activity task list. Upon receiving an activity task, an activity worker executes the corresponding activity.
activity task list
A task list that is used to deliver activity tasks to activity workers.
activity worker
An object that runs in the client application and receives activity tasks from an activity task list it is subscribed to. Once a task is received, it invokes the corresponding activity.
archival
Archival is a feature that automatically moves event history from persistence to a blobstore after the workflow retention period. The purpose of archival is to be able to keep histories as long as needed while not overwhelming the persistence store. There are two reasons you may want to keep the histories after the retention period has passed: 1. Compliance: For legal reasons, histories may need to be stored for a long period of time. 2. Debugging: Old histories can still be accessed for debugging.
CLI
Cadence command-line interface.
client stub
A client-side proxy used to make remote invocations to the entity that it represents. For example, to start a workflow, a stub object that represents this workflow is created through a special API. This stub is then used to start, query, or signal the corresponding workflow. The Go client doesn't use stubs; it references workflows directly by function or by name.
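A minimal Go sketch of the stub-free style, assuming an already-constructed client and a registered "OrderWorkflow" (both names are hypothetical):

```go
package sample

import (
	"context"
	"time"

	"go.uber.org/cadence/client"
)

// startOrder references the workflow by its registered name rather than
// through a generated stub.
func startOrder(ctx context.Context, cadenceClient client.Client) error {
	_, err := cadenceClient.StartWorkflow(ctx, client.StartWorkflowOptions{
		ID:                           "order-12345",
		TaskList:                     "order-tasklist",
		ExecutionStartToCloseTimeout: time.Hour,
	}, "OrderWorkflow", "order-12345")
	return err
}
```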
decision
Any action taken by the workflow durable function is called a decision. For example: scheduling an activity, canceling a child workflow, or starting a timer. A decision task contains an optional list of decisions. Every decision is recorded in the event history as an event. See [1] for further explanation.
decision task
Every time a new external event that might affect the workflow state is recorded, a decision task that contains it is added to a decision task list and then picked up by a workflow worker. After the new event is handled, the decision task is completed with a list of decisions. Note that handling a decision task is usually very fast and is not related to the duration of the operations that the workflow invokes. See [1] for further explanation.
decision task list
A task list that is used to deliver decision tasks to workflow workers. From the user's point of view, it can be seen as a worker pool: it defines a pool of workers executing workflow or activity tasks.
domain
Cadence is backed by a multitenant service. The unit of isolation is called a domain. Each domain acts as a namespace for task list names as well as workflow IDs. For example, when a workflow is started, it is started in a specific domain. Cadence guarantees a unique workflow ID within a domain, and allows workflow executions in different domains to use the same workflow ID. Various configuration options, such as the retention period or the archival destination, are configured per domain through a special CRUD API or through the Cadence CLI. In a multi-cluster deployment, a domain is the unit of failover. Each domain can be active on only a single Cadence cluster at a time. However, different domains can be active in different clusters and can fail over independently.
event
An indivisible operation performed by your application. For example, activity_task_started, task_failed, or timer_canceled. Events are recorded in the event history.
event history
An append log of events for your application. History is durably persisted by the Cadence service, enabling seamless recovery of your application state from crashes or failures. It also serves as an audit log for debugging.
local activity
A local activity is an activity that is invoked directly in the same process by workflow code. It consumes far fewer resources than a normal activity, but imposes significant limitations, such as a short maximum duration and a lack of rate limiting.
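A minimal Go sketch of a local activity invocation (the checkQuota function and QuotaWorkflow are hypothetical); note the short timeout, since local activities are meant for brief operations:

```go
package sample

import (
	"context"
	"time"

	"go.uber.org/cadence/workflow"
)

// checkQuota is a hypothetical short-lived function that runs in the same
// process as the workflow worker, which is what makes it a good local activity.
func checkQuota(ctx context.Context, userID string) (bool, error) {
	return true, nil
}

func QuotaWorkflow(ctx workflow.Context, userID string) error {
	lao := workflow.LocalActivityOptions{
		ScheduleToCloseTimeout: 5 * time.Second, // local activities should be brief
	}
	ctx = workflow.WithLocalActivityOptions(ctx, lao)

	var ok bool
	return workflow.ExecuteLocalActivity(ctx, checkQuota, userID).Get(ctx, &ok)
}
```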
query
A synchronous (from the caller's point of view) operation that is used to report the workflow state. Note that a query is inherently read-only and cannot affect the workflow state.
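A minimal Go sketch of registering a query handler inside a workflow (the "status" query type and StatusWorkflow are hypothetical):

```go
package sample

import (
	"time"

	"go.uber.org/cadence/workflow"
)

// StatusWorkflow registers a hypothetical "status" query handler. Handlers
// must be read-only: they report workflow state without mutating it.
func StatusWorkflow(ctx workflow.Context) error {
	status := "started"
	err := workflow.SetQueryHandler(ctx, "status", func() (string, error) {
		return status, nil
	})
	if err != nil {
		return err
	}

	status = "waiting"
	return workflow.Sleep(ctx, time.Hour) // stand-in for the real workflow logic
}
```

A caller would then read the state with the client's QueryWorkflow call, passing the workflow ID, run ID, and the "status" query type.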
run ID
A UUID that the Cadence service assigns to each workflow run. If allowed by a configured policy, a workflow may be re-executed, after it has closed or failed, with the same workflow ID. Each such re-execution is called a run. The run ID uniquely identifies a run even when it shares a workflow ID with others.
signal
An external asynchronous request to a workflow. It can be used to deliver notifications or updates to a running workflow at any point in its existence.
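A minimal Go sketch of receiving a signal inside a workflow (the "approval" signal name and WaitForApproval are hypothetical):

```go
package sample

import "go.uber.org/cadence/workflow"

// WaitForApproval blocks until an external caller delivers a hypothetical
// "approval" signal, then continues with the received value.
func WaitForApproval(ctx workflow.Context) (bool, error) {
	var approved bool
	workflow.GetSignalChannel(ctx, "approval").Receive(ctx, &approved)
	return approved, nil
}
```

On the sending side, the client's SignalWorkflow call delivers the signal by workflow ID, run ID, and signal name.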
task
The context needed to execute a specific activity or workflow state transition. There are two types of tasks: an activity task and a decision task (aka workflow task). Note that a single activity execution corresponds to a single activity task, while a workflow execution employs multiple decision tasks.
task list
The common name for an activity task list or a decision task list.
task token
A unique correlation ID for a Cadence activity. Activity completion calls take either a task token or the DomainName, WorkflowID, and ActivityID arguments.
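A minimal Go sketch of completing an activity asynchronously by task token, assuming a constructed client and a token captured earlier while the activity was running:

```go
package sample

import (
	"context"

	"go.uber.org/cadence/client"
)

// completeLater reports an activity result out-of-band, using the task token
// captured during activity execution as the correlation ID. How the token was
// stored and handed to this function is an assumption.
func completeLater(ctx context.Context, cadenceClient client.Client, taskToken []byte, result string) error {
	// CompleteActivityByID is the alternative form that identifies the
	// activity by domain, workflow ID, and activity ID instead of the token.
	return cadenceClient.CompleteActivity(ctx, taskToken, result, nil)
}
```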
worker
Also known as a worker service. A service that hosts the workflow and activity implementations. The worker polls the Cadence service for tasks, performs those tasks, and communicates task execution results back to the Cadence service. Worker services are developed, deployed, and operated by Cadence customers.
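A minimal Go sketch of a worker process; buildServiceClient, OrderWorkflow, and TranscodeActivity are hypothetical placeholders for your own setup code and implementations:

```go
package main

import "go.uber.org/cadence/worker"

func main() {
	// buildServiceClient is a hypothetical helper that dials the Cadence
	// frontend and returns the generated service client interface.
	w := worker.New(buildServiceClient(), "sample-domain", "sample-tasklist", worker.Options{})

	// Register the workflow and activity implementations this worker hosts
	// (names hypothetical).
	w.RegisterWorkflow(OrderWorkflow)
	w.RegisterActivity(TranscodeActivity)

	// Poll the Cadence service for tasks and report results until stopped.
	if err := w.Run(); err != nil {
		panic(err)
	}
}
```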
workflow
A fault-oblivious stateful function that orchestrates activities. A workflow has full control over which activities are executed, and in which order. A workflow must not affect the external world directly, only through activities. What makes workflow code a workflow is that its state is preserved by Cadence. Therefore any failure of a worker process that hosts the workflow code does not affect the workflow execution. The workflow continues as if these failures did not happen. At the same time, activities can fail any moment for any reason. Because workflow code is fully fault-oblivious, it is guaranteed to get notifications about activity failures or timeouts and act accordingly. There is no limit on potential workflow duration.
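A minimal Go sketch of a workflow function (OrderWorkflow and the "ChargeActivity" name are hypothetical); all interaction with the external world goes through the activity call:

```go
package sample

import (
	"time"

	"go.uber.org/cadence/workflow"
)

// OrderWorkflow is a hypothetical fault-oblivious workflow: it touches the
// external world only through activities, and Cadence preserves its state
// across worker failures.
func OrderWorkflow(ctx workflow.Context, orderID string) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    10 * time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var receipt string
	// "ChargeActivity" is an assumed registered activity name.
	return workflow.ExecuteActivity(ctx, "ChargeActivity", orderID).Get(ctx, &receipt)
}
```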
workflow execution
An instance of a workflow. The instance can be in the process of executing or it could have already completed execution.
workflow ID
A unique identifier for a workflow execution. Cadence guarantees the uniqueness of an ID within a domain. An attempt to start a workflow with a duplicate ID results in an already started error.
workflow task
A synonym for decision task.
workflow worker
An object that runs in the client application and receives decision tasks from a decision task list it is subscribed to. Once a task is received, it is handled by the corresponding workflow.

[1] What exactly is a Cadence decision task?

- + diff --git a/assets/js/104.2a64d958.js b/assets/js/104.8ca9c752.js similarity index 96% rename from assets/js/104.2a64d958.js rename to assets/js/104.8ca9c752.js index 2d95a60a2..22ea5a9d3 100644 --- a/assets/js/104.2a64d958.js +++ b/assets/js/104.8ca9c752.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[104],{411:function(e,t,r){"use strict";r.r(t);var n=r(0),a=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"contact-us"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#contact-us"}},[e._v("#")]),e._v(" Contact us")]),e._v(" "),t("p",[e._v("If you have a question, check whether it is already answered at stackoverflow under "),t("a",{attrs:{href:"https://stackoverflow.com/questions/tagged/cadence-workflow",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadence-workflow"),t("OutboundLink")],1),e._v(" tag.")]),e._v(" "),t("p",[e._v("If you still need help, visit "),t("slack-link"),e._v(".")],1),e._v(" "),t("p",[e._v("If you have a feature request or a bug to report file an issue against one of the Cadence github repositories:")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Service and CLI"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber-go/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Go Client"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Go Client Samples"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber-java/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Java Client"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber/cadence-java-samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Java Client Samples"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber/cadence-web",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Web UI"),t("OutboundLink")],1)])])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[104],{412:function(e,t,r){"use strict";r.r(t);var n=r(0),a=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"contact-us"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#contact-us"}},[e._v("#")]),e._v(" Contact us")]),e._v(" "),t("p",[e._v("If you have a question, check whether it is already answered at stackoverflow under "),t("a",{attrs:{href:"https://stackoverflow.com/questions/tagged/cadence-workflow",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadence-workflow"),t("OutboundLink")],1),e._v(" tag.")]),e._v(" "),t("p",[e._v("If you still need help, visit "),t("slack-link"),e._v(".")],1),e._v(" "),t("p",[e._v("If you have a feature request or a bug to report file an issue against one of the Cadence github repositories:")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Service and CLI"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber-go/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Go 
Client"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Go Client Samples"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber-java/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Java Client"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber/cadence-java-samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Java Client Samples"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber/cadence-web",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Web UI"),t("OutboundLink")],1)])])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/105.227b5c90.js b/assets/js/105.8610d1ed.js similarity index 97% rename from assets/js/105.227b5c90.js rename to assets/js/105.8610d1ed.js index 012b37132..fcec6c470 100644 --- a/assets/js/105.227b5c90.js +++ b/assets/js/105.8610d1ed.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[105],{412:function(t,s,i){"use strict";i.r(s);var e=i(0),a=Object(e.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey},scopedSlots:t._u([{key:"footer",fn:function(){return[s("p",[t._v("© "+t._s((new Date).getFullYear())+" "),s("a",{attrs:{href:"https://uber.github.io/",target:"_blank",rel:"noopener noreferrer"}},[t._v("Uber Technologies, Inc."),s("OutboundLink")],1)])]},proxy:!0}])},[s("div",{staticClass:"section"},[s("div",{staticClass:"content"},[s("h1",[t._v("Easy to use")]),t._v(" "),s("div",{staticClass:"grid"},[s("div",{staticClass:"grid-col-4 text-align-center"},[s("img",{attrs:{src:"img/arrow_divert_filled.svg",width:"200px"}})]),t._v(" "),s("div",{staticClass:"grid-col-8"},[s("p",[t._v("Workflows provide primitives to allow application developers to express complex business logic as code.")]),t._v(" "),s("p",[t._v("The underlying platform abstracts scalability, reliability and availability concerns from individual developers/teams.")])])])])]),t._v(" "),s("div",{staticClass:"section alt"},[s("div",{staticClass:"content"},[s("h1",[t._v("Fault tolerant")]),t._v(" "),s("div",{staticClass:"grid"},[s("div",{staticClass:"grid-col-8"},[s("p",[t._v("Cadence enables writing stateful applications without worrying about the complexity of handling process failures.")]),t._v(" "),s("p",[t._v("Cadence preserves complete multithreaded application state including thread stacks with local variables across hardware and software failures.")])]),t._v(" "),s("div",{staticClass:"grid-col-4 text-align-center"},[s("img",{attrs:{src:"img/gears_outlined.svg",width:"200px"}})])])])]),t._v(" "),s("div",{staticClass:"section"},[s("div",{staticClass:"content"},[s("h1",[t._v("Scalable & Reliable")]),t._v(" "),s("div",{staticClass:"grid"},[s("div",{staticClass:"grid-col-4 text-align-center"},[s("img",{attrs:{src:"img/chart_bar_ascending_filled.svg",width:"200px"}})]),t._v(" "),s("div",{staticClass:"grid-col-8"},[s("p",[t._v("Cadence is designed to scale out horizontally to handle millions of concurrent workflows.")]),t._v(" "),s("p",[t._v("Cadence provides out-of-the-box asynchronous history event replication that can help you recover from zone failures.")])])])])])])}),[],!1,null,null,null);s.default=a.exports}}]); \ No newline at end of file 
+(window.webpackJsonp=window.webpackJsonp||[]).push([[105],{411:function(t,s,i){"use strict";i.r(s);var e=i(0),a=Object(e.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey},scopedSlots:t._u([{key:"footer",fn:function(){return[s("p",[t._v("© "+t._s((new Date).getFullYear())+" "),s("a",{attrs:{href:"https://uber.github.io/",target:"_blank",rel:"noopener noreferrer"}},[t._v("Uber Technologies, Inc."),s("OutboundLink")],1)])]},proxy:!0}])},[s("div",{staticClass:"section"},[s("div",{staticClass:"content"},[s("h1",[t._v("Easy to use")]),t._v(" "),s("div",{staticClass:"grid"},[s("div",{staticClass:"grid-col-4 text-align-center"},[s("img",{attrs:{src:"img/arrow_divert_filled.svg",width:"200px"}})]),t._v(" "),s("div",{staticClass:"grid-col-8"},[s("p",[t._v("Workflows provide primitives to allow application developers to express complex business logic as code.")]),t._v(" "),s("p",[t._v("The underlying platform abstracts scalability, reliability and availability concerns from individual developers/teams.")])])])])]),t._v(" "),s("div",{staticClass:"section alt"},[s("div",{staticClass:"content"},[s("h1",[t._v("Fault tolerant")]),t._v(" "),s("div",{staticClass:"grid"},[s("div",{staticClass:"grid-col-8"},[s("p",[t._v("Cadence enables writing stateful applications without worrying about the complexity of handling process failures.")]),t._v(" "),s("p",[t._v("Cadence preserves complete multithreaded application state including thread stacks with local variables across hardware and software failures.")])]),t._v(" "),s("div",{staticClass:"grid-col-4 text-align-center"},[s("img",{attrs:{src:"img/gears_outlined.svg",width:"200px"}})])])])]),t._v(" "),s("div",{staticClass:"section"},[s("div",{staticClass:"content"},[s("h1",[t._v("Scalable & Reliable")]),t._v(" "),s("div",{staticClass:"grid"},[s("div",{staticClass:"grid-col-4 text-align-center"},[s("img",{attrs:{src:"img/chart_bar_ascending_filled.svg",width:"200px"}})]),t._v(" "),s("div",{staticClass:"grid-col-8"},[s("p",[t._v("Cadence is designed to scale out horizontally to handle millions of concurrent workflows.")]),t._v(" "),s("p",[t._v("Cadence provides out-of-the-box asynchronous history event replication that can help you recover from zone failures.")])])])])])])}),[],!1,null,null,null);s.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/12.495ad03a.js b/assets/js/12.dc4847f4.js similarity index 99% rename from assets/js/12.495ad03a.js rename to assets/js/12.dc4847f4.js index 8dc5274c0..bbd950ba0 100644 --- a/assets/js/12.495ad03a.js +++ b/assets/js/12.dc4847f4.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[12],{338:function(e,t,n){e.exports=n.p+"assets/img/using.9bd0e215.png"},339:function(e,t,n){e.exports=n.p+"assets/img/job_role.eb8d055a.png"},340:function(e,t,n){e.exports=n.p+"assets/img/scale.d95347cf.png"},341:function(e,t,n){e.exports=n.p+"assets/img/time_zone.9f0a17fe.png"},342:function(e,t,n){e.exports=n.p+"assets/img/following.91648535.png"},343:function(e,t,n){e.exports=n.p+"assets/img/channels.46dd81ad.png"},344:function(e,t,n){e.exports=n.p+"assets/img/scenarios.1001ca42.png"},345:function(e,t,n){e.exports=n.p+"assets/img/improvement.d734fc97.png"},346:function(e,t,n){e.exports=n.p+"assets/img/help_stage.48323436.png"},347:function(e,t,n){e.exports=n.p+"assets/img/support.0802e859.png"},380:function(e,t,n){"use strict";n.r(t);var o=n(4),a=Object(o.a)({},(function(){var e=this,t=e._self._c;return 
t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("We released a user survey earlier this year to learn about who our users are, how they use Cadence, and how we can help them. It was shared from our "),t("a",{attrs:{href:"https://uber-cadence.slack.com/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack workspace"),t("OutboundLink")],1),e._v(", "),t("a",{attrs:{href:"https://cadenceworkflow.io",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadenceworkflow.io"),t("OutboundLink")],1),e._v(" Blog and "),t("a",{attrs:{href:"https://www.linkedin.com/company/cadenceworkflow/",target:"_blank",rel:"noopener noreferrer"}},[e._v("LinkedIn"),t("OutboundLink")],1),e._v(". After collecting the feedback, we wanted to share the results with our community. Thank you everyone for filling it out! Your feedback is invaluable and it helps us shape our roadmap for the future.")]),e._v(" "),t("p",[e._v("Here are some highlights in text and you can check out the visuals to get more details:")]),e._v(" "),t("p",[t("img",{attrs:{src:n(338),alt:"using.png"}})]),e._v(" "),t("p",[t("img",{attrs:{src:n(339),alt:"job_role.png"}})]),e._v(" "),t("p",[e._v("Most of the people who replied to our survey were engineers who were already using Cadence, actively evaluating, or migrating from a similar technology. This was exciting to hear! Some of you have contacted us to learn more about benchmarks, scale, and ideal use cases. We will share more guidelines about this but until then, feel free to contact us over our Slack workspace for guidance.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(340),alt:"scale.png"}})]),e._v(" "),t("p",[e._v("The scale our users operating Cadence varies from thousands to billions of workflows per month. It was exciting to see it being used in both small and large scale companies.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(341),alt:"time_zone.png"}})]),e._v(" "),t("p",[e._v("Most survey responders were from Europe compared to any other place. This is in-line with the Cadence team growing its presence in Europe. Users from different places also contacted us to contribute to Cadence as a follow up to the survey. We will start putting up-for-grabs and new-starter tasks on Github. Several of them wanted to meet with a Zoom call and to discuss their use cases and best practices. As the Cadence team has presence in both the EU and the US, we welcome all our users to contact us anytime. Slack is the fastest way to reach us.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(342),alt:"following.png"}})]),e._v(" "),t("p",[t("img",{attrs:{src:n(343),alt:"channels.png"}})]),e._v(" "),t("p",[e._v("Cadence is followed in "),t("a",{attrs:{href:"https://uber-cadence.slack.com/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" the most, then "),t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Github"),t("OutboundLink")],1),e._v(" and "),t("a",{attrs:{href:"https://www.linkedin.com/company/cadenceworkflow/",target:"_blank",rel:"noopener noreferrer"}},[e._v("LinkedIn"),t("OutboundLink")],1),e._v(". We are the most active in Slack and we plan to be more active in other mediums as well.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(344),alt:"scenarios.png"}}),e._v("\nAll of our main use cases were used across the board. 
While we mentioned the most common cases, several others were mentioned as a comment: enhanced timers, leader election etc.")]),e._v(" "),t("p",[e._v("We found out that Cadence has been used in several science communities. Some of them were using community built clients and were asking if we are going to support more languages. We are planning to take ownership of the Python and Javascript/Typescript clients and support them officially.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(345),alt:"improvement.png"}})]),e._v(" "),t("p",[e._v("Documentation is by far what our users wanted improvements on. We are revamping our documentation soon and there will be major changes on our website soon.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(346),alt:"help_stage.png"}})]),e._v(" "),t("p",[e._v("Other requests were about observability, debuggability, operability, and usability. These areas have been our main focus this year and we are planning to release updates and blogs about them.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(347),alt:"support.png"}})]),e._v(" "),t("p",[e._v("We noticed most of our users need help once a month or more. While we welcome questions and discussions over the mediums mentioned above, we plan to make more public posts about the common issues using our blog, StackOverflow, LinkedIn, or Twitter.")]),e._v(" "),t("p",[e._v("Many users wanted to hear more from Cadence about the roadmap and its growth. Our posts about these will be released soon. Expect more posts about upcoming features, investments, scale, and community updates. Follow us at "),t("a",{attrs:{href:"https://www.linkedin.com/company/cadenceworkflow/",target:"_blank",rel:"noopener noreferrer"}},[e._v("LinkedIn"),t("OutboundLink")],1),e._v(" for such updates.")]),e._v(" "),t("p",[e._v("Our users are interested in learning more about guidelines, capacity expectations in on-prem and in managed solutions. While we have been providing feedback per user basis before, we plan to release more generic guidelines with our observability updates mentioned above.")]),e._v(" "),t("p",[e._v("We also would like to thank our community for the increased interest and engagement with us! Cadence has been more active in different mediums (LinkedIn, Slack, blog, etc.) this year. In the first quarter, we observed that our user base and activities has almost doubled (+96% and +90% respectively) through both new and returning users. 
Based on such immediate positive reactions, we will keep increasing our community investments in different channels.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[12],{338:function(e,t,n){e.exports=n.p+"assets/img/using.9bd0e215.png"},339:function(e,t,n){e.exports=n.p+"assets/img/job_role.eb8d055a.png"},340:function(e,t,n){e.exports=n.p+"assets/img/scale.d95347cf.png"},341:function(e,t,n){e.exports=n.p+"assets/img/time_zone.9f0a17fe.png"},342:function(e,t,n){e.exports=n.p+"assets/img/following.91648535.png"},343:function(e,t,n){e.exports=n.p+"assets/img/channels.46dd81ad.png"},344:function(e,t,n){e.exports=n.p+"assets/img/scenarios.1001ca42.png"},345:function(e,t,n){e.exports=n.p+"assets/img/improvement.d734fc97.png"},346:function(e,t,n){e.exports=n.p+"assets/img/help_stage.48323436.png"},347:function(e,t,n){e.exports=n.p+"assets/img/support.0802e859.png"},381:function(e,t,n){"use strict";n.r(t);var o=n(4),a=Object(o.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("We released a user survey earlier this year to learn about who our users are, how they use Cadence, and how we can help them. It was shared from our "),t("a",{attrs:{href:"https://uber-cadence.slack.com/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack workspace"),t("OutboundLink")],1),e._v(", "),t("a",{attrs:{href:"https://cadenceworkflow.io",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadenceworkflow.io"),t("OutboundLink")],1),e._v(" Blog and "),t("a",{attrs:{href:"https://www.linkedin.com/company/cadenceworkflow/",target:"_blank",rel:"noopener noreferrer"}},[e._v("LinkedIn"),t("OutboundLink")],1),e._v(". After collecting the feedback, we wanted to share the results with our community. Thank you everyone for filling it out! Your feedback is invaluable and it helps us shape our roadmap for the future.")]),e._v(" "),t("p",[e._v("Here are some highlights in text and you can check out the visuals to get more details:")]),e._v(" "),t("p",[t("img",{attrs:{src:n(338),alt:"using.png"}})]),e._v(" "),t("p",[t("img",{attrs:{src:n(339),alt:"job_role.png"}})]),e._v(" "),t("p",[e._v("Most of the people who replied to our survey were engineers who were already using Cadence, actively evaluating, or migrating from a similar technology. This was exciting to hear! Some of you have contacted us to learn more about benchmarks, scale, and ideal use cases. We will share more guidelines about this but until then, feel free to contact us over our Slack workspace for guidance.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(340),alt:"scale.png"}})]),e._v(" "),t("p",[e._v("The scale our users operating Cadence varies from thousands to billions of workflows per month. It was exciting to see it being used in both small and large scale companies.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(341),alt:"time_zone.png"}})]),e._v(" "),t("p",[e._v("Most survey responders were from Europe compared to any other place. This is in-line with the Cadence team growing its presence in Europe. Users from different places also contacted us to contribute to Cadence as a follow up to the survey. We will start putting up-for-grabs and new-starter tasks on Github. Several of them wanted to meet with a Zoom call and to discuss their use cases and best practices. As the Cadence team has presence in both the EU and the US, we welcome all our users to contact us anytime. 
Slack is the fastest way to reach us.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(342),alt:"following.png"}})]),e._v(" "),t("p",[t("img",{attrs:{src:n(343),alt:"channels.png"}})]),e._v(" "),t("p",[e._v("Cadence is followed in "),t("a",{attrs:{href:"https://uber-cadence.slack.com/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" the most, then "),t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Github"),t("OutboundLink")],1),e._v(" and "),t("a",{attrs:{href:"https://www.linkedin.com/company/cadenceworkflow/",target:"_blank",rel:"noopener noreferrer"}},[e._v("LinkedIn"),t("OutboundLink")],1),e._v(". We are the most active in Slack and we plan to be more active in other mediums as well.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(344),alt:"scenarios.png"}}),e._v("\nAll of our main use cases were used across the board. While we mentioned the most common cases, several others were mentioned as a comment: enhanced timers, leader election etc.")]),e._v(" "),t("p",[e._v("We found out that Cadence has been used in several science communities. Some of them were using community built clients and were asking if we are going to support more languages. We are planning to take ownership of the Python and Javascript/Typescript clients and support them officially.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(345),alt:"improvement.png"}})]),e._v(" "),t("p",[e._v("Documentation is by far what our users wanted improvements on. We are revamping our documentation soon and there will be major changes on our website soon.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(346),alt:"help_stage.png"}})]),e._v(" "),t("p",[e._v("Other requests were about observability, debuggability, operability, and usability. These areas have been our main focus this year and we are planning to release updates and blogs about them.")]),e._v(" "),t("p",[t("img",{attrs:{src:n(347),alt:"support.png"}})]),e._v(" "),t("p",[e._v("We noticed most of our users need help once a month or more. While we welcome questions and discussions over the mediums mentioned above, we plan to make more public posts about the common issues using our blog, StackOverflow, LinkedIn, or Twitter.")]),e._v(" "),t("p",[e._v("Many users wanted to hear more from Cadence about the roadmap and its growth. Our posts about these will be released soon. Expect more posts about upcoming features, investments, scale, and community updates. Follow us at "),t("a",{attrs:{href:"https://www.linkedin.com/company/cadenceworkflow/",target:"_blank",rel:"noopener noreferrer"}},[e._v("LinkedIn"),t("OutboundLink")],1),e._v(" for such updates.")]),e._v(" "),t("p",[e._v("Our users are interested in learning more about guidelines, capacity expectations in on-prem and in managed solutions. While we have been providing feedback per user basis before, we plan to release more generic guidelines with our observability updates mentioned above.")]),e._v(" "),t("p",[e._v("We also would like to thank our community for the increased interest and engagement with us! Cadence has been more active in different mediums (LinkedIn, Slack, blog, etc.) this year. In the first quarter, we observed that our user base and activities has almost doubled (+96% and +90% respectively) through both new and returning users. 
Based on such immediate positive reactions, we will keep increasing our community investments in different channels.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/21.76308eb9.js b/assets/js/21.acead1c4.js similarity index 99% rename from assets/js/21.76308eb9.js rename to assets/js/21.acead1c4.js index f02e66bc7..303ea5a43 100644 --- a/assets/js/21.76308eb9.js +++ b/assets/js/21.acead1c4.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[21],{350:function(e,t,a){e.exports=a.p+"assets/img/workflow.fd077b31.png"},351:function(e,t,a){e.exports=a.p+"assets/img/cadence-benefits.316e2e82.png"},393:function(e,t,a){"use strict";a.r(t);var i=a(4),s=Object(i.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h2",{attrs:{id:"introduction"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#introduction"}},[e._v("#")]),e._v(" Introduction")]),e._v(" "),t("p",[e._v("If you haven’t heard about Cadence, this section is for you. In a short description, Cadence is a code-driven workflow orchestration engine. The definition itself may not tell enough, so it would help splitting it into three parts:")]),e._v(" "),t("ul",[t("li",[e._v("What’s a workflow? (everyone has a different definition)")]),e._v(" "),t("li",[e._v("Why does it matter to be code-driven?")]),e._v(" "),t("li",[e._v("Benefits of Cadence")])]),e._v(" "),t("h3",{attrs:{id:"what-is-a-workflow"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#what-is-a-workflow"}},[e._v("#")]),e._v(" What is a Workflow?")]),e._v(" "),t("p",[t("img",{attrs:{src:a(350),alt:"workflow.png"}})]),e._v(" "),t("p",[e._v("In the simplest definition, it is “a multi-step execution”. Step here represents individual operations that are a little heavier than small in-process function calls. Although they are not limited to those: it could be a separate service call, processing a large dataset, map-reduce, thread sleep, scheduling next run, waiting for an external input, starting a sub workflow etc. It’s anything a user thinks as a single unit of logic in their code. Those steps often have dependencies among themselves. Some steps, including the very first step, might require external triggers (e.g. button click) or schedules. In the more broader meaning, any multi-step function or service is a workflow in principle.")]),e._v(" "),t("p",[e._v("While the above is a more correct way to define workflows, specialized workflows are more widely known: such as data pipelines, directed acyclic graphs, state machines, cron jobs, (micro)service orchestration, etc. This is why typically everyone has a different workflow meaning in mind. Specialized workflows also have simplified interfaces such as UI, configs or a DSL (domain specific language) to make it easy to express the workflow definition.")]),e._v(" "),t("h3",{attrs:{id:"code-driven-workflows"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#code-driven-workflows"}},[e._v("#")]),e._v(" Code-Driven Workflows")]),e._v(" "),t("p",[e._v("Over time, any workflow interface evolves to support more scenarios. For any non-code (UI, config, DSL) technology, this means more APIs, concepts and tooling. However, eventually, the technology’s capabilities will be limited by its interface itself. 
Otherwise the interface will get more complicated to operate.")]),e._v(" "),t("p",[e._v("What happens here is users love the seamless way of creating workflow applications and try to fit more scenarios into it. Natural user tendency is to be able to write any program with such simplicity and confidence.")]),e._v(" "),t("p",[e._v("Given this natural evolution of workflow requirements, it’s better to have a code-driven workflow orchestration engine that can meet any future needs with its powerful expressiveness. On top of this, it is ideal if the interface is seamless, where engineers learn as little as possible and change almost nothing in their local code to write a distributed and durable workflow code. This would virtually remove any limitation and enable implementing any service as a workflow. This is what Cadence aims for.")]),e._v(" "),t("h3",{attrs:{id:"benefits"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#benefits"}},[e._v("#")]),e._v(" Benefits")]),e._v(" "),t("p",[t("img",{attrs:{src:a(351),alt:"cadence-benefits.png"}})]),e._v(" "),t("p",[e._v("With Cadence, many overheads that need to be built for any well-supported service come for free. Here are some highlights (see "),t("a",{attrs:{href:"http://cadenceworkflow.io",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadenceworkflow.io"),t("OutboundLink")],1),e._v("):")]),e._v(" "),t("ul",[t("li",[e._v("Disaster recovery is supported by default through data replication and failovers")]),e._v(" "),t("li",[e._v("Strong multi tenancy support in Cadence clusters. Capacity and traffic management.")]),e._v(" "),t("li",[e._v("Users can use Cadence APIs to start and interact with their workflows instead of writing new APIs for them")]),e._v(" "),t("li",[e._v("They can schedule their workflows (distributed cron, scheduled start) or any step in their workflows")]),e._v(" "),t("li",[e._v("They have tooling to get updates or cancel their workflows.")]),e._v(" "),t("li",[e._v("Cadence comes with default metrics and logging support so users already get great insights about their workflows without implementing any observability tooling.")]),e._v(" "),t("li",[e._v("Cadence has a web UI where users can list and filter their workflows, inspect workflow/activity inputs and outputs.")]),e._v(" "),t("li",[e._v("They can scale their service just like true stateless services even though their workflows maintain a certain state.")]),e._v(" "),t("li",[e._v("Behavior on failure modes can easily be configured with a few lines, providing high reliability.")]),e._v(" "),t("li",[e._v("With Cadence testing capabilities, they can write unit tests or test against production data to prevent backward incompatibility issues.")]),e._v(" "),t("li",[e._v("…")])]),e._v(" "),t("h2",{attrs:{id:"project-support"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#project-support"}},[e._v("#")]),e._v(" Project Support")]),e._v(" "),t("h3",{attrs:{id:"team"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#team"}},[e._v("#")]),e._v(" Team")]),e._v(" "),t("p",[e._v("Today the Cadence team comprises 26 people. We have people working from Uber’s US offices (Seattle, San Francisco and Sunnyvale) as well as Europe offices (Aarhus-DK and Amsterdam-NL).")]),e._v(" "),t("h3",{attrs:{id:"community"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community"}},[e._v("#")]),e._v(" Community")]),e._v(" "),t("p",[e._v("Cadence is an actively built open source project. 
We invest in both our internal and open source community ("),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(", "),t("a",{attrs:{href:"https://github.com/uber/cadence/issues",target:"_blank",rel:"noopener noreferrer"}},[e._v("Github"),t("OutboundLink")],1),e._v("), responding to new features and enhancements.")]),e._v(" "),t("h3",{attrs:{id:"scale"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#scale"}},[e._v("#")]),e._v(" Scale")]),e._v(" "),t("p",[e._v("It’s one of the most popular platforms at Uber executing ~100K workflow updates per second. There are about 30 different Cadence clusters, several of which serve hundreds of domains. There are ~1000 domains (use cases) varying from tier 0 (most critical) to tier 5 scenarios.")]),e._v(" "),t("h3",{attrs:{id:"managed-solutions"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#managed-solutions"}},[e._v("#")]),e._v(" Managed Solutions")]),e._v(" "),t("p",[e._v("While Uber doesn’t officially sell a managed Cadence solution, there are companies (e.g. "),t("a",{attrs:{href:"https://www.instaclustr.com/platform/managed-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Instaclustr"),t("OutboundLink")],1),e._v(") in our community that we work closely with selling Managed Cadence. Due to efficiency investments and other factors, it’s significantly cheaper than its competitors. It can be run in users’ on-prem machines or their cloud service of choice. Pricing is defined based on allocated hosts instead of number of requests so users can get more with the same resources by utilizing multi-tenant clusters.")]),e._v(" "),t("h2",{attrs:{id:"after-v1-release"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#after-v1-release"}},[e._v("#")]),e._v(" After V1 Release")]),e._v(" "),t("p",[e._v("Last year, around this time we announced "),t("a",{attrs:{href:"https://www.uber.com/blog/announcing-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence V1"),t("OutboundLink")],1),e._v(" and shared our roadmap. In this section we will talk about updates since then. At a high level, you will notice that we continue investing in high reliability and efficiency while also developing new features.")]),e._v(" "),t("h3",{attrs:{id:"frequent-releases"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frequent-releases"}},[e._v("#")]),e._v(" Frequent Releases")]),e._v(" "),t("p",[e._v("We announced plans to make more frequent releases last year and started making more frequent releases. Today we aim to release biweekly and sometimes release as frequently as weekly. About the format, we listened to our community and heard about having too frequent releases potentially being painful. Therefore, we decided to increment the patch version with releases while incrementing the minor version close to quarterly. This helped us ship much more robust releases and improved our reliability. Here are some highlights:")]),e._v(" "),t("h3",{attrs:{id:"zonal-isolation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#zonal-isolation"}},[e._v("#")]),e._v(" Zonal Isolation")]),e._v(" "),t("p",[e._v("Cadence clusters have already been regionally isolated until this change. However, in the cloud, inter-zone communications matter as they are more expensive and their latencies are higher. Zones can individually have problems without impacting other cloud zones. 
In a regional architecture, a single zone problem might impact every request; however, with zonal isolation traffic from a zone with issues can easily be failed over to other zones, eliminating its impact on the whole cluster. Therefore, we implemented zonal isolation keeping domain traffic inside a single zone to help improve efficiency and reliability.")]),e._v(" "),t("h3",{attrs:{id:"narrowing-blast-radius"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#narrowing-blast-radius"}},[e._v("#")]),e._v(" Narrowing Blast Radius")]),e._v(" "),t("p",[e._v("When there are issues in a Cadence cluster, it’s often from a single misbehaving workflow. When this happens the whole domain or the cluster could have had issues until the specific workflow is addressed. With this change, we are able to contain the issue only to the offending workflow without impacting others. This is the narrowest blast radius possible.")]),e._v(" "),t("h3",{attrs:{id:"async-apis"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#async-apis"}},[e._v("#")]),e._v(" Async APIs")]),e._v(" "),t("p",[e._v("At Uber, there are many batch work streams that run a high number of workflows (thousands to millions) at the same time causing bottlenecks for Cadence clusters, causing noisy neighbor issues. This is because StartWorkflow and SignalWorkflow APIs are synchronous, which means when Cadence acks the user requests are successfully saved in their workflow history.")]),e._v(" "),t("p",[e._v("Even after successful initiations, users would then need to deal with high concurrency. This often means constant worker cache thrashing, followed by history rebuilds at every update, increasing workflow execution complexity to O(n^2) from O(n). Alternatively, they would need to quickly scale out and down their service hosts in a very short amount of time to avoid this.")]),e._v(" "),t("p",[e._v("When we took a step back and analyzed such scenarios, we realized that users simply wanted to “complete N workflows (jobs) in K time”. The guarantees around starts and signals were not really important for their use cases. Therefore, we implemented async versions of our sync API, by which we can control the consumption rate, guaranteeing the fastest execution with no disruption in the cluster.")]),e._v(" "),t("p",[e._v("Later this year, we plan to expand this feature to cron workflows and timers as well.")]),e._v(" "),t("h3",{attrs:{id:"pinot-as-visibility-store"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#pinot-as-visibility-store"}},[e._v("#")]),e._v(" Pinot as Visibility Store")]),e._v(" "),t("p",[t("a",{attrs:{href:"https://pinot.apache.org/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Apache Pinot"),t("OutboundLink")],1),e._v(" is becoming popular due to its cost efficient nature. Several teams reported significant savings by changing their observability storage to Pinot. Cadence now has a Pinot plugin for its visibility store. We are still rolling out this change. Latencies and cost savings will be shared later.")]),e._v(" "),t("h3",{attrs:{id:"code-coverage"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#code-coverage"}},[e._v("#")]),e._v(" Code Coverage")]),e._v(" "),t("p",[e._v("We have received many requests from our community to actively contribute to our codebase, especially after our V1 release. While we have been already collaborating with some companies, this is a challenge with individuals who are just learning about Cadence. 
One of the main reasons was to avoid bugs that can be introduced.")]),e._v(" "),t("p",[e._v("While Cadence has many integration tests, its unit test coverage was lower than desired. With better unit test coverage we can catch changes that break previous logic and prevent them getting into the main branch. Our team covered additional 50K+ lines in various Cadence repos. We hope to bring our code coverage to 85%+ by the end of year so we can welcome such inquiries a lot easier.")]),e._v(" "),t("h3",{attrs:{id:"replayer-improvements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#replayer-improvements"}},[e._v("#")]),e._v(" Replayer Improvements")]),e._v(" "),t("p",[e._v("This is still an ongoing project. As mentioned in our V1 release, we are revisiting some core parts of Cadence where less-than-ideal architectural decisions were made in the past. Replayer/shadower is one of such parts. We have been working on improving its precision, eliminating false negatives and positives.")]),e._v(" "),t("h3",{attrs:{id:"global-rate-limiters"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#global-rate-limiters"}},[e._v("#")]),e._v(" Global Rate Limiters")]),e._v(" "),t("p",[e._v("Cadence rate limiters are equally distributed across zones and hosts. However, when the user's traffic is skewed, rate limits can get activated even though the user has more capacity. To avoid this, we built global rate limiters. This will make rate limits much more predictable and capacity management a lot easier.")]),e._v(" "),t("h3",{attrs:{id:"regular-failover-drills"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#regular-failover-drills"}},[e._v("#")]),e._v(" Regular Failover Drills")]),e._v(" "),t("p",[e._v("Cadence has been performing monthly regional and zonal failover drills to ensure its failover operations are working properly in case we need it. We are failing over hundreds of domains at the same time to validate the scale of this operation, capacity elasticity and correctness of workflows.")]),e._v(" "),t("h3",{attrs:{id:"cadence-web-v4"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-web-v4"}},[e._v("#")]),e._v(" Cadence Web v4")]),e._v(" "),t("p",[e._v("We are migrating Cadence web from Vue.js to React.js to use a more modern infrastructure and to have better feature velocity. We are about 70% complete with this migration and hope to release the new version of it soon.")]),e._v(" "),t("h3",{attrs:{id:"code-review-time-non-determinism-checks"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#code-review-time-non-determinism-checks"}},[e._v("#")]),e._v(" Code Review Time Non-determinism Checks")]),e._v(" "),t("p",[e._v("(This is an internal-only feature that we hope to release soon) Cadence non-determinism errors and versioning were common pain points for our customers. There are available tools but they require ongoing effort to validate. We have built a tool that generates a shadower test with a single line command (one time only operation) and continuously validates any code change against production data.")]),e._v(" "),t("p",[e._v("This feature reduced the detect-and-fix time from days/weeks to minutes. Just by launching this feature to the domains with the most non-determinism errors, the number of related incidents reduced by 40%. We have already blocked 500+ diffs that would potentially impact production negatively. 
This boosted our users’ confidence in using Cadence.")]),e._v(" "),t("h3",{attrs:{id:"domain-reports"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#domain-reports"}},[e._v("#")]),e._v(" Domain Reports")]),e._v(" "),t("p",[e._v("(This is an internal-only feature that we hope to release soon) We are able to detect potential issues (bugs, antipatterns, inefficiencies, failures) with domains upon manual investigation. We have automated this process and now generate reports for each domain. This information can be accessed historically (to see the progression over time) and on-demand (to see the current state). This has already driven domain reliability and efficiency improvements.")]),e._v(" "),t("p",[e._v("This feature and above are at MVP level where we plan to generalize, expand and release for open source soon. In the V1 release, we have mentioned that we would build certain features internally first to be able to have enough velocity, to see where they are going and to make breaking changes until it’s mature.")]),e._v(" "),t("h3",{attrs:{id:"client-based-migrations"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#client-based-migrations"}},[e._v("#")]),e._v(" Client Based Migrations")]),e._v(" "),t("p",[e._v("With 30 clusters and ~1000 domains in production, migrating a domain from a cluster to another became a somewhat frequent operation for Cadence. While this feature is mostly automated, we would like to fully automate it to a level that this would be a single click or command operation. Client based migrations (as opposed to server based ones) give us big flexibility that we can have migrations from many to many environments at the same time. Each migration happens in isolation without impacting any other domain or the cluster.")]),e._v(" "),t("p",[e._v("This is an ongoing project where remaining parts are migrating long running workflows faster and seamless technology to technology migrations even if the “from-technology” is not Cadence in the first place. There are many users that migrated from Cadence-like or different technologies to Cadence so we hope to remove the repeating overhead for such users.")]),e._v(" "),t("h2",{attrs:{id:"roadmap-next-year"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#roadmap-next-year"}},[e._v("#")]),e._v(" Roadmap (Next Year)")]),e._v(" "),t("p",[e._v("Our priorities for next year look similar with reliability, efficiency, and new features as our focus. We have seen significant improvements especially in our users’ reliability and efficiency on top of the improvements in our servers. This both reduces operational load on our users and makes Cadence one step closer to being a standard way to build services. Here is a short list of what's coming over the next 12 months:")]),e._v(" "),t("h3",{attrs:{id:"database-efficiency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#database-efficiency"}},[e._v("#")]),e._v(" Database efficiency")]),e._v(" "),t("p",[e._v("We are increasing our investment in improving Cadence’s database usage. Even though Cadence’s cost looks a lot better compared to the same family of technologies, it can still be significantly improved by eliminating certain bottlenecks coming from its original design.")]),e._v(" "),t("h3",{attrs:{id:"helm-charts"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#helm-charts"}},[e._v("#")]),e._v(" Helm Charts")]),e._v(" "),t("p",[e._v("We are grateful to the Cadence community for introducing and maintaining our Helm charts for operating Cadence clusters. 
We are taking its ownership so it can be officially released and tested. We expect to release this in 2024.")]),e._v(" "),t("h3",{attrs:{id:"dashboard-templates"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#dashboard-templates"}},[e._v("#")]),e._v(" Dashboard Templates")]),e._v(" "),t("p",[e._v("During our tech talks, demos and user talks, we have received inquiries about what metrics care about. We plan to release templates for our dashboards so our community would look at a similar picture.")]),e._v(" "),t("h3",{attrs:{id:"client-v2-modernization"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#client-v2-modernization"}},[e._v("#")]),e._v(" Client V2 Modernization")]),e._v(" "),t("p",[e._v("As we announced last year that we plan to make breaking changes to significantly improve our interfaces, we are working on modernizing our client interface.")]),e._v(" "),t("h3",{attrs:{id:"higher-parallelization-and-prioritization-in-task-processing"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#higher-parallelization-and-prioritization-in-task-processing"}},[e._v("#")]),e._v(" Higher Parallelization and Prioritization in Task Processing")]),e._v(" "),t("p",[e._v("In an effort to have better domain prioritization in multitenant Cadence clusters, we are improving our task processing with higher parallelization and better prioritization. This is a lot better model than just having domains with defined limits. We expect to provide more resources to high priority domains during their peak hours while allowing low priority domains to consume much bigger resources than allocated during quiet times.")]),e._v(" "),t("h3",{attrs:{id:"timer-and-cron-burst-handling"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#timer-and-cron-burst-handling"}},[e._v("#")]),e._v(" Timer and Cron Burst Handling")]),e._v(" "),t("p",[e._v("After addressing start and signal burst scenarios, we are continuing with bursty timers and cron jobs. Many users set their schedules and timers for the same second with the intention of being able to finish N jobs within a certain amount of time. Current scheduling design isn’t friendly for such intents and high loads can cause temporary starvation in the cluster. By introducing better batch scheduling support, clusters can continue with no disruption while timers are processed in the most efficient way.")]),e._v(" "),t("h3",{attrs:{id:"high-zonal-skew-handling"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#high-zonal-skew-handling"}},[e._v("#")]),e._v(" High zonal skew handling")]),e._v(" "),t("p",[e._v("For users operating in their own cloud and having multiple independent zones in every region, zonal skews can be a problem and can create unnecessary bottlenecks when Zonal Isolation feature is enabled. We are working on addressing such issues to improve task matching across zones when skew is detected.")]),e._v(" "),t("h3",{attrs:{id:"tasklist-improvements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#tasklist-improvements"}},[e._v("#")]),e._v(" Tasklist Improvements")]),e._v(" "),t("p",[e._v("When a user scenario grows, there are many knobs that need to be manually adjusted. 
We would like to automatically partition and smartly forward tasks to improve tasklist efficiency significantly to avoid backlogs, timeouts and hot shards.")]),e._v(" "),t("h3",{attrs:{id:"shard-movement-assignment-improvements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#shard-movement-assignment-improvements"}},[e._v("#")]),e._v(" Shard Movement/Assignment Improvements")]),e._v(" "),t("p",[e._v("Cadence shard movements are based on consistent hash and this can be a limiting factor for many different reasons. Certain hosts can end up getting unlucky by having many shards, or having heavy shards. During deployments we might observe a much higher number of shard movements than desired, which reduces the availability. With improved shard movements and assignments we can have more homogenous load among hosts while also having a minimum amount of shard movements during deployments with much better availability.")]),e._v(" "),t("h3",{attrs:{id:"worker-heartbeats"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#worker-heartbeats"}},[e._v("#")]),e._v(" Worker Heartbeats")]),e._v(" "),t("p",[e._v("Today, there’s no worker liveliness tracking in Cadence. Instead, task or activity heartbeat timeouts are used to reassign tasks to different workers. For latency sensitive users this can become a big disruption. For long activities without heartbeats, this can cause big delays. This feature is to eliminate depending on manual timeout or heartbeat configs to reassign tasks by tracking if workers are still healthy. This feature will also enable so many other new efficiency and reliability features we would like to get to in the future.")]),e._v(" "),t("h3",{attrs:{id:"domain-and-workflow-diagnostics"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#domain-and-workflow-diagnostics"}},[e._v("#")]),e._v(" Domain and Workflow Diagnostics")]),e._v(" "),t("p",[e._v("Probably the two most common user questions are “What’s wrong with my domain?” and “What’s wrong with my workflow?”. Today, diagnosing what happened and what could be wrong isn’t that easy apart from some basic cases. We are working on tools that would run diagnostics on workflows and domains to point out things that might potentially be wrong with public runbook links attached. This feature will not only help diagnose what is wrong with our workflows and domains but will also help fix them.")]),e._v(" "),t("h3",{attrs:{id:"self-serve-operations"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#self-serve-operations"}},[e._v("#")]),e._v(" Self Serve Operations")]),e._v(" "),t("p",[e._v("Certain Cadence operations are performed through admin CLI operations. However, these should be able to be done via Cadence UI by users. Admins shouldn’t need to be involved in every step or the checks they validate should be able to be automated. This is what the initiative is about including domain registration, auth/authz onboarding or adding new search attributes but it’s not limited to these operations.")]),e._v(" "),t("h3",{attrs:{id:"cost-estimation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cost-estimation"}},[e._v("#")]),e._v(" Cost Estimation")]),e._v(" "),t("p",[e._v("One big question we receive when users are onboarding to Cadence is “How much will this cost me?”. This is not an easy question to answer since data and traffic load can be quite different. We plan to automate this process to help users understand how much resources they will need. 
Especially in multi-tenant clusters, this will help users understand how much room they still have in their clusters and how much the new scenario will consume.")]),e._v(" "),t("h3",{attrs:{id:"domain-reports-continue"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#domain-reports-continue"}},[e._v("#")]),e._v(" Domain Reports (continue)")]),e._v(" "),t("p",[e._v("We plan to release this internal feature to open source as soon as possible. On top of presenting this data on built-in Cadence surfaces (web, CLI. etc.) we will create APIs to make it integratable with deployment systems, user service UIs, periodic reports and any other service that would like to consume.")]),e._v(" "),t("h3",{attrs:{id:"non-determinism-detection-improvements-continue"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#non-determinism-detection-improvements-continue"}},[e._v("#")]),e._v(" Non-determinism Detection Improvements (continue)")]),e._v(" "),t("p",[e._v("We have seen great reliability improvements and reduction in incidents with this feature on the user side last year. We continue to invest in this feature and make it available in open source as soon as possible.")]),e._v(" "),t("h3",{attrs:{id:"domain-migrations-continue"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#domain-migrations-continue"}},[e._v("#")]),e._v(" Domain Migrations (continue)")]),e._v(" "),t("p",[e._v("In the next year, we plan to finish our seamless client based migration to be able to safely migrate domains from one cluster to another, one technology (even if it’s not Cadence) to another and one cloud solution to another. There are only a few features left to achieve this.")]),e._v(" "),t("h2",{attrs:{id:"community-2"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-2"}},[e._v("#")]),e._v(" Community")]),e._v(" "),t("p",[e._v("Do you want to hear more about Cadence? Do you need help with your set-up or usage? Are you evaluating your options? Do you want to contribute? Feel free to join our community and reach out to us.")]),e._v(" "),t("p",[e._v("Slack: "),t("a",{attrs:{href:"https://uber-cadence.slack.com/",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://uber-cadence.slack.com/"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Github: "),t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber/cadence"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Since last year, we have been contacted by various companies to take on bigger projects on the Cadence project. As we have been investing in code coverage and refactoring Cadence for a cleaner codebase, this will be a lot easier now. Let us know if you have project ideas to contribute or if you’d like to pick something we already planned.")]),e._v(" "),t("p",[e._v("Our monthly community meetings are still ongoing, too. That is the best place to get heard and be involved in our decision-making process. Let us know so we can send you an invite. We are also working on a broader governing model to open up this project to more people. 
Stay tuned for updates on this topic!")])])}),[],!1,null,null,null);t.default=s.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[21],{350:function(e,t,a){e.exports=a.p+"assets/img/workflow.fd077b31.png"},351:function(e,t,a){e.exports=a.p+"assets/img/cadence-benefits.316e2e82.png"},392:function(e,t,a){"use strict";a.r(t);var i=a(4),s=Object(i.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h2",{attrs:{id:"introduction"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#introduction"}},[e._v("#")]),e._v(" Introduction")]),e._v(" "),t("p",[e._v("If you haven’t heard about Cadence, this section is for you. In a short description, Cadence is a code-driven workflow orchestration engine. The definition itself may not tell enough, so it helps to split it into three parts:")]),e._v(" "),t("ul",[t("li",[e._v("What’s a workflow? (everyone has a different definition)")]),e._v(" "),t("li",[e._v("Why does it matter to be code-driven?")]),e._v(" "),t("li",[e._v("Benefits of Cadence")])]),e._v(" "),t("h3",{attrs:{id:"what-is-a-workflow"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#what-is-a-workflow"}},[e._v("#")]),e._v(" What is a Workflow?")]),e._v(" "),t("p",[t("img",{attrs:{src:a(350),alt:"workflow.png"}})]),e._v(" "),t("p",[e._v("In the simplest definition, it is “a multi-step execution”. A step here represents an individual operation that is a little heavier than a small in-process function call, though steps are not limited to that: a step could be a separate service call, processing a large dataset, map-reduce, a thread sleep, scheduling the next run, waiting for an external input, starting a sub-workflow, etc. It’s anything a user thinks of as a single unit of logic in their code. Those steps often have dependencies among themselves. Some steps, including the very first step, might require external triggers (e.g. a button click) or schedules. In the broader meaning, any multi-step function or service is a workflow in principle.")]),e._v(" "),t("p",[e._v("While the above is a more correct way to define workflows, specialized workflows are more widely known, such as data pipelines, directed acyclic graphs, state machines, cron jobs, (micro)service orchestration, etc. This is why everyone typically has a different meaning of “workflow” in mind. Specialized workflows also have simplified interfaces such as a UI, configs or a DSL (domain-specific language) to make it easy to express the workflow definition.")]),e._v(" "),t("h3",{attrs:{id:"code-driven-workflows"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#code-driven-workflows"}},[e._v("#")]),e._v(" Code-Driven Workflows")]),e._v(" "),t("p",[e._v("Over time, any workflow interface evolves to support more scenarios. For any non-code (UI, config, DSL) technology, this means more APIs, concepts and tooling. Eventually, however, the technology’s capabilities will be limited by the interface itself, or the interface will have to become more complicated to operate.")]),e._v(" "),t("p",[e._v("What happens here is that users love the seamless way of creating workflow applications and try to fit more scenarios into it. The natural tendency of users is to want to write any program with such simplicity and confidence.")]),e._v(" "),t("p",[e._v("Given this natural evolution of workflow requirements, it’s better to have a code-driven workflow orchestration engine that can meet any future needs with its powerful expressiveness. 
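To make “code-driven” concrete, here is a minimal sketch of a workflow written with the Cadence Go client. It is illustrative only: the function names, inputs and timeouts are made up, and a real application would also register these with a worker.

```go
package sample

import (
	"context"
	"time"

	"go.uber.org/cadence/workflow"
)

// A step implemented as an activity: a plain function that may call another
// service, crunch data, etc. Cadence retries it per the configured policy.
func ComposeGreetingActivity(ctx context.Context, name string) (string, error) {
	return "Hello, " + name + "!", nil
}

// The workflow stitches steps together as ordinary code. Cadence persists its
// progress, so the function transparently survives process crashes and restarts.
func GreetingWorkflow(ctx workflow.Context, name string) (string, error) {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	var greeting string
	if err := workflow.ExecuteActivity(ctx, ComposeGreetingActivity, name).Get(ctx, &greeting); err != nil {
		return "", err
	}

	// A durable timer: the workflow can sleep for hours or days without
	// holding a thread or being lost on restart.
	if err := workflow.Sleep(ctx, 24*time.Hour); err != nil {
		return "", err
	}
	return greeting, nil
}
```

The steps stay plain functions, so the "interface" is just the programming language itself.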
On top of this, it is ideal if the interface is seamless, where engineers learn as little as possible and change almost nothing in their local code to write distributed and durable workflow code. This would virtually remove any limitation and enable implementing any service as a workflow. This is what Cadence aims for.")]),e._v(" "),t("h3",{attrs:{id:"benefits"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#benefits"}},[e._v("#")]),e._v(" Benefits")]),e._v(" "),t("p",[t("img",{attrs:{src:a(351),alt:"cadence-benefits.png"}})]),e._v(" "),t("p",[e._v("With Cadence, many overheads that need to be built for any well-supported service come for free. Here are some highlights (see "),t("a",{attrs:{href:"http://cadenceworkflow.io",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadenceworkflow.io"),t("OutboundLink")],1),e._v("):")]),e._v(" "),t("ul",[t("li",[e._v("Disaster recovery is supported by default through data replication and failovers")]),e._v(" "),t("li",[e._v("Strong multi-tenancy support in Cadence clusters, including capacity and traffic management.")]),e._v(" "),t("li",[e._v("Users can use Cadence APIs to start and interact with their workflows instead of writing new APIs for them")]),e._v(" "),t("li",[e._v("They can schedule their workflows (distributed cron, scheduled start) or any step in their workflows")]),e._v(" "),t("li",[e._v("They have tooling to get updates on or cancel their workflows.")]),e._v(" "),t("li",[e._v("Cadence comes with default metrics and logging support so users already get great insights about their workflows without implementing any observability tooling.")]),e._v(" "),t("li",[e._v("Cadence has a web UI where users can list and filter their workflows, and inspect workflow/activity inputs and outputs.")]),e._v(" "),t("li",[e._v("They can scale their service just like true stateless services even though their workflows maintain a certain state.")]),e._v(" "),t("li",[e._v("Behavior on failure modes can easily be configured with a few lines, providing high reliability.")]),e._v(" "),t("li",[e._v("With Cadence testing capabilities, they can write unit tests or test against production data to prevent backward-incompatibility issues.")]),e._v(" "),t("li",[e._v("…")])]),e._v(" "),t("h2",{attrs:{id:"project-support"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#project-support"}},[e._v("#")]),e._v(" Project Support")]),e._v(" "),t("h3",{attrs:{id:"team"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#team"}},[e._v("#")]),e._v(" Team")]),e._v(" "),t("p",[e._v("Today, the Cadence team comprises 26 people. We have people working from Uber’s US offices (Seattle, San Francisco and Sunnyvale) as well as its European offices (Aarhus-DK and Amsterdam-NL).")]),e._v(" "),t("h3",{attrs:{id:"community"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community"}},[e._v("#")]),e._v(" Community")]),e._v(" "),t("p",[e._v("Cadence is an actively built open source project. 
We invest in both our internal and open source community ("),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(", "),t("a",{attrs:{href:"https://github.com/uber/cadence/issues",target:"_blank",rel:"noopener noreferrer"}},[e._v("Github"),t("OutboundLink")],1),e._v("), responding to new feature and enhancement requests.")]),e._v(" "),t("h3",{attrs:{id:"scale"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#scale"}},[e._v("#")]),e._v(" Scale")]),e._v(" "),t("p",[e._v("It’s one of the most popular platforms at Uber, executing ~100K workflow updates per second. There are about 30 different Cadence clusters, several of which serve hundreds of domains. There are ~1000 domains (use cases) varying from tier 0 (most critical) to tier 5 scenarios.")]),e._v(" "),t("h3",{attrs:{id:"managed-solutions"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#managed-solutions"}},[e._v("#")]),e._v(" Managed Solutions")]),e._v(" "),t("p",[e._v("While Uber doesn’t officially sell a managed Cadence solution, there are companies in our community (e.g. "),t("a",{attrs:{href:"https://www.instaclustr.com/platform/managed-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Instaclustr"),t("OutboundLink")],1),e._v(") that we work closely with who sell Managed Cadence. Due to efficiency investments and other factors, it’s significantly cheaper than its competitors. It can be run on users’ on-prem machines or their cloud service of choice. Pricing is defined based on allocated hosts instead of the number of requests, so users can get more out of the same resources by utilizing multi-tenant clusters.")]),e._v(" "),t("h2",{attrs:{id:"after-v1-release"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#after-v1-release"}},[e._v("#")]),e._v(" After V1 Release")]),e._v(" "),t("p",[e._v("Last year, around this time we announced "),t("a",{attrs:{href:"https://www.uber.com/blog/announcing-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence V1"),t("OutboundLink")],1),e._v(" and shared our roadmap. In this section we will talk about updates since then. At a high level, you will notice that we continue investing in high reliability and efficiency while also developing new features.")]),e._v(" "),t("h3",{attrs:{id:"frequent-releases"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frequent-releases"}},[e._v("#")]),e._v(" Frequent Releases")]),e._v(" "),t("p",[e._v("Last year we announced plans to make more frequent releases, and we have followed through. Today we aim to release biweekly, and sometimes release as frequently as weekly. Regarding the format, we listened to our community and heard that overly frequent releases could be painful. Therefore, we decided to increment the patch version with regular releases while incrementing the minor version roughly quarterly. This helped us ship much more robust releases and improved our reliability. Here are some highlights:")]),e._v(" "),t("h3",{attrs:{id:"zonal-isolation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#zonal-isolation"}},[e._v("#")]),e._v(" Zonal Isolation")]),e._v(" "),t("p",[e._v("Cadence clusters had already been regionally isolated before this change. However, in the cloud, inter-zone communications matter as they are more expensive and their latencies are higher. Zones can individually have problems without impacting other cloud zones. 
In a regional architecture, a single zone problem might impact every request; however, with zonal isolation, traffic from a zone with issues can easily be failed over to other zones, eliminating its impact on the whole cluster. Therefore, we implemented zonal isolation, keeping domain traffic inside a single zone to help improve efficiency and reliability.")]),e._v(" "),t("h3",{attrs:{id:"narrowing-blast-radius"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#narrowing-blast-radius"}},[e._v("#")]),e._v(" Narrowing Blast Radius")]),e._v(" "),t("p",[e._v("When there are issues in a Cadence cluster, it’s often from a single misbehaving workflow. When this happens, the whole domain or even the whole cluster can suffer until the specific workflow is addressed. With this change, we are able to contain the issue to the offending workflow alone, without impacting others. This is the narrowest blast radius possible.")]),e._v(" "),t("h3",{attrs:{id:"async-apis"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#async-apis"}},[e._v("#")]),e._v(" Async APIs")]),e._v(" "),t("p",[e._v("At Uber, there are many batch work streams that run a high number of workflows (thousands to millions) at the same time, creating bottlenecks for Cadence clusters and causing noisy neighbor issues. This is because the StartWorkflow and SignalWorkflow APIs are synchronous, which means that by the time Cadence acks, the user requests have been successfully saved in the workflow history.")]),e._v(" "),t("p",[e._v("Even after successful initiations, users would then need to deal with high concurrency. This often means constant worker cache thrashing, followed by history rebuilds at every update, increasing workflow execution complexity from O(n) to O(n^2). Alternatively, they would need to scale their service hosts out and back down in a very short amount of time to avoid this.")]),e._v(" "),t("p",[e._v("When we took a step back and analyzed such scenarios, we realized that users simply wanted to “complete N workflows (jobs) in K time”. The guarantees around starts and signals were not really important for their use cases. Therefore, we implemented async versions of our sync APIs, through which we can control the consumption rate, guaranteeing the fastest execution with no disruption in the cluster.")]),e._v(" "),t("p",[e._v("Later this year, we plan to expand this feature to cron workflows and timers as well.")]),e._v(" "),t("h3",{attrs:{id:"pinot-as-visibility-store"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#pinot-as-visibility-store"}},[e._v("#")]),e._v(" Pinot as Visibility Store")]),e._v(" "),t("p",[t("a",{attrs:{href:"https://pinot.apache.org/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Apache Pinot"),t("OutboundLink")],1),e._v(" is becoming popular due to its cost-efficient nature. Several teams reported significant savings by changing their observability storage to Pinot. Cadence now has a Pinot plugin for its visibility store. We are still rolling out this change; latencies and cost savings will be shared later.")]),e._v(" "),t("h3",{attrs:{id:"code-coverage"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#code-coverage"}},[e._v("#")]),e._v(" Code Coverage")]),e._v(" "),t("p",[e._v("We have received many requests from our community to actively contribute to our codebase, especially after our V1 release. While we have already been collaborating with some companies, this is a challenge for individuals who are just learning about Cadence. 
One of the main reasons was the need to avoid bugs that new contributions could introduce.")]),e._v(" "),t("p",[e._v("While Cadence has many integration tests, its unit test coverage was lower than desired. With better unit test coverage we can catch changes that break previous logic and prevent them from getting into the main branch. Our team covered an additional 50K+ lines in various Cadence repos. We hope to bring our code coverage to 85%+ by the end of the year so we can welcome such inquiries much more easily.")]),e._v(" "),t("h3",{attrs:{id:"replayer-improvements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#replayer-improvements"}},[e._v("#")]),e._v(" Replayer Improvements")]),e._v(" "),t("p",[e._v("This is still an ongoing project. As mentioned in our V1 release, we are revisiting some core parts of Cadence where less-than-ideal architectural decisions were made in the past. The replayer/shadower is one such part. We have been working on improving its precision, eliminating false negatives and positives.")]),e._v(" "),t("h3",{attrs:{id:"global-rate-limiters"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#global-rate-limiters"}},[e._v("#")]),e._v(" Global Rate Limiters")]),e._v(" "),t("p",[e._v("Cadence rate limiters are equally distributed across zones and hosts. However, when the user's traffic is skewed, rate limits can be triggered even though the user still has capacity. To avoid this, we built global rate limiters. This will make rate limits much more predictable and capacity management a lot easier.")]),e._v(" "),t("h3",{attrs:{id:"regular-failover-drills"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#regular-failover-drills"}},[e._v("#")]),e._v(" Regular Failover Drills")]),e._v(" "),t("p",[e._v("Cadence has been performing monthly regional and zonal failover drills to ensure its failover operations work properly in case they are needed. We fail over hundreds of domains at the same time to validate the scale of this operation, capacity elasticity and the correctness of workflows.")]),e._v(" "),t("h3",{attrs:{id:"cadence-web-v4"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-web-v4"}},[e._v("#")]),e._v(" Cadence Web v4")]),e._v(" "),t("p",[e._v("We are migrating Cadence web from Vue.js to React.js to use a more modern infrastructure and to have better feature velocity. We are about 70% complete with this migration and hope to release the new version soon.")]),e._v(" "),t("h3",{attrs:{id:"code-review-time-non-determinism-checks"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#code-review-time-non-determinism-checks"}},[e._v("#")]),e._v(" Code Review Time Non-determinism Checks")]),e._v(" "),t("p",[e._v("(This is an internal-only feature that we hope to release soon.) Cadence non-determinism errors and versioning were common pain points for our customers. Tools are available, but they require ongoing validation effort. We have built a tool that generates a shadower test with a single-line command (a one-time operation) and continuously validates any code change against production data.")]),e._v(" "),t("p",[e._v("This feature reduced the detect-and-fix time from days/weeks to minutes. Just by launching this feature to the domains with the most non-determinism errors, the number of related incidents was reduced by 40%. We have already blocked 500+ diffs that would potentially impact production negatively. 
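While the diff-blocking tool itself is internal, the open-source Go client already ships the replayer it builds on. Below is a hedged sketch of such a test, reusing the GreetingWorkflow sketch from the introduction; the history file name is illustrative, and histories can be exported with the Cadence CLI beforehand.

```go
package sample

import (
	"testing"

	"go.uber.org/cadence/worker"
	"go.uber.org/zap/zaptest"
)

// Replays a previously recorded workflow history against the current workflow
// code. If the code has drifted in a non-deterministic way, the replay fails
// here in CI instead of breaking live workflows in production.
func TestGreetingWorkflowReplay(t *testing.T) {
	replayer := worker.NewWorkflowReplayer()
	replayer.RegisterWorkflow(GreetingWorkflow)

	// greeting_workflow_history.json is an exported event history
	// (the path and file are illustrative).
	err := replayer.ReplayWorkflowHistoryFromJSONFile(
		zaptest.NewLogger(t), "greeting_workflow_history.json")
	if err != nil {
		t.Fatal(err)
	}
}
```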
This boosted our users’ confidence in using Cadence.")]),e._v(" "),t("h3",{attrs:{id:"domain-reports"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#domain-reports"}},[e._v("#")]),e._v(" Domain Reports")]),e._v(" "),t("p",[e._v("(This is an internal-only feature that we hope to release soon.) We are able to detect potential issues (bugs, antipatterns, inefficiencies, failures) with domains upon manual investigation. We have automated this process and now generate reports for each domain. This information can be accessed historically (to see the progression over time) and on demand (to see the current state). This has already driven domain reliability and efficiency improvements.")]),e._v(" "),t("p",[e._v("This feature and the one above are at the MVP level; we plan to generalize, expand and release them to open source soon. In the V1 release, we mentioned that we would build certain features internally first so that we could keep enough velocity, see where they are going and make breaking changes until they mature.")]),e._v(" "),t("h3",{attrs:{id:"client-based-migrations"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#client-based-migrations"}},[e._v("#")]),e._v(" Client Based Migrations")]),e._v(" "),t("p",[e._v("With 30 clusters and ~1000 domains in production, migrating a domain from one cluster to another became a somewhat frequent operation for Cadence. While this feature is mostly automated, we would like to fully automate it to the point where it becomes a single-click or single-command operation. Client-based migrations (as opposed to server-based ones) give us great flexibility: we can run migrations between many environments at the same time. Each migration happens in isolation without impacting any other domain or the cluster.")]),e._v(" "),t("p",[e._v("This is an ongoing project; the remaining parts are migrating long-running workflows faster and making technology-to-technology migrations seamless even if the “from” technology is not Cadence in the first place. There are many users that migrated to Cadence from Cadence-like or other technologies, so we hope to remove this recurring overhead for such users.")]),e._v(" "),t("h2",{attrs:{id:"roadmap-next-year"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#roadmap-next-year"}},[e._v("#")]),e._v(" Roadmap (Next Year)")]),e._v(" "),t("p",[e._v("Our priorities for next year look similar, with reliability, efficiency, and new features as our focus. We have seen significant improvements, especially in our users’ reliability and efficiency, on top of the improvements in our servers. This both reduces the operational load on our users and brings Cadence one step closer to being a standard way to build services. Here is a short list of what's coming over the next 12 months:")]),e._v(" "),t("h3",{attrs:{id:"database-efficiency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#database-efficiency"}},[e._v("#")]),e._v(" Database efficiency")]),e._v(" "),t("p",[e._v("We are increasing our investment in improving Cadence’s database usage. Even though Cadence’s cost looks a lot better compared to the same family of technologies, it can still be significantly improved by eliminating certain bottlenecks coming from its original design.")]),e._v(" "),t("h3",{attrs:{id:"helm-charts"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#helm-charts"}},[e._v("#")]),e._v(" Helm Charts")]),e._v(" "),t("p",[e._v("We are grateful to the Cadence community for introducing and maintaining our Helm charts for operating Cadence clusters. 
We are taking over its ownership so that it can be officially released and tested. We expect to release this in 2024.")]),e._v(" "),t("h3",{attrs:{id:"dashboard-templates"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#dashboard-templates"}},[e._v("#")]),e._v(" Dashboard Templates")]),e._v(" "),t("p",[e._v("During our tech talks, demos and user talks, we have received inquiries about which metrics to care about. We plan to release templates of our dashboards so our community can look at a similar picture.")]),e._v(" "),t("h3",{attrs:{id:"client-v2-modernization"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#client-v2-modernization"}},[e._v("#")]),e._v(" Client V2 Modernization")]),e._v(" "),t("p",[e._v("As we announced last year, we plan to make breaking changes to significantly improve our interfaces, and we are now working on modernizing our client interface.")]),e._v(" "),t("h3",{attrs:{id:"higher-parallelization-and-prioritization-in-task-processing"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#higher-parallelization-and-prioritization-in-task-processing"}},[e._v("#")]),e._v(" Higher Parallelization and Prioritization in Task Processing")]),e._v(" "),t("p",[e._v("In an effort to have better domain prioritization in multitenant Cadence clusters, we are improving our task processing with higher parallelization and better prioritization. This is a much better model than just having domains with predefined limits. We expect to provide more resources to high-priority domains during their peak hours while allowing low-priority domains to consume many more resources than allocated during quiet times.")]),e._v(" "),t("h3",{attrs:{id:"timer-and-cron-burst-handling"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#timer-and-cron-burst-handling"}},[e._v("#")]),e._v(" Timer and Cron Burst Handling")]),e._v(" "),t("p",[e._v("After addressing start and signal burst scenarios, we are continuing with bursty timers and cron jobs. Many users set their schedules and timers for the same second with the intention of being able to finish N jobs within a certain amount of time. The current scheduling design isn’t friendly to such intents, and high loads can cause temporary starvation in the cluster. By introducing better batch scheduling support, clusters can continue with no disruption while timers are processed in the most efficient way.")]),e._v(" "),t("h3",{attrs:{id:"high-zonal-skew-handling"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#high-zonal-skew-handling"}},[e._v("#")]),e._v(" High zonal skew handling")]),e._v(" "),t("p",[e._v("For users operating in their own cloud with multiple independent zones in every region, zonal skews can be a problem and can create unnecessary bottlenecks when the Zonal Isolation feature is enabled. We are working on addressing such issues to improve task matching across zones when skew is detected.")]),e._v(" "),t("h3",{attrs:{id:"tasklist-improvements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#tasklist-improvements"}},[e._v("#")]),e._v(" Tasklist Improvements")]),e._v(" "),t("p",[e._v("When a user scenario grows, there are many knobs that need to be manually adjusted, as sketched below. 
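For reference, a hedged sketch of what such manual tuning looks like today via the server's file-based dynamic config. The key names come from the open-source dynamic config, while the domain, tasklist and values are illustrative:

```yaml
# dynamicconfig/development.yaml (illustrative values)
matching.numTasklistWritePartitions:
- value: 8
  constraints:
    domainName: "sample-domain"
    taskListName: "sample-tasklist"
matching.numTasklistReadPartitions:
- value: 8
  constraints:
    domainName: "sample-domain"
    taskListName: "sample-tasklist"
```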
We would like to automatically partition tasklists and smartly forward tasks, significantly improving tasklist efficiency to avoid backlogs, timeouts and hot shards.")]),e._v(" "),t("h3",{attrs:{id:"shard-movement-assignment-improvements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#shard-movement-assignment-improvements"}},[e._v("#")]),e._v(" Shard Movement/Assignment Improvements")]),e._v(" "),t("p",[e._v("Cadence shard movements are based on consistent hashing, and this can be a limiting factor for many different reasons. Certain hosts can end up getting unlucky by having many shards, or having heavy shards. During deployments we might observe a much higher number of shard movements than desired, which reduces availability. With improved shard movement and assignment we can have a more homogeneous load among hosts while also keeping shard movements during deployments to a minimum, with much better availability.")]),e._v(" "),t("h3",{attrs:{id:"worker-heartbeats"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#worker-heartbeats"}},[e._v("#")]),e._v(" Worker Heartbeats")]),e._v(" "),t("p",[e._v("Today, there’s no worker liveness tracking in Cadence. Instead, task or activity heartbeat timeouts are used to reassign tasks to different workers. For latency-sensitive users this can become a big disruption. For long activities without heartbeats, this can cause big delays. This feature eliminates the dependence on manual timeout or heartbeat configs for reassigning tasks, by tracking whether workers are still healthy. It will also enable many other new efficiency and reliability features we would like to get to in the future.")]),e._v(" "),t("h3",{attrs:{id:"domain-and-workflow-diagnostics"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#domain-and-workflow-diagnostics"}},[e._v("#")]),e._v(" Domain and Workflow Diagnostics")]),e._v(" "),t("p",[e._v("Probably the two most common user questions are “What’s wrong with my domain?” and “What’s wrong with my workflow?”. Today, diagnosing what happened and what could be wrong isn’t that easy apart from some basic cases. We are working on tools that would run diagnostics on workflows and domains to point out things that might potentially be wrong, with links to public runbooks attached. This feature will not only help diagnose what is wrong with workflows and domains but will also help fix them.")]),e._v(" "),t("h3",{attrs:{id:"self-serve-operations"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#self-serve-operations"}},[e._v("#")]),e._v(" Self Serve Operations")]),e._v(" "),t("p",[e._v("Certain Cadence operations are performed through the admin CLI. However, users should be able to perform these through the Cadence UI; admins shouldn’t need to be involved in every step, and the checks admins perform should be automated. This initiative covers operations such as domain registration, auth/authz onboarding and adding new search attributes, but it’s not limited to these.")]),e._v(" "),t("h3",{attrs:{id:"cost-estimation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cost-estimation"}},[e._v("#")]),e._v(" Cost Estimation")]),e._v(" "),t("p",[e._v("One big question we receive when users are onboarding to Cadence is “How much will this cost me?”. This is not an easy question to answer since data and traffic loads can be quite different. We plan to automate this process to help users understand how many resources they will need. 
Especially in multi-tenant clusters, this will help users understand how much room they still have in their clusters and how much the new scenario will consume.")]),e._v(" "),t("h3",{attrs:{id:"domain-reports-continue"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#domain-reports-continue"}},[e._v("#")]),e._v(" Domain Reports (continue)")]),e._v(" "),t("p",[e._v("We plan to release this internal feature to open source as soon as possible. On top of presenting this data on built-in Cadence surfaces (web, CLI, etc.), we will create APIs to make it easy to integrate with deployment systems, user service UIs, periodic reports and any other service that would like to consume this data.")]),e._v(" "),t("h3",{attrs:{id:"non-determinism-detection-improvements-continue"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#non-determinism-detection-improvements-continue"}},[e._v("#")]),e._v(" Non-determinism Detection Improvements (continue)")]),e._v(" "),t("p",[e._v("We have seen great reliability improvements and a reduction in incidents with this feature on the user side last year. We continue to invest in this feature and will make it available in open source as soon as possible.")]),e._v(" "),t("h3",{attrs:{id:"domain-migrations-continue"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#domain-migrations-continue"}},[e._v("#")]),e._v(" Domain Migrations (continue)")]),e._v(" "),t("p",[e._v("In the next year, we plan to finish our seamless client-based migration to be able to safely migrate domains from one cluster to another, from one technology (even if it’s not Cadence) to another, and from one cloud solution to another. There are only a few features left to achieve this.")]),e._v(" "),t("h2",{attrs:{id:"community-2"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-2"}},[e._v("#")]),e._v(" Community")]),e._v(" "),t("p",[e._v("Do you want to hear more about Cadence? Do you need help with your set-up or usage? Are you evaluating your options? Do you want to contribute? Feel free to join our community and reach out to us.")]),e._v(" "),t("p",[e._v("Slack: "),t("a",{attrs:{href:"https://uber-cadence.slack.com/",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://uber-cadence.slack.com/"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Github: "),t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber/cadence"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Since last year, we have been contacted by various companies wanting to take on bigger projects in Cadence. As we have been investing in code coverage and refactoring Cadence for a cleaner codebase, this will be a lot easier now. Let us know if you have project ideas to contribute or if you’d like to pick something we already planned.")]),e._v(" "),t("p",[e._v("Our monthly community meetings are still ongoing, too. That is the best place to get heard and be involved in our decision-making process. Let us know so we can send you an invite. We are also working on a broader governance model to open up this project to more people. 
Stay tuned for updates on this topic!")])])}),[],!1,null,null,null);t.default=s.exports}}]); \ No newline at end of file diff --git a/assets/js/25.7b224d27.js b/assets/js/25.60bd430d.js similarity index 99% rename from assets/js/25.7b224d27.js rename to assets/js/25.60bd430d.js index e1924cbcf..b3ae158f3 100644 --- a/assets/js/25.7b224d27.js +++ b/assets/js/25.60bd430d.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[25],{366:function(e,t,r){"use strict";r.r(t);var n=r(4),a=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h2",{attrs:{id:"background"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#background"}},[e._v("#")]),e._v(" Background")]),e._v(" "),t("p",[e._v("Cadence historically has been using TChannel transport with Thrift encoding for both internal RPC calls and communication with client SDKs. gRPC is becoming a de-facto industry standard with much better adoption and community support. It offers features such as authentication and streaming that are very relevant for Cadence. Moreover, TChannel is being deprecated within Uber itself, pushing an effort for this migration. During the last year we’ve implemented multiple changes in server and SDK that allows users to use gRPC in Cadence, as well as to upgrade their existing Cadence cluster in a backward compatible way. This post tracks the completed work items and our future plans.")]),e._v(" "),t("h2",{attrs:{id:"our-approach"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#our-approach"}},[e._v("#")]),e._v(" Our Approach")]),e._v(" "),t("p",[e._v("With ~500 services using Cadence at Uber and many more open source customers around the world, we had to think about the gRPC transition in a backwards compatible way. We couldn’t simply flip transport and encoding everywhere. Instead we needed to support both protocols as an intermediate step to ensure a smooth transition for our users.")]),e._v(" "),t("p",[e._v("Cadence was using Thrift/TChannel not just for the API with client SDKs. They were also used for RPC calls between internal Cadence server components and also between different data centers. When starting this migration we had a choice of either starting with public APIs first or all the internal things within the server. We chose the latter one, so that we could gain experience and iterate faster within the server without disruption to the clients. With server side done and listening for both protocols, dynamic config flag was exposed to switch traffic internally. It allowed gradual deployment and provided an option to rollback if needed.")]),e._v(" "),t("p",[e._v("The next step - client migration. We have more users for the Go SDK at Uber, that is why we started with it. Current version of SDK exposes Thrift types via public API, therefore we can not remove them without breaking changes. While we have plans for revamped v2 SDK, current users are able to use gRPC as well - with the help of a "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client/blob/v0.18.2/compatibility/thrift2proto.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("translation adapter"),t("OutboundLink")],1),e._v(". 
Migration is underway starting with "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/canary",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadence canary service"),t("OutboundLink")],1),e._v(", and then onboarding user services one by one.")]),e._v(" "),t("p",[e._v("We plan to support TChannel for a few more releases and then eventually drop it in a future.")]),e._v(" "),t("h2",{attrs:{id:"system-overview"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#system-overview"}},[e._v("#")]),e._v(" System overview")]),e._v(" "),t("p",[t("img",{attrs:{src:"/img/grpc-migration.svg",alt:"gRPC migration overview"}})]),e._v(" "),t("ol",[t("li",[e._v("The frontend of "),t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Server"),t("OutboundLink")],1),e._v(" exposes two inbounds for both gRPC and TChannel starting "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.21.0",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.21.0 release"),t("OutboundLink")],1),e._v(". gRPC traffic is being served on a different port that can be configured "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/config/development.yaml#L25",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1),e._v(". For gRPC API we introduced "),t("a",{attrs:{href:"https://github.com/uber/cadence-idl/tree/master/proto/uber/cadence/api/v1",target:"_blank",rel:"noopener noreferrer"}},[e._v("proto IDL"),t("OutboundLink")],1),e._v(" definitions. We will keep TChannel open on frontend for some time to allow gradual client migration.")]),e._v(" "),t("li",[e._v("Starting with "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.21.0",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.21.0"),t("OutboundLink")],1),e._v(" internal components of Cadence Server (history & matching) also started accepting gRPC traffic. Sending traffic via gRPC is off by default and could be enabled with a flag in "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/config/dynamicconfig/development.yaml#L10",target:"_blank",rel:"noopener noreferrer"}},[e._v("dynamic config"),t("OutboundLink")],1),e._v(". Planned for v0.24.0 it will be enabled by default, with an option to opt-out.")]),e._v(" "),t("li",[e._v("Starting with v0.23.0 communication between different Cadence clusters can be switched to gRPC via this "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/0.23.x/config/development_active.yaml#L82",target:"_blank",rel:"noopener noreferrer"}},[e._v("configuration"),t("OutboundLink")],1),e._v(". It is used for replication and request redirection to different DC.")]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber-go/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("Go SDK"),t("OutboundLink")],1),e._v(" has exposed generated Thrift types via its public API. This complicated migration, because switching them to proto types (or rpc agnostic types) means breaking changes. 
Because of this we are pursuing two alternatives:\n"),t("ol",[t("li",[e._v("(A) Short term: starting with "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client/releases/tag/v0.18.2",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.18.2"),t("OutboundLink")],1),e._v(" a "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client/blob/v0.18.2/compatibility/thrift2proto.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("compatibility layer"),t("OutboundLink")],1),e._v(" is available which makes translation between thrift-proto types underneath. It allows using gRPC communication while still using Thrift based API. "),t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/pull/52",target:"_blank",rel:"noopener noreferrer"}},[e._v("Usage example"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("li",[e._v("(B) Long term: we are currently designing v2 SDK that will support gRPC directly. Its API will be RPC agnostic and will include other usability improvements. You can check some ideas that are being considered "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client/issues/1133",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1),e._v(".")])])]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber/cadence-java-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("Java SDK"),t("OutboundLink")],1),e._v(" is currently on TChannel only. Move to gRPC is planned for 2022 H1.")]),e._v(" "),t("li",[e._v("It is now possible to communicate with gRPC from other languages as well. Use "),t("a",{attrs:{href:"https://github.com/uber/cadence-idl/tree/master/proto/uber/cadence/api/v1",target:"_blank",rel:"noopener noreferrer"}},[e._v("proto IDLs"),t("OutboundLink")],1),e._v(" to generate bindings for your preferred language. "),t("a",{attrs:{href:"https://github.com/vytautas-karpavicius/cadence-python",target:"_blank",rel:"noopener noreferrer"}},[e._v("Minimal example"),t("OutboundLink")],1),e._v(" for doing it in python.")]),e._v(" "),t("li",[e._v("WebUI and CLI are currently on TChannel. They are planned to be switched to gRPC for 2022 H1.")])]),e._v(" "),t("h2",{attrs:{id:"migration-steps"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#migration-steps"}},[e._v("#")]),e._v(" Migration steps")]),e._v(" "),t("h3",{attrs:{id:"upgrading-cadence-server"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upgrading-cadence-server"}},[e._v("#")]),e._v(" Upgrading Cadence server")]),e._v(" "),t("p",[e._v("In order to start using gRPC please upgrade Cadence server to "),t("strong",[t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.22.0",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.22.0"),t("OutboundLink")],1),e._v(" or later")]),e._v(".")]),e._v(" "),t("ol",[t("li",[e._v("If you are using an older version (before v0.21.0), make sure to disable internal gRPC communication at first. Needed to ensure that all nodes in the cluster are ready to accept gRPC traffic, before switching it on. This is controlled by the "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/config/dynamicconfig/development.yaml#L10",target:"_blank",rel:"noopener noreferrer"}},[e._v("system.enableGRPCOutbound"),t("OutboundLink")],1),e._v(" flag in dynamic config.")]),e._v(" "),t("li",[e._v("Once deployed, flip system.enableGRPCOutbound to true. 
It will require a cluster restart for setting to take effect.")]),e._v(" "),t("li",[e._v("If you are operating in more than one DC - recommended server version to upgrade to is v0.23.0 or newer. Once individual clusters with gRPC support are deployed, please update "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/0.23.x/config/development_active.yaml#L82",target:"_blank",rel:"noopener noreferrer"}},[e._v("config"),t("OutboundLink")],1),e._v(" to switch cross DC traffic to gRPC. Don’t forget to update ports as well. We also recommend increasing "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/0.23.x/config/development.yaml#L29",target:"_blank",rel:"noopener noreferrer"}},[e._v("grpcMaxMsgSize"),t("OutboundLink")],1),e._v(" to 32MB which is needed to ensure smooth replication. After config change you will need a restart for setting to take effect.")]),e._v(" "),t("li",[e._v("Do not forget that gRPC runs on a different port, therefore you might need to open it on docker containers, firewalls, etc.")])]),e._v(" "),t("h3",{attrs:{id:"upgrading-clients"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upgrading-clients"}},[e._v("#")]),e._v(" Upgrading clients")]),e._v(" "),t("ol",[t("li",[e._v("GoSDK - Follow an "),t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/pull/52",target:"_blank",rel:"noopener noreferrer"}},[e._v("example"),t("OutboundLink")],1),e._v(" to inject Thrift-to-proto adapter during client initialization and update your config to use the gRPC port.")])]),e._v(" "),t("h3",{attrs:{id:"status-at-uber"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#status-at-uber"}},[e._v("#")]),e._v(" Status at Uber")]),e._v(" "),t("ul",[t("li",[e._v("All clusters run gRPC traffic internally for 4 months without any issues.")]),e._v(" "),t("li",[e._v("Cross DC traffic has been switched to gRPC this month.")]),e._v(" "),t("li",[e._v("With internal tooling updated, we are starting to onboard services to use the Go SDK gRPC compatibility layer.")])]),e._v(" "),t("hr"),e._v(" "),t("p",[e._v("Do not hesitate to reach out to us ("),t("a",{attrs:{href:"mailto:cadence-oss@googlegroups.com"}},[e._v("cadence-oss@googlegroups.com")]),e._v(" or "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("slack"),t("OutboundLink")],1),e._v(") if you have any questions.")]),e._v(" "),t("p",[e._v("The Uber Cadence team")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[25],{368:function(e,t,r){"use strict";r.r(t);var n=r(4),a=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h2",{attrs:{id:"background"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#background"}},[e._v("#")]),e._v(" Background")]),e._v(" "),t("p",[e._v("Cadence historically has been using TChannel transport with Thrift encoding for both internal RPC calls and communication with client SDKs. gRPC is becoming a de-facto industry standard with much better adoption and community support. It offers features such as authentication and streaming that are very relevant for Cadence. Moreover, TChannel is being deprecated within Uber itself, pushing an effort for this migration. During the last year we’ve implemented multiple changes in server and SDK that allows users to use gRPC in Cadence, as well as to upgrade their existing Cadence cluster in a backward compatible way. 
This post tracks the completed work items and our future plans.")]),e._v(" "),t("h2",{attrs:{id:"our-approach"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#our-approach"}},[e._v("#")]),e._v(" Our Approach")]),e._v(" "),t("p",[e._v("With ~500 services using Cadence at Uber and many more open source customers around the world, we had to think about the gRPC transition in a backwards compatible way. We couldn’t simply flip transport and encoding everywhere. Instead, we needed to support both protocols as an intermediate step to ensure a smooth transition for our users.")]),e._v(" "),t("p",[e._v("Cadence was using Thrift/TChannel not just for the API with client SDKs; they were also used for RPC calls between internal Cadence server components and between different data centers. When starting this migration, we had a choice: either start with the public APIs first, or with all the internal pieces within the server. We chose the latter, so that we could gain experience and iterate faster within the server without disruption to the clients. With the server side done and listening on both protocols, a dynamic config flag was exposed to switch traffic internally. It allowed gradual deployment and provided an option to roll back if needed.")]),e._v(" "),t("p",[e._v("The next step: client migration. We have more Go SDK users at Uber, which is why we started with it. The current version of the SDK exposes Thrift types via its public API; therefore, we cannot remove them without breaking changes. While we have plans for a revamped v2 SDK, current users are able to use gRPC as well - with the help of a "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client/blob/v0.18.2/compatibility/thrift2proto.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("translation adapter"),t("OutboundLink")],1),e._v(". Migration is underway starting with the "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/canary",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadence canary service"),t("OutboundLink")],1),e._v(", and then onboarding user services one by one.")]),e._v(" "),t("p",[e._v("We plan to support TChannel for a few more releases and then eventually drop it in the future.")]),e._v(" "),t("h2",{attrs:{id:"system-overview"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#system-overview"}},[e._v("#")]),e._v(" System overview")]),e._v(" "),t("p",[t("img",{attrs:{src:"/img/grpc-migration.svg",alt:"gRPC migration overview"}})]),e._v(" "),t("ol",[t("li",[e._v("The frontend of "),t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Server"),t("OutboundLink")],1),e._v(" exposes two inbounds for both gRPC and TChannel starting with the "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.21.0",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.21.0 release"),t("OutboundLink")],1),e._v(". gRPC traffic is being served on a different port that can be configured "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/config/development.yaml#L25",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1),e._v(". For the gRPC API we introduced "),t("a",{attrs:{href:"https://github.com/uber/cadence-idl/tree/master/proto/uber/cadence/api/v1",target:"_blank",rel:"noopener noreferrer"}},[e._v("proto IDL"),t("OutboundLink")],1),e._v(" definitions. 
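As a sketch of what this dual-protocol setup looks like, the frontend section of the server's static config carries both ports. The port numbers below are the development-config defaults and are illustrative:

```yaml
# config/development.yaml (illustrative excerpt)
services:
  frontend:
    rpc:
      port: 7933      # existing TChannel inbound
      grpcPort: 7833  # new gRPC inbound
      bindOnLocalHost: true
```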
We will keep TChannel open on the frontend for some time to allow gradual client migration.")]),e._v(" "),t("li",[e._v("Starting with "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.21.0",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.21.0"),t("OutboundLink")],1),e._v(", internal components of Cadence Server (history & matching) also started accepting gRPC traffic. Sending traffic via gRPC is off by default and can be enabled with a flag in "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/config/dynamicconfig/development.yaml#L10",target:"_blank",rel:"noopener noreferrer"}},[e._v("dynamic config"),t("OutboundLink")],1),e._v(". Starting with v0.24.0, it is planned to be enabled by default, with an option to opt out.")]),e._v(" "),t("li",[e._v("Starting with v0.23.0, communication between different Cadence clusters can be switched to gRPC via this "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/0.23.x/config/development_active.yaml#L82",target:"_blank",rel:"noopener noreferrer"}},[e._v("configuration"),t("OutboundLink")],1),e._v(". It is used for replication and request redirection to a different DC.")]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber-go/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("Go SDK"),t("OutboundLink")],1),e._v(" has exposed generated Thrift types via its public API. This complicated the migration, because switching them to proto types (or RPC-agnostic types) means breaking changes. Because of this, we are pursuing two alternatives:\n"),t("ol",[t("li",[e._v("(A) Short term: starting with "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client/releases/tag/v0.18.2",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.18.2"),t("OutboundLink")],1),e._v(" a "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client/blob/v0.18.2/compatibility/thrift2proto.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("compatibility layer"),t("OutboundLink")],1),e._v(" is available which translates between Thrift and proto types underneath. It allows using gRPC communication while still using the Thrift-based API. "),t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/pull/52",target:"_blank",rel:"noopener noreferrer"}},[e._v("Usage example"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("li",[e._v("(B) Long term: we are currently designing a v2 SDK that will support gRPC directly. Its API will be RPC-agnostic and will include other usability improvements. You can check some ideas that are being considered "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client/issues/1133",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1),e._v(".")])])]),e._v(" "),t("li",[t("a",{attrs:{href:"https://github.com/uber/cadence-java-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("Java SDK"),t("OutboundLink")],1),e._v(" is currently on TChannel only. The move to gRPC is planned for 2022 H1.")]),e._v(" "),t("li",[e._v("It is now possible to communicate with gRPC from other languages as well. Use "),t("a",{attrs:{href:"https://github.com/uber/cadence-idl/tree/master/proto/uber/cadence/api/v1",target:"_blank",rel:"noopener noreferrer"}},[e._v("proto IDLs"),t("OutboundLink")],1),e._v(" to generate bindings for your preferred language. 
"),t("a",{attrs:{href:"https://github.com/vytautas-karpavicius/cadence-python",target:"_blank",rel:"noopener noreferrer"}},[e._v("Minimal example"),t("OutboundLink")],1),e._v(" for doing it in python.")]),e._v(" "),t("li",[e._v("WebUI and CLI are currently on TChannel. They are planned to be switched to gRPC for 2022 H1.")])]),e._v(" "),t("h2",{attrs:{id:"migration-steps"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#migration-steps"}},[e._v("#")]),e._v(" Migration steps")]),e._v(" "),t("h3",{attrs:{id:"upgrading-cadence-server"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upgrading-cadence-server"}},[e._v("#")]),e._v(" Upgrading Cadence server")]),e._v(" "),t("p",[e._v("In order to start using gRPC please upgrade Cadence server to "),t("strong",[t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.22.0",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.22.0"),t("OutboundLink")],1),e._v(" or later")]),e._v(".")]),e._v(" "),t("ol",[t("li",[e._v("If you are using an older version (before v0.21.0), make sure to disable internal gRPC communication at first. Needed to ensure that all nodes in the cluster are ready to accept gRPC traffic, before switching it on. This is controlled by the "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/config/dynamicconfig/development.yaml#L10",target:"_blank",rel:"noopener noreferrer"}},[e._v("system.enableGRPCOutbound"),t("OutboundLink")],1),e._v(" flag in dynamic config.")]),e._v(" "),t("li",[e._v("Once deployed, flip system.enableGRPCOutbound to true. It will require a cluster restart for setting to take effect.")]),e._v(" "),t("li",[e._v("If you are operating in more than one DC - recommended server version to upgrade to is v0.23.0 or newer. Once individual clusters with gRPC support are deployed, please update "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/0.23.x/config/development_active.yaml#L82",target:"_blank",rel:"noopener noreferrer"}},[e._v("config"),t("OutboundLink")],1),e._v(" to switch cross DC traffic to gRPC. Don’t forget to update ports as well. We also recommend increasing "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/0.23.x/config/development.yaml#L29",target:"_blank",rel:"noopener noreferrer"}},[e._v("grpcMaxMsgSize"),t("OutboundLink")],1),e._v(" to 32MB which is needed to ensure smooth replication. 
After the config change, you will need a restart for the setting to take effect.")]),e._v(" "),t("li",[e._v("Do not forget that gRPC runs on a different port; therefore, you might need to open it on Docker containers, firewalls, etc.")])]),e._v(" "),t("h3",{attrs:{id:"upgrading-clients"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upgrading-clients"}},[e._v("#")]),e._v(" Upgrading clients")]),e._v(" "),t("ol",[t("li",[e._v("Go SDK - Follow an "),t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/pull/52",target:"_blank",rel:"noopener noreferrer"}},[e._v("example"),t("OutboundLink")],1),e._v(" to inject the Thrift-to-proto adapter during client initialization, and update your config to use the gRPC port.")])]),e._v(" "),t("h3",{attrs:{id:"status-at-uber"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#status-at-uber"}},[e._v("#")]),e._v(" Status at Uber")]),e._v(" "),t("ul",[t("li",[e._v("All clusters have been running gRPC traffic internally for 4 months without any issues.")]),e._v(" "),t("li",[e._v("Cross-DC traffic has been switched to gRPC this month.")]),e._v(" "),t("li",[e._v("With internal tooling updated, we are starting to onboard services to use the Go SDK gRPC compatibility layer.")])]),e._v(" "),t("hr"),e._v(" "),t("p",[e._v("Do not hesitate to reach out to us ("),t("a",{attrs:{href:"mailto:cadence-oss@googlegroups.com"}},[e._v("cadence-oss@googlegroups.com")]),e._v(" or "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("slack"),t("OutboundLink")],1),e._v(") if you have any questions.")]),e._v(" "),t("p",[e._v("The Uber Cadence team")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/26.7d416bec.js b/assets/js/26.587ba748.js similarity index 98% rename from assets/js/26.7d416bec.js rename to assets/js/26.587ba748.js index e8ca7cd79..659b96852 100644 --- a/assets/js/26.7d416bec.js +++ b/assets/js/26.587ba748.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[26],{367:function(e,t,n){"use strict";n.r(t);var o=n(4),a=Object(o.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to our very first Cadence Community Spotlight update!")]),e._v(" "),t("p",[e._v("This monthly update focuses on news from the wider Cadence community and is all about what you have been doing with Cadence. Do you have an interesting project that uses Cadence? If so then we want to hear from you. Also if you have any news items, blogs, articles, videos or events where Cadence has been mentioned then that is good too. We want to showcase that our community is active and is doing exciting and interesting things.")]),e._v(" "),t("p",[e._v("Please see below for a short round up of things that have happened recently in the community.")]),e._v(" "),t("h2",{attrs:{id:"community-related-office-hours"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-related-office-hours"}},[e._v("#")]),e._v(" Community Related Office Hours")]),e._v(" "),t("p",[e._v("On the 12th January 2022 we held our first Cadence Community Related Office Hours. This session was focused on discussing how we plan and organise things for the community. This includes things such as Code of Conduct, managing social media and making sure we regularly communicate project news and events.")]),e._v(" "),t("p",[e._v("And you can see that this monthly update is the result of the feedback from that session! 
We are happy to receive any feedback or comments you may have. Please remember that this update is for you, so getting your feedback will help us improve it.")]),e._v(" "),t("p",[e._v("We will be planning other Community Related Office Hour sessions, so please watch out for updates.")]),e._v(" "),t("h2",{attrs:{id:"adopting-a-cadence-community-code-of-conduct"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#adopting-a-cadence-community-code-of-conduct"}},[e._v("#")]),e._v(" Adopting a Cadence Community Code of Conduct")]),e._v(" "),t("p",[e._v("Some of you may already know that our community has adopted this version of the "),t("a",{attrs:{href:"https://github.com/uber/.github/blob/dcd96c52f2d1d839208315a2572cf37f48e52e96/CODE_OF_CONDUCT.md",target:"_blank",rel:"noopener noreferrer"}},[e._v("Contributor Covenant"),t("OutboundLink")],1),e._v(" as our Code of Conduct. We want our community to be an open, welcoming and supportive place where everyone can collaborate.")]),e._v(" "),t("h2",{attrs:{id:"recording-from-cadence-meetup-available"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#recording-from-cadence-meetup-available"}},[e._v("#")]),e._v(" Recording from Cadence Meetup Available")]),e._v(" "),t("p",[e._v("Please don't worry if you missed our online "),t("a",{attrs:{href:"https://www.meetup.com/UberEvents/events/281975343/",target:"_blank",rel:"noopener noreferrer"}},[e._v("November Cadence meetup"),t("OutboundLink")],1),e._v(" because the recording is now available. You can find out more details about the meetup and get access to recordings "),t("a",{attrs:{href:"https://www.youtube.com/watch?v=pXgCd1BilLQ",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence-related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://thenewstack.io/meet-cadence-workflow-engine-for-taming-complex-processes/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Meet Cadence: Workflow Engine for Taming Complex Processes"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-workflows-with-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Your Workflows with Cadence"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://www.globenewswire.com/news-release/2021/12/07/2347314/0/en/Instaclustr-Joins-the-Engineering-Team-at-Uber-in-Supporting-Cadence-the-Powerful-Open-Source-Orchestration-Engine.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("Instaclustr Joins the Engineering Team at Uber in Supporting Cadence, the Powerful Open Source Orchestration Engine"),t("OutboundLink")],1)])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-emea-what-is-cadence.html?_ga=2.191041518.510582234.1643223308-2138855655.1638190316",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: What is Cadence? 
And is it right for you?"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("slack"),t("OutboundLink")],1),e._v("#community channel.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[26],{366:function(e,t,n){"use strict";n.r(t);var o=n(4),a=Object(o.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to our very first Cadence Community Spotlight update!")]),e._v(" "),t("p",[e._v("This monthly update focuses on news from the wider Cadence community and is all about what you have been doing with Cadence. Do you have an interesting project that uses Cadence? If so then we want to hear from you. Also if you have any news items, blogs, articles, videos or events where Cadence has been mentioned then that is good too. We want to showcase that our community is active and is doing exciting and interesting things.")]),e._v(" "),t("p",[e._v("Please see below for a short round up of things that have happened recently in the community.")]),e._v(" "),t("h2",{attrs:{id:"community-related-office-hours"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-related-office-hours"}},[e._v("#")]),e._v(" Community Related Office Hours")]),e._v(" "),t("p",[e._v("On the 12th January 2022 we held our first Cadence Community Related Office Hours. This session was focused on discussing how we plan and organise things for the community. This includes things such as Code of Conduct, managing social media and making sure we regularly communicate project news and events.")]),e._v(" "),t("p",[e._v("And you can see that this monthly update is the result of the feedback from that session! We are happy to get any feedback for comments you may have. Please remember that this update is for you so getting your feedback will help us improve it.")]),e._v(" "),t("p",[e._v("We will be planning other Community Related Office Hour sessions so please watch out for updates.")]),e._v(" "),t("h2",{attrs:{id:"adopting-a-cadence-community-code-of-conduct"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#adopting-a-cadence-community-code-of-conduct"}},[e._v("#")]),e._v(" Adopting a Cadence Community Code of Conduct")]),e._v(" "),t("p",[e._v("Some of you may already know that our community has adopted this version of the "),t("a",{attrs:{href:"https://github.com/uber/.github/blob/dcd96c52f2d1d839208315a2572cf37f48e52e96/CODE_OF_CONDUCT.md",target:"_blank",rel:"noopener noreferrer"}},[e._v("Contributor Covenant"),t("OutboundLink")],1),e._v(" as our Code of Conduct. We want our community to be an open, welcoming and supportive place where everyone can collaborate.")]),e._v(" "),t("h2",{attrs:{id:"recording-from-cadence-meetup-available"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#recording-from-cadence-meetup-available"}},[e._v("#")]),e._v(" Recording from Cadence Meetup Available")]),e._v(" "),t("p",[e._v("Please don't worry if you missed our online "),t("a",{attrs:{href:"https://www.meetup.com/UberEvents/events/281975343/",target:"_blank",rel:"noopener noreferrer"}},[e._v("November Cadence meetup"),t("OutboundLink")],1),e._v(" because the recording is now available. 
You can find out more details about the meetup and get access to recordings "),t("a",{attrs:{href:"https://www.youtube.com/watch?v=pXgCd1BilLQ",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence-related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://thenewstack.io/meet-cadence-workflow-engine-for-taming-complex-processes/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Meet Cadence: Workflow Engine for Taming Complex Processes"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-workflows-with-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Your Workflows with Cadence"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://www.globenewswire.com/news-release/2021/12/07/2347314/0/en/Instaclustr-Joins-the-Engineering-Team-at-Uber-in-Supporting-Cadence-the-Powerful-Open-Source-Orchestration-Engine.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("Instaclustr Joins the Engineering Team at Uber in Supporting Cadence, the Powerful Open Source Orchestration Engine"),t("OutboundLink")],1)])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-emea-what-is-cadence.html?_ga=2.191041518.510582234.1643223308-2138855655.1638190316",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: What is Cadence? And is it right for you?"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/27.9b323429.js b/assets/js/27.6a83e764.js similarity index 99% rename from assets/js/27.9b323429.js rename to assets/js/27.6a83e764.js index 0592cba79..d3126d0b6 100644 --- a/assets/js/27.9b323429.js +++ b/assets/js/27.6a83e764.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[27],{368:function(e,t,a){"use strict";a.r(t);var n=a(4),o=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to the Cadence Community Spotlight update!")]),e._v(" "),t("p",[e._v("This is the second in our series of monthly updates focused on the Cadence community and news about what you have been doing with Cadence. We hope that you enjoyed last month's update and are keen to find out what has been happening.")]),e._v(" "),t("p",[e._v("Please see below for a short activity round-up of what has happened recently in the community.")]),e._v(" "),t("h2",{attrs:{id:"announcements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#announcements"}},[e._v("#")]),e._v(" Announcements")]),e._v(" "),t("p",[e._v("Just in case you missed it, the alpha version of the Cadence notification service has been released. 
Details can be found at the following link:\n"),t("a",{attrs:{href:"https://github.com/cadence-oss/cadence-notification/releases/tag/v0.0.1",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Notification Service"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Thanks very much to everyone who worked on this!")]),e._v(" "),t("h2",{attrs:{id:"community-supporting-the-community"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-supporting-the-community"}},[e._v("#")]),e._v(" Community Supporting the Community")]),e._v(" "),t("p",[e._v("During February, 16 questions were posted in the Cadence #support "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel by new Cadence users and existing community members looking for help and guidance. A very big thank you to the following community members who took the time to help others: Ali, David, Tamas Weisz, Liang Mei, Quanzheng Long, peaceChoi, Emrah Seker, Ben Slater and Sathyaraju Sekaran.")]),e._v(" "),t("p",[e._v("It’s great to see that we are supporting each other - and that is exactly what communities do!")]),e._v(" "),t("h2",{attrs:{id:"please-subscribe-to-our-youtube-channel"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#please-subscribe-to-our-youtube-channel"}},[e._v("#")]),e._v(" Please Subscribe to our YouTube Channel")]),e._v(" "),t("p",[e._v("Did you know that we have a YouTube channel where you can find Cadence-related videos and even the recording of our last meetup? Well we do and you can find it here:\n"),t("a",{attrs:{href:"https://www.youtube.com/channel/UC6H9Jsq4ZQ74g8coDgJu9ZA/videos",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence YouTube"),t("OutboundLink")],1),e._v("\nPlease subscribe and let us know what other videos you’d like to see there.")]),e._v(" "),t("h2",{attrs:{id:"help-us-to-make-cadence-even-better"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#help-us-to-make-cadence-even-better"}},[e._v("#")]),e._v(" Help us to Make Cadence even better")]),e._v(" "),t("p",[e._v("Are you interested in helping us improve Cadence? We are always looking for contributors to help share the workload. If you’d like to help then you can start by taking a look at our list of "),t("a",{attrs:{href:"https://github.com/uber/cadence/issues",target:"_blank",rel:"noopener noreferrer"}},[e._v("open issues"),t("OutboundLink")],1),e._v(" on GitHub. We currently have 320 of them that need to be worked on, so if you want to learn more about Cadence and solve some of the reported issues then please take a look and volunteer to fix one.")]),e._v(" "),t("p",[e._v("If you are new to Cadence or you’d like to try something simple then we have some issues labelled as ‘good first issue’. These are a great place to start to get more Cadence experience.")]),e._v(" "),t("h2",{attrs:{id:"cadence-calendar"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-calendar"}},[e._v("#")]),e._v(" Cadence Calendar")]),e._v(" "),t("p",[e._v("We have created a "),t("a",{attrs:{href:"https://calendar.google.com/calendar/embed?src=e6r40gp3c2r01054id7e99dlac%40group.calendar.google.com&ctz=America%2FLos_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence public calendar"),t("OutboundLink")],1),e._v(" where we can highlight events, meetings, webinars, etc. that are planned around Cadence. 
The calendar will soon be available on the "),t("a",{attrs:{href:"https://cadenceworkflow.io/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence website"),t("OutboundLink")],1),e._v(", so please make sure that you check it regularly.\nThis means that you can easily find out if there are any Cadence events planned that you would like to attend.")]),e._v(" "),t("h2",{attrs:{id:"cadence-technical-office-hours"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-technical-office-hours"}},[e._v("#")]),e._v(" Cadence Technical Office Hours")]),e._v(" "),t("p",[e._v("Our second Technical Office Hours event took place on Monday, February 28th at 9AM PST. The main objective was to provide Cadence support, respond to any questions, and to share any knowledge that you have learned. We always encourage community members to come along - and thanks very much to everyone who participated.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence-related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-apache-kafka-microservices-with-cadence-workflows/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Apache Kafka® Microservices With Cadence Workflows"),t("OutboundLink")],1)])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-apac-what-is-cadence.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: What is Cadence? And is it right for you? (APAC)"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[27],{367:function(e,t,a){"use strict";a.r(t);var n=a(4),o=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to the Cadence Community Spotlight update!")]),e._v(" "),t("p",[e._v("This is the second in our series of monthly updates focused on the Cadence community and news about what you have been doing with Cadence. 
We hope that you enjoyed last month's update and are keen to find out what has been happening.")]),e._v(" "),t("p",[e._v("Please see below for a short activity round-up of what has happened recently in the community.")]),e._v(" "),t("h2",{attrs:{id:"announcements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#announcements"}},[e._v("#")]),e._v(" Announcements")]),e._v(" "),t("p",[e._v("Just in case you missed it, the alpha version of the Cadence notification service has been released. Details can be found at the following link:\n"),t("a",{attrs:{href:"https://github.com/cadence-oss/cadence-notification/releases/tag/v0.0.1",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Notification Service"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Thanks very much to everyone who worked on this!")]),e._v(" "),t("h2",{attrs:{id:"community-supporting-the-community"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-supporting-the-community"}},[e._v("#")]),e._v(" Community Supporting the Community")]),e._v(" "),t("p",[e._v("During February, 16 questions were posted in the Cadence #support "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel by new Cadence users and existing community members looking for help and guidance. A very big thank you to the following community members who took the time to help others: Ali, David, Tamas Weisz, Liang Mei, Quanzheng Long, peaceChoi, Emrah Seker, Ben Slater and Sathyaraju Sekaran.")]),e._v(" "),t("p",[e._v("It’s great to see that we are supporting each other - and that is exactly what communities do!")]),e._v(" "),t("h2",{attrs:{id:"please-subscribe-to-our-youtube-channel"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#please-subscribe-to-our-youtube-channel"}},[e._v("#")]),e._v(" Please Subscribe to our YouTube Channel")]),e._v(" "),t("p",[e._v("Did you know that we have a YouTube channel where you can find Cadence-related videos and even the recording of our last meetup? Well we do and you can find it here:\n"),t("a",{attrs:{href:"https://www.youtube.com/channel/UC6H9Jsq4ZQ74g8coDgJu9ZA/videos",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence YouTube"),t("OutboundLink")],1),e._v("\nPlease subscribe and let us know what other videos you’d like to see there.")]),e._v(" "),t("h2",{attrs:{id:"help-us-to-make-cadence-even-better"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#help-us-to-make-cadence-even-better"}},[e._v("#")]),e._v(" Help us to Make Cadence even better")]),e._v(" "),t("p",[e._v("Are you interested in helping us improve Cadence? We are always looking for contributors to help share the workload. If you’d like to help then you can start by taking a look at our list of "),t("a",{attrs:{href:"https://github.com/uber/cadence/issues",target:"_blank",rel:"noopener noreferrer"}},[e._v("open issues"),t("OutboundLink")],1),e._v(" on GitHub. We currently have 320 of them that need to be worked on, so if you want to learn more about Cadence and solve some of the reported issues then please take a look and volunteer to fix one.")]),e._v(" "),t("p",[e._v("If you are new to Cadence or you’d like to try something simple then we have some issues labelled as ‘good first issue’. 
These are a great place to start to get more Cadence experience.")]),e._v(" "),t("h2",{attrs:{id:"cadence-calendar"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-calendar"}},[e._v("#")]),e._v(" Cadence Calendar")]),e._v(" "),t("p",[e._v("We have created a "),t("a",{attrs:{href:"https://calendar.google.com/calendar/embed?src=e6r40gp3c2r01054id7e99dlac%40group.calendar.google.com&ctz=America%2FLos_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence public calendar"),t("OutboundLink")],1),e._v(" where we can highlight events, meetings, webinars, etc. that are planned around Cadence. The calendar will soon be available on the "),t("a",{attrs:{href:"https://cadenceworkflow.io/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence website"),t("OutboundLink")],1),e._v(", so please make sure that you check it regularly.\nThis means that you can easily find out if there are any Cadence events planned that you would like to attend.")]),e._v(" "),t("h2",{attrs:{id:"cadence-technical-office-hours"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-technical-office-hours"}},[e._v("#")]),e._v(" Cadence Technical Office Hours")]),e._v(" "),t("p",[e._v("Our second Technical Office Hours event took place on Monday, February 28th at 9AM PST. The main objective was to provide Cadence support, respond to any questions, and to share any knowledge that you have learned. We always encourage community members to come along - and thanks very much to everyone who participated.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence-related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-apache-kafka-microservices-with-cadence-workflows/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Apache Kafka® Microservices With Cadence Workflows"),t("OutboundLink")],1)])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-apac-what-is-cadence.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: What is Cadence? And is it right for you? 
(APAC)"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file diff --git a/assets/js/27.38f433e1.js b/assets/js/27.99353468.js similarity index 99% rename from assets/js/27.38f433e1.js rename to assets/js/27.99353468.js index a56b144dc..d0f0692d3 100644 --- a/assets/js/27.38f433e1.js +++ b/assets/js/27.99353468.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[27],{332:function(e,a,t){"use strict";t.r(a);var s=t(0),r=Object(s.a)({},(function(){var e=this,a=e._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[a("h1",{attrs:{id:"install-cadence-service-locally"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#install-cadence-service-locally"}},[e._v("#")]),e._v(" Install Cadence Service Locally")]),e._v(" "),a("p",[e._v("To get started with Cadence, you need to set up three components successfully.")]),e._v(" "),a("ul",[a("li",[e._v("A Cadence server hosting dependencies that Cadence relies on such as Cassandra, Elastic Search, etc")]),e._v(" "),a("li",[e._v("A Cadence domain for you workflow application")]),e._v(" "),a("li",[e._v("A Cadence worker service hosting your workflows")])]),e._v(" "),a("h2",{attrs:{id:"_0-prerequisite-install-docker"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#_0-prerequisite-install-docker"}},[e._v("#")]),e._v(" 0. Prerequisite - Install docker")]),e._v(" "),a("p",[e._v("Follow the Docker installation instructions found here: "),a("a",{attrs:{href:"https://docs.docker.com/engine/installation/",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://docs.docker.com/engine/installation/"),a("OutboundLink")],1)]),e._v(" "),a("h2",{attrs:{id:"_1-run-cadence-server-using-docker-compose"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#_1-run-cadence-server-using-docker-compose"}},[e._v("#")]),e._v(" 1. 
Run Cadence Server Using Docker Compose")]),e._v(" "),a("p",[e._v("Download the Cadence docker-compose file:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("\n"),a("span",{pre:!0,attrs:{class:"token function"}},[e._v("curl")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-O")]),e._v(" https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose.yml "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("&&")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[e._v("curl")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-O")]),e._v(" https://raw.githubusercontent.com/uber/cadence/master/docker/prometheus/prometheus.yml\n")])])]),a("p",[e._v("Then start the Cadence service by running:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker-compose")]),e._v(" up\n")])])]),a("p",[e._v("Please keep this process running in the background.")]),e._v(" "),a("h2",{attrs:{id:"_2-register-a-domain-using-the-cli"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#_2-register-a-domain-using-the-cli"}},[e._v("#")]),e._v(" 2. Register a Domain Using the CLI")]),e._v(" "),a("p",[e._v("In a new terminal, create a new domain called "),a("code",[e._v("test-domain")]),e._v(" (or choose whatever name you like) by running:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("=")]),e._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" test-domain domain register "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-rd")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n")])])]),a("p",[e._v("Check that the domain is indeed registered:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("=")]),e._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" test-domain domain describe\nName: test-domain\nDescription:\nOwnerEmail:\nDomainData: map"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("]")]),e._v("\nStatus: REGISTERED\nRetentionInDays: "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\nEmitMetrics: "),a("span",{pre:!0,attrs:{class:"token boolean"}},[e._v("false")]),e._v("\nActiveClusterName: active\nClusters: active\nArchivalStatus: DISABLED\nBad binaries to reset:\n+-----------------+----------+------------+--------+\n"),a("span",{pre:!0,attrs:{class:"token 
operator"}},[e._v("|")]),e._v(" BINARY CHECKSUM "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" OPERATOR "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" START TIME "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" REASON "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v("\n+-----------------+----------+------------+--------+\n+-----------------+----------+------------+--------+\n"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),a("p",[e._v("Please remember the domains you created because they will be used in your worker implementation and Cadence CLI commands.")]),e._v(" "),a("h2",{attrs:{id:"what-s-next"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#what-s-next"}},[e._v("#")]),e._v(" What's Next")]),e._v(" "),a("p",[e._v("So far you've successfully finished two prerequisites to your Cadence application. The next steps are to implement a simple worker service that hosts your workflows and to run your very first hello world Cadence workflow.")]),e._v(" "),a("p",[e._v("Go to "),a("a",{attrs:{href:"/docs/get-started/java-hello-world"}},[e._v("Java HelloWorld")]),e._v(" or "),a("a",{attrs:{href:"/docs/get-started/golang-hello-world"}},[e._v("Golang HelloWorld")]),e._v(".")]),e._v(" "),a("h2",{attrs:{id:"troubleshooting"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#troubleshooting"}},[e._v("#")]),e._v(" Troubleshooting")]),e._v(" "),a("p",[e._v("There can be various reasons that "),a("code",[e._v("docker-compose up")]),e._v(" cannot succeed:")]),e._v(" "),a("ul",[a("li",[e._v("In case of the image being too old, update the docker image by "),a("code",[e._v("docker pull ubercadence/server:master-auto-setup")]),e._v(" and retry")]),e._v(" "),a("li",[e._v("In case of the local docker env is messed up: "),a("code",[e._v("docker system prune --all")]),e._v(" and retry (see "),a("a",{attrs:{href:"https://docs.docker.com/config/pruning/",target:"_blank",rel:"noopener noreferrer"}},[e._v("details about it"),a("OutboundLink")],1),e._v(" )")]),e._v(" "),a("li",[e._v("See logs of different container:\n"),a("ul",[a("li",[e._v("If Cassandra is not able to get up: "),a("code",[e._v("docker logs -f docker_cassandra_1")])]),e._v(" "),a("li",[e._v("If Cadence is not able to get up: "),a("code",[e._v("docker logs -f docker_cadence_1")])]),e._v(" "),a("li",[e._v("If Cadence Web is not able to get up: "),a("code",[e._v("docker logs -f docker_cadence-web_1")])])])])]),e._v(" "),a("p",[e._v("If the above is still not working, "),a("a",{attrs:{href:"https://github.com/uber/cadence/issues/new/choose",target:"_blank",rel:"noopener noreferrer"}},[e._v("open an issue in Server(main) repo"),a("OutboundLink")],1),e._v(".")])])}),[],!1,null,null,null);a.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[27],{334:function(e,a,t){"use strict";t.r(a);var s=t(0),r=Object(s.a)({},(function(){var e=this,a=e._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[a("h1",{attrs:{id:"install-cadence-service-locally"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#install-cadence-service-locally"}},[e._v("#")]),e._v(" Install Cadence Service Locally")]),e._v(" "),a("p",[e._v("To get started with Cadence, you need to set up three components successfully.")]),e._v(" "),a("ul",[a("li",[e._v("A Cadence server hosting dependencies that Cadence relies on such as Cassandra, Elastic Search, etc")]),e._v(" "),a("li",[e._v("A 
Cadence domain for your workflow application")]),e._v(" "),a("li",[e._v("A Cadence worker service hosting your workflows")])]),e._v(" "),a("h2",{attrs:{id:"_0-prerequisite-install-docker"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#_0-prerequisite-install-docker"}},[e._v("#")]),e._v(" 0. Prerequisite - Install Docker")]),e._v(" "),a("p",[e._v("Follow the Docker installation instructions found here: "),a("a",{attrs:{href:"https://docs.docker.com/engine/installation/",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://docs.docker.com/engine/installation/"),a("OutboundLink")],1)]),e._v(" "),a("h2",{attrs:{id:"_1-run-cadence-server-using-docker-compose"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#_1-run-cadence-server-using-docker-compose"}},[e._v("#")]),e._v(" 1. Run Cadence Server Using Docker Compose")]),e._v(" "),a("p",[e._v("Download the Cadence docker-compose file:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("\n"),a("span",{pre:!0,attrs:{class:"token function"}},[e._v("curl")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-O")]),e._v(" https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose.yml "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("&&")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[e._v("curl")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-O")]),e._v(" https://raw.githubusercontent.com/uber/cadence/master/docker/prometheus/prometheus.yml\n")])])]),a("p",[e._v("Then start the Cadence service by running:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker-compose")]),e._v(" up\n")])])]),a("p",[e._v("Please keep this process running in the background.")]),e._v(" "),a("h2",{attrs:{id:"_2-register-a-domain-using-the-cli"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#_2-register-a-domain-using-the-cli"}},[e._v("#")]),e._v(" 2. 
Register a Domain Using the CLI")]),e._v(" "),a("p",[e._v("In a new terminal, create a new domain called "),a("code",[e._v("test-domain")]),e._v(" (or choose whatever name you like) by running:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("=")]),e._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" test-domain domain register "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-rd")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n")])])]),a("p",[e._v("Check that the domain is indeed registered:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("=")]),e._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" test-domain domain describe\nName: test-domain\nDescription:\nOwnerEmail:\nDomainData: map"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("]")]),e._v("\nStatus: REGISTERED\nRetentionInDays: "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\nEmitMetrics: "),a("span",{pre:!0,attrs:{class:"token boolean"}},[e._v("false")]),e._v("\nActiveClusterName: active\nClusters: active\nArchivalStatus: DISABLED\nBad binaries to reset:\n+-----------------+----------+------------+--------+\n"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" BINARY CHECKSUM "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" OPERATOR "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" START TIME "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" REASON "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v("\n+-----------------+----------+------------+--------+\n+-----------------+----------+------------+--------+\n"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),a("p",[e._v("Please remember the domains you created because they will be used in your worker implementation and Cadence CLI commands.")]),e._v(" "),a("h2",{attrs:{id:"what-s-next"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#what-s-next"}},[e._v("#")]),e._v(" What's Next")]),e._v(" "),a("p",[e._v("So far you've successfully finished two prerequisites to your Cadence application. 
The next steps are to implement a simple worker service that hosts your workflows and to run your very first hello world Cadence workflow.")]),e._v(" "),a("p",[e._v("Go to "),a("a",{attrs:{href:"/docs/get-started/java-hello-world"}},[e._v("Java HelloWorld")]),e._v(" or "),a("a",{attrs:{href:"/docs/get-started/golang-hello-world"}},[e._v("Golang HelloWorld")]),e._v(".")]),e._v(" "),a("h2",{attrs:{id:"troubleshooting"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#troubleshooting"}},[e._v("#")]),e._v(" Troubleshooting")]),e._v(" "),a("p",[e._v("There can be various reasons why "),a("code",[e._v("docker-compose up")]),e._v(" does not succeed:")]),e._v(" "),a("ul",[a("li",[e._v("In case the image is too old, update the Docker image with "),a("code",[e._v("docker pull ubercadence/server:master-auto-setup")]),e._v(" and retry")]),e._v(" "),a("li",[e._v("In case the local Docker env is messed up: "),a("code",[e._v("docker system prune --all")]),e._v(" and retry (see "),a("a",{attrs:{href:"https://docs.docker.com/config/pruning/",target:"_blank",rel:"noopener noreferrer"}},[e._v("details about it"),a("OutboundLink")],1),e._v(")")]),e._v(" "),a("li",[e._v("See the logs of the different containers:\n"),a("ul",[a("li",[e._v("If Cassandra is not able to start: "),a("code",[e._v("docker logs -f docker_cassandra_1")])]),e._v(" "),a("li",[e._v("If Cadence is not able to start: "),a("code",[e._v("docker logs -f docker_cadence_1")])]),e._v(" "),a("li",[e._v("If Cadence Web is not able to start: "),a("code",[e._v("docker logs -f docker_cadence-web_1")])])])])]),e._v(" "),a("p",[e._v("If the above is still not working, "),a("a",{attrs:{href:"https://github.com/uber/cadence/issues/new/choose",target:"_blank",rel:"noopener noreferrer"}},[e._v("open an issue in Server(main) repo"),a("OutboundLink")],1),e._v(".")])])}),[],!1,null,null,null);a.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/28.2128df6e.js b/assets/js/28.a2cac47c.js similarity index 99% rename from assets/js/28.2128df6e.js rename to assets/js/28.a2cac47c.js index 6ea41e37a..37bc77a28 100644 --- a/assets/js/28.2128df6e.js +++ b/assets/js/28.a2cac47c.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[28],{334:function(t,a,s){"use strict";s.r(a);var e=s(0),n=Object(e.a)({},(function(){var t=this,a=t._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[a("h1",{attrs:{id:"java-hello-world"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#java-hello-world"}},[t._v("#")]),t._v(" Java Hello World")]),t._v(" "),a("p",[t._v("This section provides step-by-step instructions on how to write and run a HelloWorld with Java.")]),t._v(" "),a("p",[t._v("For complete, ready-to-build samples covering all the key Cadence concepts, go to "),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples",target:"_blank",rel:"noopener noreferrer"}},[t._v("Cadence-Java-Samples"),a("OutboundLink")],1),t._v(".")]),t._v(" "),a("p",[t._v("You can also review "),a("a",{attrs:{href:"/docs/java-client"}},[t._v("Java-Client")]),t._v(" and "),a("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client/latest/index.html",target:"_blank",rel:"noopener noreferrer"}},[t._v("java-docs"),a("OutboundLink")],1),t._v(" for more documentation.")]),t._v(" "),a("h2",{attrs:{id:"include-cadence-java-client-dependency"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#include-cadence-java-client-dependency"}},[t._v("#")]),t._v(" Include Cadence Java Client Dependency")]),t._v(" 
"),a("p",[t._v("Go to the "),a("a",{attrs:{href:"https://mvnrepository.com/artifact/com.uber.cadence/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[t._v("Maven Repository Uber Cadence Java Client Page"),a("OutboundLink")],1),t._v("\nand find the latest version of the library. Include it as a dependency into your Java project. For example if you\nare using Gradle the dependency looks like:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("compile group: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'com.uber.cadence'")]),t._v(", name: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'cadence-client'")]),t._v(", version: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("''")]),t._v("\n")])])]),a("p",[t._v("Also add the following dependencies that cadence-client relies on:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("compile group: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'commons-configuration'")]),t._v(", name: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'commons-configuration'")]),t._v(", version: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'1.9'")]),t._v("\ncompile group: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'ch.qos.logback'")]),t._v(", name: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'logback-classic'")]),t._v(", version: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'1.2.3'")]),t._v("\n")])])]),a("p",[t._v("Make sure that the following code compiles:")]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowMethod")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("org"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("slf4j"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Logger")])]),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(";")]),t._v("\n\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GettingStarted")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Logger")]),t._v(" logger "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("getLogger")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GettingStarted")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("If you are having problems setting up the build files use the\n"),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples",target:"_blank",rel:"noopener noreferrer"}},[t._v("Cadence Java Samples"),a("OutboundLink")],1),t._v(" GitHub repository as a reference.")]),t._v(" "),a("p",[t._v("Also add the following logback config file somewhere in your classpath:")]),t._v(" "),a("div",{staticClass:"language-xml extra-class"},[a("pre",{pre:!0,attrs:{class:"language-xml"}},[a("code",[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("configuration")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("appender")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("name")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v('"')]),t._v("STDOUT"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("ch.qos.logback.core.ConsoleAppender"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token comment"}},[t._v("\x3c!-- encoders are assigned the type\n ch.qos.logback.classic.encoder.PatternLayoutEncoder by default --\x3e")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("encoder")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("pattern")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("logger")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("name")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("io.netty"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("level")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("INFO"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("/>")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("root")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("level")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("INFO"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("<")]),t._v("appender-ref")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("ref")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("STDOUT"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("/>")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n")])])]),a("h2",{attrs:{id:"implement-hello-world-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#implement-hello-world-workflow"}},[t._v("#")]),t._v(" Implement Hello World Workflow")]),t._v(" "),a("p",[t._v("Let's add "),a("code",[t._v("HelloWorldImpl")]),t._v(" with the "),a("code",[t._v("sayHello")]),t._v(' method that just logs the "Hello ..." and returns.')]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowMethod")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("org"),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),t._v("slf4j"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Logger")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GettingStarted")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Logger")]),t._v(" logger "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("getLogger")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GettingStarted")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" 
"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n logger"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("To link the "),a("Term",{attrs:{term:"workflow"}}),t._v(" implementation to the Cadence framework, it should be registered with a "),a("Term",{attrs:{term:"worker"}}),t._v(" that connects to\na Cadence Service. By default the "),a("Term",{attrs:{term:"worker"}}),t._v(" connects to the locally running Cadence service.")],1),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("main")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" args"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),t._v(" workflowClient "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowServiceTChannel")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ClientOptions")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("defaultInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClientOptions")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("newBuilder")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDomain")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Get worker to poll the task list.")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),t._v(" factory "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("workflowClient"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n factory"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("The code is slightly different if you are using client version prior to 3.0.0:")]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token 
keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("main")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" args"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),t._v(" factory "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"test-domain"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorldTaskList"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n factory"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("h2",{attrs:{id:"execute-hello-world-workflow-using-the-cli"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#execute-hello-world-workflow-using-the-cli"}},[t._v("#")]),t._v(" Execute Hello World Workflow using the CLI")]),t._v(" "),a("p",[t._v("Now run the "),a("Term",{attrs:{term:"worker"}}),t._v(" program. 
Following is an example log:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.575 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("for")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("service")]),t._v(" cadence-frontend, LibraryVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.2")]),t._v(".0, FeatureVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1.0")]),t._v(".0\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.671 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("45937")]),t._v("@maxim-C02XD0AAJGH6"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.673 "),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'null'")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("81b8d0ac-ff89-47e8-b842-3dd26337feea"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("No Hello printed. This is expected because a "),a("Term",{attrs:{term:"worker"}}),t._v(" is just a "),a("Term",{attrs:{term:"workflow"}}),t._v(" code host. The "),a("Term",{attrs:{term:"workflow"}}),t._v(" has to be started to execute. 
```bash
> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \"World\"
Started Workflow Id: bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7, run Id: e7c40431-8e23-485b-9649-e8f161219efe
```

The output of the program should change to:

```bash
13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0
13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix='Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"'}, identity=45937@maxim-C02XD0AAJGH6}
13:35:02.673 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix='null'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}
13:40:28.308 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello World!
```
"),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("81b8d0ac-ff89-47e8-b842-3dd26337feea"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":40:28.308 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Let's start another "),a("Term",{attrs:{term:"workflow_execution",show:""}})],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--tasklist")]),t._v(" HelloWorldTaskList "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_type")]),t._v(" HelloWorld::sayHello "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--execution_timeout")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("3600")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Cadence'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nStarted Workflow Id: d2083532-9c68-49ab-90e1-d960175377a7, run Id: 331bfa04-834b-45a7-861e-bcb9f6ddae3e\n')])])]),a("p",[t._v("And the output changed to:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.575 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("for")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("service")]),t._v(" cadence-frontend, LibraryVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.2")]),t._v(".0, FeatureVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1.0")]),t._v(".0\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.671 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token 
operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("45937")]),t._v("@maxim-C02XD0AAJGH6"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.673 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left 
variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'null'")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("81b8d0ac-ff89-47e8-b842-3dd26337feea"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":40:28.308 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":42:34.994 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - Hello Cadence"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("h2",{attrs:{id:"list-workflows-and-workflow-history"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#list-workflows-and-workflow-history"}},[t._v("#")]),t._v(" List Workflows and Workflow History")]),t._v(" "),a("p",[t._v("Let's list our "),a("Term",{attrs:{term:"workflow"}}),t._v(" in the "),a("Term",{attrs:{term:"CLI",show:""}})],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow list\n WORKFLOW TYPE "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" WORKFLOW ID "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" RUN ID "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" START TIME "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" EXECUTION TIME "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" END TIME\n HelloWorld::sayHello "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" d2083532-9c68-49ab-90e1-d960175377a7 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" 331bfa04-834b-45a7-861e-bcb9f6ddae3e "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":42:34 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" 
"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":42:34 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":42:35\n HelloWorld::sayHello "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" e7c40431-8e23-485b-9649-e8f161219efe "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":40:28 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":40:28 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":40:29\n")])])]),a("p",[t._v("Now let's look at the "),a("Term",{attrs:{term:"workflow_execution"}}),t._v(" history:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow showid 1965109f-607f-4b14-a5f2-24399a7b8fa7\n "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(" WorkflowExecutionStarted "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("WorkflowType:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Name:HelloWorld::sayHello"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(",\n TaskList:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Name:HelloWorldTaskList"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(",\n Input:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"World"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n ExecutionStartToCloseTimeoutSeconds:3600,\n TaskStartToCloseTimeoutSeconds:10,\n ContinuedFailureDetails:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n LastCompletionResult:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n Identity:cadence-cli@linuxkit-025000000001,\n Attempt:0,\n FirstDecisionTaskBackoffSeconds:0"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),t._v(" DecisionTaskScheduled "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("TaskList:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Name:HelloWorldTaskList"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(",\n StartToCloseTimeoutSeconds:10,\n Attempt:0"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n 
"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("3")]),t._v(" DecisionTaskStarted "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("ScheduledEventId:2,\n Identity:45937@maxim-C02XD0AAJGH6,\n RequestId:481a14e5-67a4-436e-9a23-7f7fb7f87ef3"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("4")]),t._v(" DecisionTaskCompleted "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("ExecutionContext:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n ScheduledEventId:2,\n StartedEventId:3,\n Identity:45937@maxim-C02XD0AAJGH6"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("5")]),t._v(" WorkflowExecutionCompleted "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Result:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n DecisionTaskCompletedEventId:4"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("Even for such a trivial "),a("Term",{attrs:{term:"workflow"}}),t._v(", the history gives a lot of useful information. For complex "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" this is a really useful tool for production and development troubleshooting. History can be automatically archived to a long-term blob store (for example Amazon S3) upon "),a("Term",{attrs:{term:"workflow"}}),t._v(" completion for compliance, analytical, and troubleshooting purposes.")],1),t._v(" "),a("h2",{attrs:{id:"what-is-next"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#what-is-next"}},[t._v("#")]),t._v(" What is Next")]),t._v(" "),a("p",[t._v("Now you have completed the tutorials. 
You have now completed the tutorial. You can continue to explore the key [concepts](/docs/concepts) in Cadence, and also how to use them with the [Java client](/docs/java-client).
For example if you\nare using Gradle the dependency looks like:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("compile group: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'com.uber.cadence'")]),t._v(", name: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'cadence-client'")]),t._v(", version: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("''")]),t._v("\n")])])]),a("p",[t._v("Also add the following dependencies that cadence-client relies on:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("compile group: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'commons-configuration'")]),t._v(", name: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'commons-configuration'")]),t._v(", version: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'1.9'")]),t._v("\ncompile group: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'ch.qos.logback'")]),t._v(", name: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'logback-classic'")]),t._v(", version: "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'1.2.3'")]),t._v("\n")])])]),a("p",[t._v("Make sure that the following code compiles:")]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowMethod")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("org"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("slf4j"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Logger")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GettingStarted")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n 
"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Logger")]),t._v(" logger "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("getLogger")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GettingStarted")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("If you are having problems setting up the build files use the\n"),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples",target:"_blank",rel:"noopener noreferrer"}},[t._v("Cadence Java Samples"),a("OutboundLink")],1),t._v(" GitHub repository as a reference.")]),t._v(" "),a("p",[t._v("Also add the following logback config file somewhere in your classpath:")]),t._v(" "),a("div",{staticClass:"language-xml extra-class"},[a("pre",{pre:!0,attrs:{class:"language-xml"}},[a("code",[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("configuration")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("appender")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("name")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("STDOUT"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v('"')]),t._v("ch.qos.logback.core.ConsoleAppender"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token comment"}},[t._v("\x3c!-- encoders are assigned the type\n ch.qos.logback.classic.encoder.PatternLayoutEncoder by default --\x3e")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("encoder")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("pattern")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("logger")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("name")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("io.netty"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("level")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("INFO"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("/>")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("root")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("level")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("INFO"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),t._v("appender-ref")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token attr-name"}},[t._v("ref")]),a("span",{pre:!0,attrs:{class:"token attr-value"}},[a("span",{pre:!0,attrs:{class:"token punctuation attr-equals"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')]),t._v("STDOUT"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v('"')])]),t._v(" 
"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("/>")])]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token tag"}},[a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("")])]),t._v("\n")])])]),a("h2",{attrs:{id:"implement-hello-world-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#implement-hello-world-workflow"}},[t._v("#")]),t._v(" Implement Hello World Workflow")]),t._v(" "),a("p",[t._v("Let's add "),a("code",[t._v("HelloWorldImpl")]),t._v(" with the "),a("code",[t._v("sayHello")]),t._v(' method that just logs the "Hello ..." and returns.')]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("com"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("uber"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("cadence"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowMethod")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token import"}},[a("span",{pre:!0,attrs:{class:"token namespace"}},[t._v("org"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("slf4j"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")])]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Logger")])]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token 
class-name"}},[t._v("GettingStarted")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Logger")]),t._v(" logger "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("getLogger")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GettingStarted")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n logger"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),a("span",{pre:!0,attrs:{class:"token 
operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("To link the "),a("Term",{attrs:{term:"workflow"}}),t._v(" implementation to the Cadence framework, it should be registered with a "),a("Term",{attrs:{term:"worker"}}),t._v(" that connects to\na Cadence Service. By default the "),a("Term",{attrs:{term:"worker"}}),t._v(" connects to the locally running Cadence service.")],1),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("main")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" args"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),t._v(" workflowClient "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowServiceTChannel")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ClientOptions")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("defaultInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClientOptions")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDomain")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Get worker to poll the task list.")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),t._v(" factory "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("workflowClient"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n factory"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("The code is slightly different if you are using client version prior to 3.0.0:")]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("main")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" args"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" 
"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),t._v(" factory "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"test-domain"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorldTaskList"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n factory"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("h2",{attrs:{id:"execute-hello-world-workflow-using-the-cli"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#execute-hello-world-workflow-using-the-cli"}},[t._v("#")]),t._v(" Execute Hello World Workflow using the CLI")]),t._v(" "),a("p",[t._v("Now run the "),a("Term",{attrs:{term:"worker"}}),t._v(" program. 
Following is an example log:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.575 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("for")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("service")]),t._v(" cadence-frontend, LibraryVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.2")]),t._v(".0, FeatureVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1.0")]),t._v(".0\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.671 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("45937")]),t._v("@maxim-C02XD0AAJGH6"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.673 "),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'null'")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("81b8d0ac-ff89-47e8-b842-3dd26337feea"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("No Hello printed. This is expected because a "),a("Term",{attrs:{term:"worker"}}),t._v(" is just a "),a("Term",{attrs:{term:"workflow"}}),t._v(" code host. The "),a("Term",{attrs:{term:"workflow"}}),t._v(" has to be started to execute. 
Let's use Cadence "),a("Term",{attrs:{term:"CLI"}}),t._v(" to start the workflow:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--tasklist")]),t._v(" HelloWorldTaskList "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_type")]),t._v(" HelloWorld::sayHello "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--execution_timeout")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("3600")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"World'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nStarted Workflow Id: bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7, run Id: e7c40431-8e23-485b-9649-e8f161219efe\n')])])]),a("p",[t._v("The output of the program should change to:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.575 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("for")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("service")]),t._v(" cadence-frontend, LibraryVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.2")]),t._v(".0, FeatureVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1.0")]),t._v(".0\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.671 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token 
operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("45937")]),t._v("@maxim-C02XD0AAJGH6"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.673 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'null'")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", 
"),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("81b8d0ac-ff89-47e8-b842-3dd26337feea"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":40:28.308 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Let's start another "),a("Term",{attrs:{term:"workflow_execution",show:""}})],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--tasklist")]),t._v(" HelloWorldTaskList "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_type")]),t._v(" HelloWorld::sayHello "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--execution_timeout")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("3600")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Cadence'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nStarted Workflow Id: d2083532-9c68-49ab-90e1-d960175377a7, run Id: 331bfa04-834b-45a7-861e-bcb9f6ddae3e\n')])])]),a("p",[t._v("And the output changed to:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.575 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("for")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("service")]),t._v(" cadence-frontend, LibraryVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.2")]),t._v(".0, FeatureVersion: "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1.0")]),t._v(".0\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.671 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token 
operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("45937")]),t._v("@maxim-C02XD0AAJGH6"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":35:02.673 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("main"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.cadence.internal.worker.Poller - start"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(": Poller"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("options"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PollerOptions"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("maximumPollRateIntervalMilliseconds"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("maximumPollRatePerSecond")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffCoefficient")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2.0")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollBackoffInitialInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT0.2S, "),a("span",{pre:!0,attrs:{class:"token assign-left 
variable"}},[t._v("pollBackoffMaximumInterval")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("PT20S, "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadCount")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("pollThreadNamePrefix")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v("'null'")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(", "),a("span",{pre:!0,attrs:{class:"token assign-left variable"}},[t._v("identity")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("81b8d0ac-ff89-47e8-b842-3dd26337feea"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":40:28.308 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("13")]),t._v(":42:34.994 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - Hello Cadence"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("h2",{attrs:{id:"list-workflows-and-workflow-history"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#list-workflows-and-workflow-history"}},[t._v("#")]),t._v(" List Workflows and Workflow History")]),t._v(" "),a("p",[t._v("Let's list our "),a("Term",{attrs:{term:"workflow"}}),t._v(" in the "),a("Term",{attrs:{term:"CLI",show:""}})],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow list\n WORKFLOW TYPE "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" WORKFLOW ID "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" RUN ID "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" START TIME "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" EXECUTION TIME "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" END TIME\n HelloWorld::sayHello "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" d2083532-9c68-49ab-90e1-d960175377a7 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" 331bfa04-834b-45a7-861e-bcb9f6ddae3e "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":42:34 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" 
"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":42:34 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":42:35\n HelloWorld::sayHello "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" e7c40431-8e23-485b-9649-e8f161219efe "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":40:28 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":40:28 "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("20")]),t._v(":40:29\n")])])]),a("p",[t._v("Now let's look at the "),a("Term",{attrs:{term:"workflow_execution"}}),t._v(" history:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow showid 1965109f-607f-4b14-a5f2-24399a7b8fa7\n "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(" WorkflowExecutionStarted "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("WorkflowType:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Name:HelloWorld::sayHello"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(",\n TaskList:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Name:HelloWorldTaskList"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(",\n Input:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"World"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n ExecutionStartToCloseTimeoutSeconds:3600,\n TaskStartToCloseTimeoutSeconds:10,\n ContinuedFailureDetails:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n LastCompletionResult:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n Identity:cadence-cli@linuxkit-025000000001,\n Attempt:0,\n FirstDecisionTaskBackoffSeconds:0"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),t._v(" DecisionTaskScheduled "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("TaskList:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Name:HelloWorldTaskList"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(",\n StartToCloseTimeoutSeconds:10,\n Attempt:0"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n 
"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("3")]),t._v(" DecisionTaskStarted "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("ScheduledEventId:2,\n Identity:45937@maxim-C02XD0AAJGH6,\n RequestId:481a14e5-67a4-436e-9a23-7f7fb7f87ef3"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("4")]),t._v(" DecisionTaskCompleted "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("ExecutionContext:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n ScheduledEventId:2,\n StartedEventId:3,\n Identity:45937@maxim-C02XD0AAJGH6"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("5")]),t._v(" WorkflowExecutionCompleted "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Result:"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(",\n DecisionTaskCompletedEventId:4"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("Even for such a trivial "),a("Term",{attrs:{term:"workflow"}}),t._v(", the history gives a lot of useful information. For complex "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" this is a really useful tool for production and development troubleshooting. History can be automatically archived to a long-term blob store (for example Amazon S3) upon "),a("Term",{attrs:{term:"workflow"}}),t._v(" completion for compliance, analytical, and troubleshooting purposes.")],1),t._v(" "),a("h2",{attrs:{id:"what-is-next"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#what-is-next"}},[t._v("#")]),t._v(" What is Next")]),t._v(" "),a("p",[t._v("Now you have completed the tutorials. You can continue to explore the key "),a("a",{attrs:{href:"/docs/concepts"}},[t._v("concepts")]),t._v(" in Cadence, and also how to use them with "),a("a",{attrs:{href:"/docs/java-client"}},[t._v("Java Client")])])])}),[],!1,null,null,null);a.default=n.exports}}]); \ No newline at end of file diff --git a/assets/js/29.34f14a1a.js b/assets/js/29.cbde47ed.js similarity index 99% rename from assets/js/29.34f14a1a.js rename to assets/js/29.cbde47ed.js index 5868bf59a..fc4a1b044 100644 --- a/assets/js/29.34f14a1a.js +++ b/assets/js/29.cbde47ed.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[29],{370:function(e,t,o){"use strict";o.r(t);var a=o(4),n=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to our Cadence Community Spotlight update!")]),e._v(" "),t("p",[e._v("This is our monthly blog post series focused on news from in and around the Cadence community.")]),e._v(" "),t("p",[e._v("Please see below for a short activity roundup of what has happened recently in the community.")]),e._v(" "),t("h2",{attrs:{id:"sd-times-names-cadence-open-source-project-of-the-week"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#sd-times-names-cadence-open-source-project-of-the-week"}},[e._v("#")]),e._v(" SD Times Names Cadence Open Source Project of the Week")]),e._v(" "),t("p",[e._v("In April Cadence was named as open source project of the week by the SD Times. Being named gives the project some great publicity and means the project is getting noticed. 
You can find a link to the article in the "),t("em",[e._v("Cadence in the News")]),e._v(" section below.")]),e._v(" "),t("h2",{attrs:{id:"follow-us-on-linkedin-and-twitter"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#follow-us-on-linkedin-and-twitter"}},[e._v("#")]),e._v(" Follow Us on LinkedIn and Twitter!")]),e._v(" "),t("p",[e._v("We have now set up Cadence accounts on "),t("a",{attrs:{href:"https://www.linkedin.com/company/cadenceworkflow/",target:"_blank",rel:"noopener noreferrer"}},[e._v("LinkedIn"),t("OutboundLink")],1),e._v(" and "),t("a",{attrs:{href:"https://twitter.com/cadenceworkflow",target:"_blank",rel:"noopener noreferrer"}},[e._v("Twitter"),t("OutboundLink")],1),e._v(" where you can keep up to date with what is happening in the community. We will be using these social media accounts to share news, articles, stories and links related to Cadence - so please follow us!")]),e._v(" "),t("p",[e._v("And don’t forget to share your news with us. We are looking forward to receiving your feedback and comments. The more we interact - the more we build our community!")]),e._v(" "),t("h2",{attrs:{id:"proposal-to-change-the-way-we-write-workflows"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#proposal-to-change-the-way-we-write-workflows"}},[e._v("#")]),e._v(" Proposal to Change the Way We Write Workflows")]),e._v(" "),t("p",[e._v("If you haven’t seen the proposal from community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/prclqz/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Quanzheng Long"),t("OutboundLink")],1),e._v(" about creating a new way to write Cadence workflows then please take a look: "),t("a",{attrs:{href:"https://github.com/uber/cadence/issues/4785",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber/cadence/issues/4785"),t("OutboundLink")],1),e._v(". He has already received some initial feedback and is currently working on putting together a proof of concept demo to show the community. As soon as we have more news about it - we will let you know!")]),e._v(" "),t("h2",{attrs:{id:"help-us-improve-cadence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#help-us-improve-cadence"}},[e._v("#")]),e._v(" Help Us Improve Cadence")]),e._v(" "),t("p",[e._v("Do you want to help us improve Cadence? We are always looking for contributors so any contribution you can make - however small - is welcome. If you would like to start contributing then please take a look at the list of "),t("a",{attrs:{href:"https://github.com/uber/cadence/issues",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Issues on Github"),t("OutboundLink")],1),e._v(". We have some issues flagged with a tag of "),t("em",[e._v("‘good first issue’")]),e._v(" that would be a great place to start.")]),e._v(" "),t("p",[e._v("Remember that we are not only looking for code contributions but also non coding ones such as documentation improvements so please take a look and select something to work on.")]),e._v(" "),t("h2",{attrs:{id:"next-cadence-technical-office-hours-30th-may-2022"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#next-cadence-technical-office-hours-30th-may-2022"}},[e._v("#")]),e._v(" Next Cadence Technical Office Hours: 30th May 2022")]),e._v(" "),t("p",[e._v("Every month we hold a Technical Office Hours session via Zoom where you can speak directly with some of our Cadence experts. 
If you have a question about Cadence or are facing a particular issue getting it set up then please come along and chat to one of our experts!")]),e._v(" "),t("p",[e._v("Meetings are held on the last Monday of every month so please make sure you mark the dates in your calendars. Our next session will be on the 30th May at 9am PT so hope to see you there!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://sdtimes.com/softwaredev/sd-times-open-source-project-of-the-week-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("SD Times Open Source Project of the Week : Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.youtube.com/watch?v=-f1m5EI4cRo",target:"_blank",rel:"noopener noreferrer"}},[e._v("The New Stack Interview: Meet Cadence: The Open-Source Orchestration Workflow Engine"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://thenewstack.io/instaclustr-adds-managed-cadence-to-its-platform/",target:"_blank",rel:"noopener noreferrer"}},[e._v("The New Stack: Instaclustr Adds Managed Cadence to Its Platform"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 30th May 2022 @ 9am PT"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=n.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[29],{371:function(e,t,o){"use strict";o.r(t);var a=o(4),n=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to our Cadence Community Spotlight update!")]),e._v(" "),t("p",[e._v("This is our monthly blog post series focused on news from in and around the Cadence community.")]),e._v(" "),t("p",[e._v("Please see below for a short activity roundup of what has happened recently in the community.")]),e._v(" "),t("h2",{attrs:{id:"sd-times-names-cadence-open-source-project-of-the-week"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#sd-times-names-cadence-open-source-project-of-the-week"}},[e._v("#")]),e._v(" SD Times Names Cadence Open Source Project of the Week")]),e._v(" "),t("p",[e._v("In April Cadence was named as open 
source project of the week by the SD Times. Being named gives the project some great publicity and means the project is getting noticed. You can find a link to the article in the "),t("em",[e._v("Cadence in the News")]),e._v(" section below.")]),e._v(" "),t("h2",{attrs:{id:"follow-us-on-linkedin-and-twitter"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#follow-us-on-linkedin-and-twitter"}},[e._v("#")]),e._v(" Follow Us on LinkedIn and Twitter!")]),e._v(" "),t("p",[e._v("We have now set up Cadence accounts on "),t("a",{attrs:{href:"https://www.linkedin.com/company/cadenceworkflow/",target:"_blank",rel:"noopener noreferrer"}},[e._v("LinkedIn"),t("OutboundLink")],1),e._v(" and "),t("a",{attrs:{href:"https://twitter.com/cadenceworkflow",target:"_blank",rel:"noopener noreferrer"}},[e._v("Twitter"),t("OutboundLink")],1),e._v(" where you can keep up to date with what is happening in the community. We will be using these social media accounts to share news, articles, stories and links related to Cadence - so please follow us!")]),e._v(" "),t("p",[e._v("And don’t forget to share your news with us. We are looking forward to receiving your feedback and comments. The more we interact - the more we build our community!")]),e._v(" "),t("h2",{attrs:{id:"proposal-to-change-the-way-we-write-workflows"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#proposal-to-change-the-way-we-write-workflows"}},[e._v("#")]),e._v(" Proposal to Change the Way We Write Workflows")]),e._v(" "),t("p",[e._v("If you haven’t seen the proposal from community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/prclqz/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Quanzheng Long"),t("OutboundLink")],1),e._v(" about creating a new way to write Cadence workflows then please take a look: "),t("a",{attrs:{href:"https://github.com/uber/cadence/issues/4785",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber/cadence/issues/4785"),t("OutboundLink")],1),e._v(". He has already received some initial feedback and is currently working on putting together a proof of concept demo to show the community. As soon as we have more news about it - we will let you know!")]),e._v(" "),t("h2",{attrs:{id:"help-us-improve-cadence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#help-us-improve-cadence"}},[e._v("#")]),e._v(" Help Us Improve Cadence")]),e._v(" "),t("p",[e._v("Do you want to help us improve Cadence? We are always looking for contributors so any contribution you can make - however small - is welcome. If you would like to start contributing then please take a look at the list of "),t("a",{attrs:{href:"https://github.com/uber/cadence/issues",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Issues on Github"),t("OutboundLink")],1),e._v(". We have some issues flagged with a tag of "),t("em",[e._v("‘good first issue’")]),e._v(" that would be a great place to start.")]),e._v(" "),t("p",[e._v("Remember that we are not only looking for code contributions but also non coding ones such as documentation improvements so please take a look and select something to work on.")]),e._v(" "),t("h2",{attrs:{id:"next-cadence-technical-office-hours-30th-may-2022"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#next-cadence-technical-office-hours-30th-may-2022"}},[e._v("#")]),e._v(" Next Cadence Technical Office Hours: 30th May 2022")]),e._v(" "),t("p",[e._v("Every month we hold a Technical Office Hours session via Zoom where you can speak directly with some of our Cadence experts. 
If you have a question about Cadence or are facing a particular issue getting it set up then please come along and chat to one of our experts!")]),e._v(" "),t("p",[e._v("Meetings are held on the last Monday of every month so please make sure you mark the dates in your calendars. Our next session will be on the 30th May at 9am PT so hope to see you there!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://sdtimes.com/softwaredev/sd-times-open-source-project-of-the-week-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("SD Times Open Source Project of the Week : Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.youtube.com/watch?v=-f1m5EI4cRo",target:"_blank",rel:"noopener noreferrer"}},[e._v("The New Stack Interview: Meet Cadence: The Open-Source Orchestration Workflow Engine"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://thenewstack.io/instaclustr-adds-managed-cadence-to-its-platform/",target:"_blank",rel:"noopener noreferrer"}},[e._v("The New Stack: Instaclustr Adds Managed Cadence to Its Platform"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 30th May 2022 @ 9am PT"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=n.exports}}]); \ No newline at end of file diff --git a/assets/js/30.123bb74e.js b/assets/js/30.60d3695e.js similarity index 99% rename from assets/js/30.123bb74e.js rename to assets/js/30.60d3695e.js index 976262f52..444bf9d52 100644 --- a/assets/js/30.123bb74e.js +++ b/assets/js/30.60d3695e.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[30],{371:function(e,t,a){"use strict";a.r(t);var n=a(4),o=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to our regular Cadence Community Spotlight update!")]),e._v(" "),t("p",[e._v("This is our monthly blog post series focused on news from in and around the Cadence community.")]),e._v(" "),t("p",[e._v("Please see below for a short activity roundup of what has happened recently in the community.")]),e._v(" 
"),t("h2",{attrs:{id:"cadence-polling-cookbook"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-polling-cookbook"}},[e._v("#")]),e._v(" Cadence Polling Cookbook")]),e._v(" "),t("p",[e._v("Do you want to understand polling work and have an example of how to set it up in Cadence? Well a brand new "),t("a",{attrs:{href:"https://info.instaclustr.com/rs/620-JHM-287/images/Cadence_Cookbook.pdf",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Polling cookbook"),t("OutboundLink")],1),e._v(" is now available that gives you all the details you need. The cookbook was created by several members of the "),t("a",{attrs:{href:"https://www.instaclustr.com/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Instaclustr"),t("OutboundLink")],1),e._v(" team and they are keen to share it with the community. The pdf version of the cookbook can found on the Cadence website under the "),t("em",[e._v("Polling an external API for a specific resource to become available")]),e._v(" section of the "),t("a",{attrs:{href:"https://cadenceworkflow.io/docs/use-cases/polling/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Polling Use cases"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("A "),t("a",{attrs:{href:"https://github.com/instaclustr/cadence-cookbooks-instafood",target:"_blank",rel:"noopener noreferrer"}},[e._v("Github repository"),t("OutboundLink")],1),e._v(" has also been created with the sample cookbook code for you to try out for yourself.")]),e._v(" "),t("p",[e._v("So please go ahead and try out the cookbook and don’t forget to let us have your feedback.")]),e._v(" "),t("h2",{attrs:{id:"congratulations-to-a-first-time-contributor"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#congratulations-to-a-first-time-contributor"}},[e._v("#")]),e._v(" Congratulations to a First Time Contributor")]),e._v(" "),t("p",[e._v("We are always looking for ways to encourage project participation. It doesn't matter how large the contribution is or whether it is coding or non coding related. This month one of our community members had "),t("a",{attrs:{href:"https://github.com/uber/Cadence-Docs/pull/107",target:"_blank",rel:"noopener noreferrer"}},[e._v("their first PR merged"),t("OutboundLink")],1),e._v("- so congratulations and many thanks for the contribution "),t("a",{attrs:{href:"https://github.com/tonyxrandall",target:"_blank",rel:"noopener noreferrer"}},[e._v("tonyxrandall"),t("OutboundLink")],1),e._v("!")]),e._v(" "),t("h2",{attrs:{id:"share-your-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#share-your-news"}},[e._v("#")]),e._v(" Share Your News!")]),e._v(" "),t("p",[e._v("Our #support "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel is always full of questions and activity so we know that there are are lot of people out there exploring, trying out and setting up Cadence. 
We are always interested in hearing about what the community is doing, so if you have something you want to share as a blog post or part of this monthly update then please contact us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")]),e._v(" "),t("h2",{attrs:{id:"next-cadence-technical-office-hours-3rd-and-27th-june-2022"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#next-cadence-technical-office-hours-3rd-and-27th-june-2022"}},[e._v("#")]),e._v(" Next Cadence Technical Office Hours: 3rd and 27th June 2022")]),e._v(" "),t("p",[e._v("We will be having two Technical Office Hours sessions this month. As 30th May was a US holiday we have moved May’s Technical Office Hours to Friday 3rd June at 11am PT. And we will be having our June call on the 27th.")]),e._v(" "),t("p",[e._v("Remember that in these Zoom sessions you can speak directly with some of our Cadence experts so if you have a question about Cadence or are facing a particular issue getting it set up, then please come along and chat to one of our experts!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence-related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-drones-with-cadence-and-apache-kafka-integration-patterns-and-new-cadence-features/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Your Drones With Cadence and Apache Kafka – Integration Patterns and New Cadence Features"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://doordash.engineering/2022/05/18/enabling-faster-financial-partnership-integrations-using-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Enabling Faster Financial Partnership Integrations Using Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-drones-with-cadence-and-apache-kafka-architecture-order-and-delivery-workflows/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Your Drones With Cadence and Apache Kafka® – Architecture, Order and Delivery Workflows"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 3rd June 2022 @ 11am PT"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-emea-spinning-workflows-cadence.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Spinning up Your Workflows with Cadence - 20th June"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 27th June 2022 @ 9am 
PT"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-building-cadence-workflow",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Building Your First Cadence Workflow with Java and Go - 19th July 2022"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[30],{370:function(e,t,a){"use strict";a.r(t);var n=a(4),o=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to our regular Cadence Community Spotlight update!")]),e._v(" "),t("p",[e._v("This is our monthly blog post series focused on news from in and around the Cadence community.")]),e._v(" "),t("p",[e._v("Please see below for a short activity roundup of what has happened recently in the community.")]),e._v(" "),t("h2",{attrs:{id:"cadence-polling-cookbook"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-polling-cookbook"}},[e._v("#")]),e._v(" Cadence Polling Cookbook")]),e._v(" "),t("p",[e._v("Do you want to understand polling work and have an example of how to set it up in Cadence? Well a brand new "),t("a",{attrs:{href:"https://info.instaclustr.com/rs/620-JHM-287/images/Cadence_Cookbook.pdf",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Polling cookbook"),t("OutboundLink")],1),e._v(" is now available that gives you all the details you need. The cookbook was created by several members of the "),t("a",{attrs:{href:"https://www.instaclustr.com/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Instaclustr"),t("OutboundLink")],1),e._v(" team and they are keen to share it with the community. The pdf version of the cookbook can found on the Cadence website under the "),t("em",[e._v("Polling an external API for a specific resource to become available")]),e._v(" section of the "),t("a",{attrs:{href:"https://cadenceworkflow.io/docs/use-cases/polling/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Polling Use cases"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("A "),t("a",{attrs:{href:"https://github.com/instaclustr/cadence-cookbooks-instafood",target:"_blank",rel:"noopener noreferrer"}},[e._v("Github repository"),t("OutboundLink")],1),e._v(" has also been created with the sample cookbook code for you to try out for yourself.")]),e._v(" "),t("p",[e._v("So please go ahead and try out the cookbook and don’t forget to let us have your feedback.")]),e._v(" "),t("h2",{attrs:{id:"congratulations-to-a-first-time-contributor"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#congratulations-to-a-first-time-contributor"}},[e._v("#")]),e._v(" Congratulations to a First Time Contributor")]),e._v(" "),t("p",[e._v("We are always looking for ways to encourage project participation. 
It doesn't matter how large the contribution is or whether it is coding or non-coding related. This month one of our community members had "),t("a",{attrs:{href:"https://github.com/uber/Cadence-Docs/pull/107",target:"_blank",rel:"noopener noreferrer"}},[e._v("their first PR merged"),t("OutboundLink")],1),e._v(" - so congratulations and many thanks for the contribution "),t("a",{attrs:{href:"https://github.com/tonyxrandall",target:"_blank",rel:"noopener noreferrer"}},[e._v("tonyxrandall"),t("OutboundLink")],1),e._v("!")]),e._v(" "),t("h2",{attrs:{id:"share-your-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#share-your-news"}},[e._v("#")]),e._v(" Share Your News!")]),e._v(" "),t("p",[e._v("Our #support "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel is always full of questions and activity so we know that there are a lot of people out there exploring, trying out and setting up Cadence. We are always interested in hearing about what the community is doing, so if you have something you want to share as a blog post or part of this monthly update then please contact us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")]),e._v(" "),t("h2",{attrs:{id:"next-cadence-technical-office-hours-3rd-and-27th-june-2022"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#next-cadence-technical-office-hours-3rd-and-27th-june-2022"}},[e._v("#")]),e._v(" Next Cadence Technical Office Hours: 3rd and 27th June 2022")]),e._v(" "),t("p",[e._v("We will be having two Technical Office Hours sessions this month. As 30th May was a US holiday we have moved May’s Technical Office Hours to Friday 3rd June at 11am PT. And we will be having our June call on the 27th.")]),e._v(" "),t("p",[e._v("Remember that in these Zoom sessions you can speak directly with some of our Cadence experts so if you have a question about Cadence or are facing a particular issue getting it set up, then please come along and chat to one of our experts!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence-related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-drones-with-cadence-and-apache-kafka-integration-patterns-and-new-cadence-features/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Your Drones With Cadence and Apache Kafka – Integration Patterns and New Cadence Features"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://doordash.engineering/2022/05/18/enabling-faster-financial-partnership-integrations-using-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Enabling Faster Financial Partnership Integrations Using Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-drones-with-cadence-and-apache-kafka-architecture-order-and-delivery-workflows/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Your Drones With Cadence and Apache Kafka® – Architecture, Order and Delivery Workflows"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 3rd June 2022 @ 11am PT"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-emea-spinning-workflows-cadence.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar : Spinning up Your Workflows with Cadence : 20th June"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 27th June 2022 @ 9am PT"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-building-cadence-workflow",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Building Your First Cadence Workflow with Java and Go - 19th July 2022"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file diff --git a/assets/js/31.ad819ec7.js b/assets/js/31.b7ab4dd8.js similarity index 98% rename from assets/js/31.ad819ec7.js rename to assets/js/31.b7ab4dd8.js index 0c9a3bf65..4af567fd2 100644 --- a/assets/js/31.ad819ec7.js +++ b/assets/js/31.b7ab4dd8.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[31],{373:function(e,t,n){"use strict";n.r(t);var o=n(4),a=Object(o.a)({},(function(){var e=this,t=e._self._c;return 
t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("It’s time for our monthly Cadence Community Spotlight update with news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"knowledge-sharing-and-support"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#knowledge-sharing-and-support"}},[e._v("#")]),e._v(" Knowledge Sharing and Support")]),e._v(" "),t("p",[e._v("Our Slack #support channel has been busy this month with 13 questions asked this month by 12 different community members. Six community members took time to respond to those questions which clearly shows our community is growing, collaborating and keen to share knowledge.")]),e._v(" "),t("p",[e._v("Please don’t forget that we encourage everyone to post questions on StackOverflow using the "),t("strong",[e._v("cadence-workflow")]),e._v(" and "),t("strong",[e._v("uber-cadence")]),e._v(" tags so that others with similar questions or issues can easily search for and find an answer.")]),e._v(" "),t("h2",{attrs:{id:"improving-technical-office-hours"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#improving-technical-office-hours"}},[e._v("#")]),e._v(" Improving Technical Office Hours")]),e._v(" "),t("p",[e._v("Over the last few months we have been holding regular monthly Office Hours meetings but they have not attracted as many participants as we would like. We would like to understand if there is something preventing people from attending (e.g perhaps the timing or dates are not convenient) so we are planning to send out a short community survey.")]),e._v(" "),t("p",[e._v("If you have any ideas or comments about how we can improve our community office hours sessions then please include this in your feedback or contact us in the #community Slack channel.")]),e._v(" "),t("h2",{attrs:{id:"cadence-stability-improvements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-stability-improvements"}},[e._v("#")]),e._v(" Cadence Stability Improvements")]),e._v(" "),t("p",[e._v("Is Cadence getting better? Yes it is! Many of you may have noticed that Cadence is improving.That is because of the amount of work being done behind the scenes. The Cadence core team has been doing a lot of work to stabilise Cadence functionality. Keep watching out for even more improvements!")]),e._v(" "),t("h2",{attrs:{id:"sprechen-sie-deutsch"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#sprechen-sie-deutsch"}},[e._v("#")]),e._v(" Sprechen Sie Deutsch?")]),e._v(" "),t("p",[e._v("Do you speak German? If you do speak then we have some good news for you. A couple of Cadence blog posts have been translated into German to help promote it to a wider audience. 
The links are as below and we hope you find them useful!")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://www.credativ.de/blog/howtos/workflows-mit-cadence-optimieren/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Workflows mit Cadence optimieren!"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://www.credativ.de/blog/howtos/apache-kafka-microservices-mit-cadence-workflows-optimieren/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Apache Kafka® Microservices mit Cadence-Workflows optimieren"),t("OutboundLink")],1)])]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/workflow-comparison-uber-cadence-vs-netflix-conductor/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Workflow Comparison: Uber Cadence vs Netflix Conductor"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/securing-cadence-web-using-nginx/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Securing Cadence Web Using NGINX"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 25th July 2022 @ 9am PT"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-building-cadence-workflow",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Building Your First Cadence Workflow with Java and Go - 19th July 2022"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[31],{372:function(e,t,n){"use strict";n.r(t);var o=n(4),a=Object(o.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("It’s time for our monthly Cadence Community Spotlight update with news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"knowledge-sharing-and-support"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#knowledge-sharing-and-support"}},[e._v("#")]),e._v(" Knowledge Sharing and Support")]),e._v(" "),t("p",[e._v("Our Slack 
#support channel has been busy this month, with 13 questions asked by 12 different community members. Six community members took time to respond to those questions, which clearly shows our community is growing, collaborating and keen to share knowledge.")]),e._v(" "),t("p",[e._v("Please don’t forget that we encourage everyone to post questions on StackOverflow using the "),t("strong",[e._v("cadence-workflow")]),e._v(" and "),t("strong",[e._v("uber-cadence")]),e._v(" tags so that others with similar questions or issues can easily search for and find an answer.")]),e._v(" "),t("h2",{attrs:{id:"improving-technical-office-hours"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#improving-technical-office-hours"}},[e._v("#")]),e._v(" Improving Technical Office Hours")]),e._v(" "),t("p",[e._v("Over the last few months we have been holding regular monthly Office Hours meetings, but they have not attracted as many participants as we would like. We would like to understand if there is something preventing people from attending (e.g. perhaps the timing or dates are not convenient), so we are planning to send out a short community survey.")]),e._v(" "),t("p",[e._v("If you have any ideas or comments about how we can improve our community office hours sessions then please include this in your feedback or contact us in the #community Slack channel.")]),e._v(" "),t("h2",{attrs:{id:"cadence-stability-improvements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-stability-improvements"}},[e._v("#")]),e._v(" Cadence Stability Improvements")]),e._v(" "),t("p",[e._v("Is Cadence getting better? Yes, it is! Many of you may have noticed that Cadence is improving. That is because of the amount of work being done behind the scenes. The Cadence core team has been doing a lot of work to stabilise Cadence functionality. Keep watching out for even more improvements!")]),e._v(" "),t("h2",{attrs:{id:"sprechen-sie-deutsch"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#sprechen-sie-deutsch"}},[e._v("#")]),e._v(" Sprechen Sie Deutsch?")]),e._v(" "),t("p",[e._v("Do you speak German? If you do, then we have some good news for you. A couple of Cadence blog posts have been translated into German to help promote it to a wider audience. The links are below and we hope you find them useful!")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://www.credativ.de/blog/howtos/workflows-mit-cadence-optimieren/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Workflows mit Cadence optimieren!"),t("OutboundLink")],1)]),e._v(" "),t("li",[t("a",{attrs:{href:"https://www.credativ.de/blog/howtos/apache-kafka-microservices-mit-cadence-workflows-optimieren/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Apache Kafka® Microservices mit Cadence-Workflows optimieren"),t("OutboundLink")],1)])]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence-related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/workflow-comparison-uber-cadence-vs-netflix-conductor/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Workflow Comparison: Uber Cadence vs Netflix Conductor"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/securing-cadence-web-using-nginx/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Securing Cadence Web Using NGINX"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 25th July 2022 @ 9am PT"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-building-cadence-workflow",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Building Your First Cadence Workflow with Java and Go - 19th July 2022"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/32.eb6f85d0.js b/assets/js/32.d7cd8e46.js similarity index 98% rename from assets/js/32.eb6f85d0.js rename to assets/js/32.d7cd8e46.js index 7de91321c..42cd00874 100644 --- a/assets/js/32.eb6f85d0.js +++ b/assets/js/32.d7cd8e46.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[32],{372:function(e,t,n){"use strict";n.r(t);var a=n(4),r=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Here’s our monthly Community Spotlight update that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"flying-drones-with-cadence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#flying-drones-with-cadence"}},[e._v("#")]),e._v(" Flying Drones with Cadence")]),e._v(" "),t("p",[e._v("Community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/paul-brebner-0a547b4/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Paul Brebner"),t("OutboundLink")],1),e._v(" has released "),t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-drones-with-cadence-and-apache-kafka-how-many-drones-can-we-fly/",target:"_blank",rel:"noopener noreferrer"}},[e._v("another blog"),t("OutboundLink")],1),e._v(" in the series of using Cadence to manage a drone delivery service. 
You can see a "),t("a",{attrs:{href:"https://www.youtube.com/watch?v=YgQeFSqzprk",target:"_blank",rel:"noopener noreferrer"}},[e._v("simulated view of it in action"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Don’t forget to try out the code yourself and remember if you have used Cadence to do something interesting then please let us know so we can feature it in our next update.")]),e._v(" "),t("h2",{attrs:{id:"github-statistics"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#github-statistics"}},[e._v("#")]),e._v(" GitHub Statistics")]),e._v(" "),t("p",[e._v("During July the main Cadence branch had 28 pull requests (PRs) merged. There were 214 files changed by 11 different authors. You can find more details "),t("a",{attrs:{href:"https://github.com/uber/cadence/pulse/monthly",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("The Cadence documentation repository was not as busy with only 2 PRs merged in July, 5 commits and 3 authors active. More details can be found "),t("a",{attrs:{href:"https://github.com/uber/Cadence-Docs/pulse/monthly",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1)]),e._v(" "),t("h2",{attrs:{id:"cadence-roadmap"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-roadmap"}},[e._v("#")]),e._v(" Cadence Roadmap")]),e._v(" "),t("p",[e._v("The Cadence Core team has been busy this month looking at the various community feedback for potential improvements and features for Cadence. Planning is already in place for a development roadmap and it is still a little too early to say what will be included so please watch out for future updates. All I know is that it’s going to be exciting!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/migrate-to-cadence-from-temporal/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Migrate to Cadence From Temporal"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-drones-with-cadence-and-apache-kafka-how-many-drones-can-we-fly/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Your Drones With Cadence and Apache Kafka®: How Many Drones Can We Fly?"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 29th August 2022 @ 9am PT"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-cadence-fundamentals-event-sourcing/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Cadence Fundamentals: Event Sourcing - Two sessions one on 16th and one on 18th August 2022\n"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-emea-building-cadence-workflow.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Building Your First Cadence Workflow with Java and Go - 1st September 2022"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[32],{373:function(e,t,n){"use strict";n.r(t);var a=n(4),r=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Here’s our monthly Community Spotlight update that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"flying-drones-with-cadence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#flying-drones-with-cadence"}},[e._v("#")]),e._v(" Flying Drones with Cadence")]),e._v(" "),t("p",[e._v("Community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/paul-brebner-0a547b4/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Paul Brebner"),t("OutboundLink")],1),e._v(" has released "),t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-drones-with-cadence-and-apache-kafka-how-many-drones-can-we-fly/",target:"_blank",rel:"noopener noreferrer"}},[e._v("another blog"),t("OutboundLink")],1),e._v(" in the series of using Cadence to manage a drone delivery 
service. You can see a "),t("a",{attrs:{href:"https://www.youtube.com/watch?v=YgQeFSqzprk",target:"_blank",rel:"noopener noreferrer"}},[e._v("simulated view of it in action"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Don’t forget to try out the code yourself and remember if you have used Cadence to do something interesting then please let us know so we can feature it in our next update.")]),e._v(" "),t("h2",{attrs:{id:"github-statistics"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#github-statistics"}},[e._v("#")]),e._v(" GitHub Statistics")]),e._v(" "),t("p",[e._v("During July the main Cadence branch had 28 pull requests (PRs) merged. There were 214 files changed by 11 different authors. You can find more details "),t("a",{attrs:{href:"https://github.com/uber/cadence/pulse/monthly",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("The Cadence documentation repository was not as busy with only 2 PRs merged in July, 5 commits and 3 authors active. More details can be found "),t("a",{attrs:{href:"https://github.com/uber/Cadence-Docs/pulse/monthly",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1)]),e._v(" "),t("h2",{attrs:{id:"cadence-roadmap"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-roadmap"}},[e._v("#")]),e._v(" Cadence Roadmap")]),e._v(" "),t("p",[e._v("The Cadence Core team has been busy this month looking at the various community feedback for potential improvements and features for Cadence. Planning is already in place for a development roadmap and it is still a little too early to say what will be included so please watch out for future updates. All I know is that it’s going to be exciting!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/migrate-to-cadence-from-temporal/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Migrate to Cadence From Temporal"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/spinning-your-drones-with-cadence-and-apache-kafka-how-many-drones-can-we-fly/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Spinning Your Drones With Cadence and Apache Kafka®: How Many Drones Can We Fly?"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 29th August 2022 @ 9am PT"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-cadence-fundamentals-event-sourcing/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Cadence Fundamentals: Event Sourcing - Two sessions one on 16th and one on 18th August 2022\n"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-emea-building-cadence-workflow.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Building Your First Cadence Workflow with Java and Go - 1st September 2022"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/33.0f2645ac.js b/assets/js/33.fc898d30.js similarity index 95% rename from assets/js/33.0f2645ac.js rename to assets/js/33.fc898d30.js index 5c399dec4..f30671b83 100644 --- a/assets/js/33.0f2645ac.js +++ b/assets/js/33.fc898d30.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[33],{339:function(o,e,t){"use strict";t.r(e);var n=t(0),i=Object(n.a)({},(function(){var o=this,e=o._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":o.$parent.slotKey}},[e("h1",{attrs:{id:"polling"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#polling"}},[o._v("#")]),o._v(" Polling")]),o._v(" "),e("p",[o._v("Polling is executing a periodic action checking for a state change. 
Examples are pinging a host, calling a REST API, or listing an Amazon S3 bucket for newly uploaded files.")]),o._v(" "),e("p",[o._v("Cadence support for long running "),e("Term",{attrs:{term:"activity",show:"activities"}}),o._v(" and unlimited retries makes it a good fit.")],1),o._v(" "),e("p",[o._v("Some real-world use cases:")]),o._v(" "),e("ul",[e("li",[o._v("Network, host and service monitoring")]),o._v(" "),e("li",[o._v("Processing files uploaded to FTP or S3")]),o._v(" "),e("li",[e("a",{attrs:{href:"https://github.com/instaclustr/cadence-cookbooks-instafood/blob/main/cookbooks/polling/polling-megafood.md",target:"_blank",rel:"noopener noreferrer"}},[o._v("Cadence Polling Cookbook by Instaclustr: Polling an external API for a specific resource to become available: "),e("OutboundLink")],1)])])])}),[],!1,null,null,null);e.default=i.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[33],{340:function(o,e,t){"use strict";t.r(e);var n=t(0),i=Object(n.a)({},(function(){var o=this,e=o._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":o.$parent.slotKey}},[e("h1",{attrs:{id:"polling"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#polling"}},[o._v("#")]),o._v(" Polling")]),o._v(" "),e("p",[o._v("Polling is executing a periodic action checking for a state change. Examples are pinging a host, calling a REST API, or listing an Amazon S3 bucket for newly uploaded files.")]),o._v(" "),e("p",[o._v("Cadence support for long running "),e("Term",{attrs:{term:"activity",show:"activities"}}),o._v(" and unlimited retries makes it a good fit.")],1),o._v(" "),e("p",[o._v("Some real-world use cases:")]),o._v(" "),e("ul",[e("li",[o._v("Network, host and service monitoring")]),o._v(" "),e("li",[o._v("Processing files uploaded to FTP or S3")]),o._v(" "),e("li",[e("a",{attrs:{href:"https://github.com/instaclustr/cadence-cookbooks-instafood/blob/main/cookbooks/polling/polling-megafood.md",target:"_blank",rel:"noopener noreferrer"}},[o._v("Cadence Polling Cookbook by Instaclustr: Polling an external API for a specific resource to become available: "),e("OutboundLink")],1)])])])}),[],!1,null,null,null);e.default=i.exports}}]); \ No newline at end of file diff --git a/assets/js/34.38e65a26.js b/assets/js/34.ae5f2824.js similarity index 95% rename from assets/js/34.38e65a26.js rename to assets/js/34.ae5f2824.js index 7b587d964..953898587 100644 --- a/assets/js/34.38e65a26.js +++ b/assets/js/34.ae5f2824.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[34],{340:function(e,t,r){"use strict";r.r(t);var s=r(0),a=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"event-driven-application"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#event-driven-application"}},[e._v("#")]),e._v(" Event driven application")]),e._v(" "),t("p",[e._v("Many applications listen to multiple "),t("Term",{attrs:{term:"event"}}),e._v(" sources, update the state of corresponding business entities,\nand have to execute actions if some state is reached.\nCadence is a good fit for many of these. 
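The polling page above stays at the level of prose, so here is a minimal Go sketch of the pattern it describes, using the go.uber.org/cadence client library. The activity, workflow, and parameter names are hypothetical and the timeouts are placeholders; treat it as an illustration under those assumptions rather than the cookbook's implementation.

```go
package polling

import (
	"context"
	"time"

	"go.uber.org/cadence/workflow"
)

// checkResourceActivity is a hypothetical activity that asks an external
// API whether the awaited resource exists yet.
func checkResourceActivity(ctx context.Context, resourceID string) (bool, error) {
	// Call the external service here: a REST API, an S3 listing, a host ping, ...
	return false, nil
}

// PollingWorkflow runs the check activity in a loop, sleeping on a durable
// timer between attempts, until the resource becomes available.
func PollingWorkflow(ctx workflow.Context, resourceID string) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute, // placeholder timeouts
		StartToCloseTimeout:    time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	for {
		var ready bool
		if err := workflow.ExecuteActivity(ctx, checkResourceActivity, resourceID).Get(ctx, &ready); err != nil {
			return err
		}
		if ready {
			return nil
		}
		// A durable timer: unlike time.Sleep, it survives worker restarts.
		if err := workflow.Sleep(ctx, 30*time.Second); err != nil {
			return err
		}
	}
}
```

A poll that may run for a very long time would normally also cap the number of iterations and continue as a new execution so the workflow's event history stays bounded.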
It has direct support for asynchronous "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" (aka "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v("),\nhas a simple programming model that obscures a lot of complexity\naround state persistence, and ensures external action execution through built-in retries.")],1),e._v(" "),t("p",[e._v("Real-world examples:")]),e._v(" "),t("ul",[t("li",[e._v("Fraud detection where "),t("Term",{attrs:{term:"workflow"}}),e._v(" reacts to "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" generated by consumer behavior")],1),e._v(" "),t("li",[e._v("Customer loyalty program where the "),t("Term",{attrs:{term:"workflow"}}),e._v(" accumulates reward points and applies them when requested")],1)])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[34],{339:function(e,t,r){"use strict";r.r(t);var s=r(0),a=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"event-driven-application"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#event-driven-application"}},[e._v("#")]),e._v(" Event driven application")]),e._v(" "),t("p",[e._v("Many applications listen to multiple "),t("Term",{attrs:{term:"event"}}),e._v(" sources, update the state of corresponding business entities,\nand have to execute actions if some state is reached.\nCadence is a good fit for many of these. It has direct support for asynchronous "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" (aka "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v("),\nhas a simple programming model that obscures a lot of complexity\naround state persistence, and ensures external action execution through built-in retries.")],1),e._v(" "),t("p",[e._v("Real-world examples:")]),e._v(" "),t("ul",[t("li",[e._v("Fraud detection where "),t("Term",{attrs:{term:"workflow"}}),e._v(" reacts to "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" generated by consumer behavior")],1),e._v(" "),t("li",[e._v("Customer loyalty program where the "),t("Term",{attrs:{term:"workflow"}}),e._v(" accumulates reward points and applies them when requested")],1)])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/37.3867be6f.js b/assets/js/37.48286875.js similarity index 96% rename from assets/js/37.3867be6f.js rename to assets/js/37.48286875.js index 07c32776a..a93331ff0 100644 --- a/assets/js/37.3867be6f.js +++ b/assets/js/37.48286875.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[37],{343:function(e,o,n){"use strict";n.r(o);var t=n(0),r=Object(t.a)({},(function(){var e=this,o=e._self._c;return o("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[o("h1",{attrs:{id:"infrastructure-provisioning"}},[o("a",{staticClass:"header-anchor",attrs:{href:"#infrastructure-provisioning"}},[e._v("#")]),e._v(" Infrastructure provisioning")]),e._v(" "),o("p",[e._v("Provisioning a new datacenter or a pool of machines in a public cloud is a potentially long running operation with\na lot of possibilities for intermittent failures. The scale is also a concern when tens or even hundreds of thousands of resources should be provisioned and configured. 
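To ground the signal support mentioned in the event-driven page above, here is a rough Go sketch of its loyalty-program example: a workflow that durably waits for point events and a redeem request. The signal names are hypothetical, and the redeem step is left as a comment.

```go
package loyalty

import "go.uber.org/cadence/workflow"

// LoyaltyWorkflow accumulates reward points delivered as signals and
// finishes when a redeem signal arrives. The running total is ordinary
// workflow state, persisted via the event history.
func LoyaltyWorkflow(ctx workflow.Context, customerID string) error {
	points := 0
	addCh := workflow.GetSignalChannel(ctx, "add-points") // hypothetical signal names
	redeemCh := workflow.GetSignalChannel(ctx, "redeem")

	for {
		redeemed := false
		selector := workflow.NewSelector(ctx)
		selector.AddReceive(addCh, func(c workflow.Channel, more bool) {
			var delta int
			c.Receive(ctx, &delta)
			points += delta
		})
		selector.AddReceive(redeemCh, func(c workflow.Channel, more bool) {
			c.Receive(ctx, nil) // payload ignored
			redeemed = true
		})
		selector.Select(ctx) // blocks durably, possibly for months

		if redeemed {
			// An activity would apply the accumulated points here.
			return nil
		}
	}
}
```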
One useful feature for provisioning scenarios is Cadence support for routing "),o("Term",{attrs:{term:"activity"}}),e._v(" execution to a specific process or host.")],1),e._v(" "),o("p",[e._v("A lot of operations require some sort of locking to ensure that no more than one mutation is executed on a resource at a time.\nCadence provides strong guarantees of uniqueness by business ID. This can be used to implement such locking behavior in a fault tolerant and scalable manner.")]),e._v(" "),o("p",[e._v("Some real-world use cases:")]),e._v(" "),o("ul",[o("li",[o("a",{attrs:{href:"https://banzaicloud.com/blog/introduction-to-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Using Cadence workflows to spin up Kubernetes, by Banzai Cloud"),o("OutboundLink")],1)]),e._v(" "),o("li",[o("a",{attrs:{href:"https://www.youtube.com/watch?v=kDlrM6sgk2k&feature=youtu.be&t=1188",target:"_blank",rel:"noopener noreferrer"}},[e._v("Using Cadence to orchestrate cluster life cycle in HashiCorp Consul, by HashiCorp"),o("OutboundLink")],1)])])])}),[],!1,null,null,null);o.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[37],{344:function(e,o,n){"use strict";n.r(o);var t=n(0),r=Object(t.a)({},(function(){var e=this,o=e._self._c;return o("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[o("h1",{attrs:{id:"infrastructure-provisioning"}},[o("a",{staticClass:"header-anchor",attrs:{href:"#infrastructure-provisioning"}},[e._v("#")]),e._v(" Infrastructure provisioning")]),e._v(" "),o("p",[e._v("Provisioning a new datacenter or a pool of machines in a public cloud is a potentially long running operation with\na lot of possibilities for intermittent failures. The scale is also a concern when tens or even hundreds of thousands of resources should be provisioned and configured. One useful feature for provisioning scenarios is Cadence support for routing "),o("Term",{attrs:{term:"activity"}}),e._v(" execution to a specific process or host.")],1),e._v(" "),o("p",[e._v("A lot of operations require some sort of locking to ensure that no more than one mutation is executed on a resource at a time.\nCadence provides strong guarantees of uniqueness by business ID. 
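As a rough client-side illustration of that guarantee (the task list, ID scheme, and workflow name here are hypothetical), deriving the workflow ID from the resource's business ID turns workflow start into a mutual-exclusion check:

```go
package provisioning

import (
	"context"
	"time"

	"go.uber.org/cadence/client"
)

// startExclusiveProvisioning starts at most one provisioning workflow per
// resource: while an execution with this workflow ID is running, a second
// StartWorkflow call fails with an "already started" error instead of
// creating a competing mutation.
func startExclusiveProvisioning(c client.Client, resourceID string) error {
	opts := client.StartWorkflowOptions{
		ID:                           "provision-" + resourceID, // business ID as the lock key
		TaskList:                     "provisioning",            // hypothetical task list
		ExecutionStartToCloseTimeout: time.Hour,                 // placeholder
	}
	_, err := c.StartWorkflow(context.Background(), opts, "ProvisionWorkflow", resourceID)
	return err
}
```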
This can be used to implement such locking behavior in a fault tolerant and scalable manner.")]),e._v(" "),o("p",[e._v("Some real-world use cases:")]),e._v(" "),o("ul",[o("li",[o("a",{attrs:{href:"https://banzaicloud.com/blog/introduction-to-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Using Cadence workflows to spin up Kubernetes, by Banzai Cloud"),o("OutboundLink")],1)]),e._v(" "),o("li",[o("a",{attrs:{href:"https://www.youtube.com/watch?v=kDlrM6sgk2k&feature=youtu.be&t=1188",target:"_blank",rel:"noopener noreferrer"}},[e._v("Using Cadence to orchestrate cluster life cycle in HashiCorp Consul, by HashiCorp"),o("OutboundLink")],1)])])])}),[],!1,null,null,null);o.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/37.8f2dc7f8.js b/assets/js/37.f5b86fe0.js similarity index 98% rename from assets/js/37.8f2dc7f8.js rename to assets/js/37.f5b86fe0.js index 4ef67336d..ea65a8f69 100644 --- a/assets/js/37.8f2dc7f8.js +++ b/assets/js/37.f5b86fe0.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[37],{378:function(e,t,a){"use strict";a.r(t);var o=a(4),n=Object(o.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("I know we are a little early this month as many people will be taking some time out for holidays.")]),e._v(" "),t("h2",{attrs:{id:"happy-holidays"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#happy-holidays"}},[e._v("#")]),e._v(" Happy Holidays")]),e._v(" "),t("p",[e._v("We'd like to wish everyone happy holidays and to thank you for being part of the Cadence community. It's been a busy year for Cadence as we have continued to build a strong, active community that works together to solve issues and generally support each other.")]),e._v(" "),t("p",[e._v("Let's keep going!...This is a great way to build a sustainable community.")]),e._v(" "),t("p",[e._v("We are sure that 2023 will be even more exciting as we continue to develop Cadence.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/cadence-iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence iWF"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://github.com/instaclustr/cadence-cookbooks-instafood/blob/main/cookbooks/child-workflows/child-workflows-megafood.md",target:"_blank",rel:"noopener noreferrer"}},[e._v("Child Workflow Cookbook"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/cadence-connection-examples-using-tls/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Connection Examples Using TLS"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 30th January 2023 @ 9am PT"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=n.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[37],{382:function(e,t,a){"use strict";a.r(t);var o=a(4),n=Object(o.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("I know we are a little early this month as many people will be taking some time out for holidays.")]),e._v(" "),t("h2",{attrs:{id:"happy-holidays"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#happy-holidays"}},[e._v("#")]),e._v(" Happy Holidays")]),e._v(" "),t("p",[e._v("We'd like to wish everyone happy holidays and to thank you for being part of the Cadence community. It's been a busy year for Cadence as we have continued to build a strong, active community that works together to solve issues and generally support each other.")]),e._v(" "),t("p",[e._v("Let's keep going!...This is a great way to build a sustainable community.")]),e._v(" "),t("p",[e._v("We are sure that 2023 will be even more exciting as we continue to develop Cadence.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/cadence-iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence iWF"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://github.com/instaclustr/cadence-cookbooks-instafood/blob/main/cookbooks/child-workflows/child-workflows-megafood.md",target:"_blank",rel:"noopener noreferrer"}},[e._v("Child Workflow Cookbook"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/cadence-connection-examples-using-tls/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Connection Examples Using TLS"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://calendar.google.com/calendar/u/0/embed?src=e6r40gp3c2r01054id7e99dlac@group.calendar.google.com&ctz=America/Los_Angeles",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Technical Office Hours - 30th January 2023 @ 9am PT"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=n.exports}}]); \ No newline at end of file diff --git a/assets/js/38.40fd752c.js b/assets/js/38.2fda6e47.js similarity index 98% rename from assets/js/38.40fd752c.js rename to assets/js/38.2fda6e47.js index f4b4bbeec..20207433c 100644 --- a/assets/js/38.40fd752c.js +++ b/assets/js/38.2fda6e47.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[38],{382:function(e,t,a){"use strict";a.r(t);var n=a(4),o=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Happy New Year everyone! Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"closing-down-cadence-office-hours"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#closing-down-cadence-office-hours"}},[e._v("#")]),e._v(" Closing Down Cadence Office Hours")]),e._v(" "),t("p",[e._v("We have been running Office Hours sessions every month since May last year. The aim was to give the community an opportunity to speak directly with some of the Cadence core developers and experts to answer questions on particular issues you may be having. 
We have found that the most preferred method for community questions has been the support Slack channel so we have decided to stop this monthly call.")]),e._v(" "),t("p",[e._v("Thanks very much to "),t("a",{attrs:{href:"https://www.linkedin.com/in/enderdemirkaya/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Ender Demirkaya"),t("OutboundLink")],1),e._v(" and the Uber team for making themselves available for these sessions.")]),e._v(" "),t("p",[e._v("Please remember that if you have a question about Cadence or are facing a specific issue then you can post your question in our #support "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel. If you also post the details on StackOverflow with the cadence-workflow tag then there will be a searchable history for others who encounter the same issue to find a solution.")]),e._v(" "),t("h2",{attrs:{id:"update-on-iwf-support-for-cadence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#update-on-iwf-support-for-cadence"}},[e._v("#")]),e._v(" Update on iWF Support for Cadence")]),e._v(" "),t("p",[e._v("Last October we featured an update in our monthly blog about "),t("a",{attrs:{href:"https://github.com/indeedeng/iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF - Interpreter for Workflow"),t("OutboundLink")],1),e._v(", a project built on top of Cadence by community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/prclqz/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Quanzheng Long"),t("OutboundLink")],1),e._v(". It was announced recently that iWF has released a "),t("a",{attrs:{href:"https://github.com/iworkflowio/iwf-golang-sdk",target:"_blank",rel:"noopener noreferrer"}},[e._v("Golang SDK"),t("OutboundLink")],1),e._v(" and updated versions of the "),t("a",{attrs:{href:"https://github.com/indeedeng/iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("Java SDK and server"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("Long is really keen to get feedback so please take a look at iWF, try them out and send him any feedback.\nLong has also created a couple of blog posts about iWF that we have featured in the Cadence in the News section below so please take a look.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence-related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/a-letter-to-cadence-temporal-and-workflow-tech-community-b32e9fa97a0c",target:"_blank",rel:"noopener noreferrer"}},[e._v("A Letter to Cadence/Temporal and Workflow Tech Community"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/iwf-vs-cadence-temporal-1e11b35960fe",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF vs Cadence/Temporal"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/aws-privatelink-for-cadence-on-instaclustr-by-netapp/",target:"_blank",rel:"noopener noreferrer"}},[e._v("AWS PrivateLink Connectivity Is Now Available with Instaclustr for Cadence"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("p",[e._v("No upcoming events at the moment.")]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[38],{380:function(e,t,a){"use strict";a.r(t);var n=a(4),o=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Happy New Year everyone! Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"closing-down-cadence-office-hours"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#closing-down-cadence-office-hours"}},[e._v("#")]),e._v(" Closing Down Cadence Office Hours")]),e._v(" "),t("p",[e._v("We have been running Office Hours sessions every month since May last year. The aim was to give the community an opportunity to speak directly with some of the Cadence core developers and experts to answer questions on particular issues you may be having. We have found that the most preferred method for community questions has been the support Slack channel so we have decided to stop this monthly call.")]),e._v(" "),t("p",[e._v("Thanks very much to "),t("a",{attrs:{href:"https://www.linkedin.com/in/enderdemirkaya/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Ender Demirkaya"),t("OutboundLink")],1),e._v(" and the Uber team for making themselves available for these sessions.")]),e._v(" "),t("p",[e._v("Please remember that if you have a question about Cadence or are facing a specific issue then you can post your question in our #support "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel. 
If you also post the details on StackOverflow with the cadence workflow tag then there will be a searchable history for others who encounter the same issue to find a solution.")]),e._v(" "),t("h2",{attrs:{id:"update-on-iwf-support-for-cadence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#update-on-iwf-support-for-cadence"}},[e._v("#")]),e._v(" Update on iWF Support for Cadence")]),e._v(" "),t("p",[e._v("Last October we featured an update in our monthly blog about "),t("a",{attrs:{href:"https://github.com/indeedeng/iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF - Interpreter for Workflow"),t("OutboundLink")],1),e._v(", a project built on top of Cadence by community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/prclqz/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Quanzheng Long"),t("OutboundLink")],1),e._v(". It was announced recently that iWF has released a "),t("a",{attrs:{href:"https://github.com/iworkflowio/iwf-golang-sdk",target:"_blank",rel:"noopener noreferrer"}},[e._v("Golang SDK"),t("OutboundLink")],1),e._v(" and updated versions of the "),t("a",{attrs:{href:"https://github.com/indeedeng/iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("Java SDK and server"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("Long is really keen to get feedback, so please take a look at iWF, try it out, and send him any feedback.\nLong has also created a couple of blog posts about iWF that we have featured in the Cadence in the News section below so please take a look.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/a-letter-to-cadence-temporal-and-workflow-tech-community-b32e9fa97a0c",target:"_blank",rel:"noopener noreferrer"}},[e._v("A Letter to Cadence/Temporal and Workflow Tech Community"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/iwf-vs-cadence-temporal-1e11b35960fe",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF vs Cadence/Temporal"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/aws-privatelink-for-cadence-on-instaclustr-by-netapp/",target:"_blank",rel:"noopener noreferrer"}},[e._v("AWS PrivateLink Connectivity Is Now Available with Instaclustr for Cadence"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("p",[e._v("No upcoming events at the moment.")]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file diff --git a/assets/js/38.a39819d0.js b/assets/js/38.41aa0e5c.js similarity index 94% rename from assets/js/38.a39819d0.js rename to assets/js/38.41aa0e5c.js index 3410b2d21..5bcc266e8 100644 --- a/assets/js/38.a39819d0.js +++ b/assets/js/38.41aa0e5c.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[38],{344:function(e,n,t){"use strict";t.r(n);var s=t(0),a=Object(s.a)({},(function(){var e=this,n=e._self._c;return n("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[n("h1",{attrs:{id:"ci-cd-and-deployment"}},[n("a",{staticClass:"header-anchor",attrs:{href:"#ci-cd-and-deployment"}},[e._v("#")]),e._v(" CI/CD and Deployment")]),e._v(" "),n("p",[e._v("Implementing CI/CD pipelines and deployment of applications to containers or virtual or physical machines is a non-trivial process.\nIts business logic has to deal with complex requirements around rolling upgrades, canary deployments, and rollbacks.\nCadence is a perfect platform for building a deployment solution because it provides all the necessary guarantees and abstractions\nallowing developers to focus on the business logic.")]),e._v(" "),n("p",[e._v("Example production systems:")]),e._v(" "),n("ul",[n("li",[e._v("Uber internal deployment infrastructure")]),e._v(" "),n("li",[e._v("Update push to IoT devices")])])])}),[],!1,null,null,null);n.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[38],{343:function(e,n,t){"use strict";t.r(n);var s=t(0),a=Object(s.a)({},(function(){var e=this,n=e._self._c;return 
n("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[n("h1",{attrs:{id:"ci-cd-and-deployment"}},[n("a",{staticClass:"header-anchor",attrs:{href:"#ci-cd-and-deployment"}},[e._v("#")]),e._v(" CI/CD and Deployment")]),e._v(" "),n("p",[e._v("Implementing CI/CD pipelines and deployment of applications to containers or virtual or physical machines is a non-trivial process.\nIts business logic has to deal with complex requirements around rolling upgrades, canary deployments, and rollbacks.\nCadence is a perfect platform for building a deployment solution because it provides all the necessary guarantees and abstractions\nallowing developers to focus on the business logic.")]),e._v(" "),n("p",[e._v("Example production systems:")]),e._v(" "),n("ul",[n("li",[e._v("Uber internal deployment infrastructure")]),e._v(" "),n("li",[e._v("Update push to IoT devices")])])])}),[],!1,null,null,null);n.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/39.57896f73.js b/assets/js/39.594789cf.js similarity index 98% rename from assets/js/39.57896f73.js rename to assets/js/39.594789cf.js index 333b270aa..f5644e77f 100644 --- a/assets/js/39.57896f73.js +++ b/assets/js/39.594789cf.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[39],{379:function(e,t,n){"use strict";n.r(t);var a=n(4),r=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"community-survey"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-survey"}},[e._v("#")]),e._v(" Community Survey")]),e._v(" "),t("p",[e._v("We've been talking about doing a community survey for a while and during February we sent it out. We are still collating the results so it's not too late to send in your response.")]),e._v(" "),t("p",[e._v("The survey takes 5 minutes and is your opportunity to provide feedback to the project and highlight areas you think we need to focus on.")]),e._v(" "),t("p",[e._v("Use this "),t("a",{attrs:{href:"https://uber.surveymonkey.com/r/ZS83WJW",target:"_blank",rel:"noopener noreferrer"}},[e._v("Survey Link"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Please take a few minutes to give us your opinion.")]),e._v(" "),t("h2",{attrs:{id:"cadence-and-temporal"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-and-temporal"}},[e._v("#")]),e._v(" Cadence and Temporal")]),e._v(" "),t("p",[e._v("During user surveys we've had a few queries about whether Cadence and "),t("a",{attrs:{href:"https://temporal.io/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Temporal"),t("OutboundLink")],1),e._v(" are the same project. The answer is No - they are not the same project but they do share the same origin. At a high level Temporal is a fork of the Cadence project. 
Both Temporal and Cadence are now being developed by different communities and so are independent.")]),e._v(" "),t("h2",{attrs:{id:"cadence-at-doordash"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-at-doordash"}},[e._v("#")]),e._v(" Cadence at DoorDash")]),e._v(" "),t("p",[e._v("Although published a few months ago we missed including an article by "),t("a",{attrs:{href:"https://doordash.engineering/",target:"_blank",rel:"noopener noreferrer"}},[e._v("DoorDash"),t("OutboundLink")],1),e._v(" about how they are using Cadence to build real time event processing with "),t("a",{attrs:{href:"https://flink.apache.org/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Apache Flink"),t("OutboundLink")],1),e._v(" and "),t("a",{attrs:{href:"https://kafka.apache.org/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Apache Kafka"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("Here is the link to the article: "),t("a",{attrs:{href:"https://doordash.engineering/2022/08/02/building-scalable-real-time-event-processing-with-kafka-and-flink/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Building Scalable Real Time Event Processing with Kafka and Flink"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Remember to let us know if you have news, articles or blog posts about Cadence that you'd like us to include in these monthly updates.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://opensource.com/article/22/6/cadence-open-source-workflow-engine",target:"_blank",rel:"noopener noreferrer"}},[e._v("Getting Started with Cadence, an Open Source Workflow Engine"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://thenewstack.io/meet-cadence-workflow-engine-for-taming-complex-processes/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Meet Cadence: Workflow Engine for Taming Complex Processes"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-spinning-drones-cadence-kafka.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("On Demand Webinar: Spinning Your Drones with Cadence and Apache Kafka"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[39],{378:function(e,t,n){"use strict";n.r(t);var a=n(4),r=Object(a.a)({},(function(){var 
e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"community-survey"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-survey"}},[e._v("#")]),e._v(" Community Survey")]),e._v(" "),t("p",[e._v("We've been talking about doing a community survey for a while and during February we sent it out. We are still collating the results so it's not too late to send in your response.")]),e._v(" "),t("p",[e._v("The survey takes 5 minutes and is your opportunity to provide feedback to the project and highlight areas you think we need to focus on.")]),e._v(" "),t("p",[e._v("Use this "),t("a",{attrs:{href:"https://uber.surveymonkey.com/r/ZS83WJW",target:"_blank",rel:"noopener noreferrer"}},[e._v("Survey Link"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Please take a few minutes to give us your opinion.")]),e._v(" "),t("h2",{attrs:{id:"cadence-and-temporal"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-and-temporal"}},[e._v("#")]),e._v(" Cadence and Temporal")]),e._v(" "),t("p",[e._v("During user surveys we've had a few queries about whether Cadence and "),t("a",{attrs:{href:"https://temporal.io/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Temporal"),t("OutboundLink")],1),e._v(" are the same project. The answer is No - they are not the same project but they do share the same origin. At a high level Temporal is a fork of the Cadence project. Both Temporal and Cadence are now being developed by different communities and so are independent.")]),e._v(" "),t("h2",{attrs:{id:"cadence-at-doordash"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-at-doordash"}},[e._v("#")]),e._v(" Cadence at DoorDash")]),e._v(" "),t("p",[e._v("Although published a few months ago we missed including an article by "),t("a",{attrs:{href:"https://doordash.engineering/",target:"_blank",rel:"noopener noreferrer"}},[e._v("DoorDash"),t("OutboundLink")],1),e._v(" about how they are using Cadence to build real time event processing with "),t("a",{attrs:{href:"https://flink.apache.org/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Apache Flink"),t("OutboundLink")],1),e._v(" and "),t("a",{attrs:{href:"https://kafka.apache.org/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Apache Kafka"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("Here is the link to the article: "),t("a",{attrs:{href:"https://doordash.engineering/2022/08/02/building-scalable-real-time-event-processing-with-kafka-and-flink/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Building Scalable Real Time Event Processing with Kafka and Flink"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("Remember to let us know if you have news, articles or blog posts about Cadence that you'd like us to include in these monthly updates.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://opensource.com/article/22/6/cadence-open-source-workflow-engine",target:"_blank",rel:"noopener noreferrer"}},[e._v("Getting Started with Cadence, an Open Source Workflow Engine"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://thenewstack.io/meet-cadence-workflow-engine-for-taming-complex-processes/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Meet Cadence: Workflow Engine for Taming Complex Processes"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://info.instaclustr.com/webinar-spinning-drones-cadence-kafka.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("On Demand Webinar: Spinning Your Drones with Cadence and Apache Kafka"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v("#community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/40.7ba21e1f.js b/assets/js/40.3127e425.js similarity index 98% rename from assets/js/40.7ba21e1f.js rename to assets/js/40.3127e425.js index 545a3b8ea..f08d60b48 100644 --- a/assets/js/40.7ba21e1f.js +++ b/assets/js/40.3127e425.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[40],{381:function(e,t,a){"use strict";a.r(t);var n=a(4),r=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"cadence-at-open-source-summit-north-america"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-at-open-source-summit-north-america"}},[e._v("#")]),e._v(" Cadence at Open Source Summit, North America")]),e._v(" "),t("p",[e._v("We are very pleased to let you know that a talk on Cadence has been accepted for the Linux Foundation's "),t("a",{attrs:{href:"https://events.linuxfoundation.org/open-source-summit-north-america/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Open Source Summit, North America"),t("OutboundLink")],1),e._v(" in Vancouver on 10th - 12th May 2023.")]),e._v(" "),t("p",[e._v("The talk called "),t("a",{attrs:{href:"https://ossna2023.sched.com/event/1K5B1",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence: The New Open Source Project for Building Complex Distributed Applications"),t("OutboundLink")],1),e._v(" will be given by "),t("a",{attrs:{href:"https://www.linkedin.com/in/enderdemirkaya/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Ender Demirkaya"),t("OutboundLink")],1),e._v(" 
and "),t("a",{attrs:{href:"https://www.linkedin.com/in/emrahseker/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Emrah Seker"),t("OutboundLink")],1),e._v(" If you are planning to attend the Open Source Summit then please don't forget to attend the talk and take time catch up with Ender and Emrah!")]),e._v(" "),t("h2",{attrs:{id:"community-activity"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-activity"}},[e._v("#")]),e._v(" Community Activity")]),e._v(" "),t("p",[e._v("Our Slack #support channel has been very active over the last few months as we continue to get an continual stream of questions. Here are the stats:")]),e._v(" "),t("ul",[t("li",[e._v("February 2023 : 16 questions asked")]),e._v(" "),t("li",[e._v("March 2023 : 12 questions asked")])]),e._v(" "),t("p",[e._v("All of these questions are being answered collaboratively by the community. Thanks everyone for sharing your knowledge and we are looking forward to receiving more of your questions!")]),e._v(" "),t("h2",{attrs:{id:"cadence-developer-advocate"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-developer-advocate"}},[e._v("#")]),e._v(" Cadence Developer Advocate")]),e._v(" "),t("p",[e._v("Please welcome Yizhe Qin - the new Cadence Developer Advocate from Uber team that will be working to help support the community.")]),e._v(" "),t("p",[e._v("Yizhe's role will involve responding to support questions, organising documentation and anything else that will help keep the community running smoothly.")]),e._v(" "),t("p",[e._v("Please feel free to say Hi to Yizhe on the Slack channel!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/instaclustr-cadence-workflow-developer/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Instaclustr Cadence Developer Offering - General Availability"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/improving-the-reliability-of-cadence-search-queries/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Improving Reliability of Cadence Search Queries That Use OpenSearch/Elasticsearch"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://netapp.zoom.us/webinar/register/WN__5fuwxmNQuWeZ6DiI5wUqg",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Microservices - A Modern Orchestration Approach with Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[40],{379:function(e,t,a){"use strict";a.r(t);var n=a(4),r=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"cadence-at-open-source-summit-north-america"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-at-open-source-summit-north-america"}},[e._v("#")]),e._v(" Cadence at Open Source Summit, North America")]),e._v(" "),t("p",[e._v("We are very pleased to let you know that a talk on Cadence has been accepted for the Linux Foundation's "),t("a",{attrs:{href:"https://events.linuxfoundation.org/open-source-summit-north-america/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Open Source Summit, North America"),t("OutboundLink")],1),e._v(" in Vancouver on 10th - 12th May 2023.")]),e._v(" "),t("p",[e._v("The talk called "),t("a",{attrs:{href:"https://ossna2023.sched.com/event/1K5B1",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence: The New Open Source Project for Building Complex Distributed Applications"),t("OutboundLink")],1),e._v(" will be given by "),t("a",{attrs:{href:"https://www.linkedin.com/in/enderdemirkaya/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Ender Demirkaya"),t("OutboundLink")],1),e._v(" and "),t("a",{attrs:{href:"https://www.linkedin.com/in/emrahseker/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Emrah Seker"),t("OutboundLink")],1),e._v(". If you are planning to attend the Open Source Summit then please don't forget to 
attend the talk and take time to catch up with Ender and Emrah!")]),e._v(" "),t("h2",{attrs:{id:"community-activity"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-activity"}},[e._v("#")]),e._v(" Community Activity")]),e._v(" "),t("p",[e._v("Our Slack #support channel has been very active over the last few months as we continue to get a steady stream of questions. Here are the stats:")]),e._v(" "),t("ul",[t("li",[e._v("February 2023 : 16 questions asked")]),e._v(" "),t("li",[e._v("March 2023 : 12 questions asked")])]),e._v(" "),t("p",[e._v("All of these questions are being answered collaboratively by the community. Thanks everyone for sharing your knowledge and we are looking forward to receiving more of your questions!")]),e._v(" "),t("h2",{attrs:{id:"cadence-developer-advocate"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-developer-advocate"}},[e._v("#")]),e._v(" Cadence Developer Advocate")]),e._v(" "),t("p",[e._v("Please welcome Yizhe Qin - the new Cadence Developer Advocate from the Uber team who will be working to help support the community.")]),e._v(" "),t("p",[e._v("Yizhe's role will involve responding to support questions, organising documentation and anything else that will help keep the community running smoothly.")]),e._v(" "),t("p",[e._v("Please feel free to say Hi to Yizhe on the Slack channel!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below is a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/instaclustr-cadence-workflow-developer/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Instaclustr Cadence Developer Offering - General Availability"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/improving-the-reliability-of-cadence-search-queries/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Improving Reliability of Cadence Search Queries That Use OpenSearch/Elasticsearch"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://netapp.zoom.us/webinar/register/WN__5fuwxmNQuWeZ6DiI5wUqg",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Microservices - A Modern Orchestration Approach with Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/40.38ff5d3e.js b/assets/js/40.57dc9a8f.js similarity index 92% rename from assets/js/40.38ff5d3e.js rename to 
assets/js/40.57dc9a8f.js index 7c03c17f2..dd2d34da1 100644 --- a/assets/js/40.38ff5d3e.js +++ b/assets/js/40.57dc9a8f.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[40],{347:function(t,e,a){"use strict";a.r(e);var s=a(0),r=Object(s.a)({},(function(){var t=this._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":this.$parent.slotKey}},[t("h1",{attrs:{id:"interactive-application"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#interactive-application"}},[this._v("#")]),this._v(" Interactive application")]),this._v(" "),t("p",[this._v("Cadence is performant and scalable enough to support interactive applications. It can be used to track UI session state and\nat the same time execute background operations. For example, while placing an order a customer might need to go through several screens while a background "),t("Term",{attrs:{term:"task"}}),this._v(" evaluates the customer for fraudulent "),t("Term",{attrs:{term:"activity"}}),this._v(".")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[40],{346:function(t,e,a){"use strict";a.r(e);var s=a(0),r=Object(s.a)({},(function(){var t=this._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":this.$parent.slotKey}},[t("h1",{attrs:{id:"interactive-application"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#interactive-application"}},[this._v("#")]),this._v(" Interactive application")]),this._v(" "),t("p",[this._v("Cadence is performant and scalable enough to support interactive applications. It can be used to track UI session state and\nat the same time execute background operations. For example, while placing an order a customer might need to go through several screens while a background "),t("Term",{attrs:{term:"task"}}),this._v(" evaluates the customer for fraudulent "),t("Term",{attrs:{term:"activity"}}),this._v(".")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/41.a7f6f5c5.js b/assets/js/41.8b3e151d.js similarity index 96% rename from assets/js/41.a7f6f5c5.js rename to assets/js/41.8b3e151d.js index 69b8f17e6..353a65a7a 100644 --- a/assets/js/41.a7f6f5c5.js +++ b/assets/js/41.8b3e151d.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[41],{346:function(e,t,i){"use strict";i.r(t);var n=i(0),s=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"dsl-workflows"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#dsl-workflows"}},[e._v("#")]),e._v(" DSL workflows")]),e._v(" "),t("p",[e._v("Cadence supports implementing business logic directly in programming languages like Java and Go. But there are cases when\nusing a domain-specific language is more appropriate. Or there might be a legacy system that uses some form of DSL for process definition but it is not operationally stable and scalable. This also applies to more recent systems like Apache Airflow, various BPMN engines and AWS Step Functions.")]),e._v(" "),t("p",[e._v("An application that interprets the DSL definition can be written using the Cadence SDK. It automatically becomes highly fault tolerant, scalable, and durable when running on Cadence. Cadence has been used to deprecate several Uber internal DSL engines. 
The customers continue to use existing process definitions, but Cadence is used as an execution engine.")]),e._v(" "),t("p",[e._v("There are multiple benefits of unifying all company "),t("Term",{attrs:{term:"workflow"}}),e._v(" engines on top of Cadence. The most obvious one is that\nit is more efficient to support a single product instead of many. It is also difficult for each of the engines it replaces to match the scalability and stability of\nCadence. Additionally, the ability to share "),t("Term",{attrs:{term:"activity",show:"activities"}}),e._v(' across "engines"\nmight be a huge benefit in some cases.')],1)])}),[],!1,null,null,null);t.default=s.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[41],{347:function(e,t,i){"use strict";i.r(t);var n=i(0),s=Object(n.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"dsl-workflows"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#dsl-workflows"}},[e._v("#")]),e._v(" DSL workflows")]),e._v(" "),t("p",[e._v("Cadence supports implementing business logic directly in programming languages like Java and Go. But there are cases when\nusing a domain-specific language is more appropriate. Or there might be a legacy system that uses some form of DSL for process definition but it is not operationally stable and scalable. This also applies to more recent systems like Apache Airflow, various BPMN engines and AWS Step Functions.")]),e._v(" "),t("p",[e._v("An application that interprets the DSL definition can be written using the Cadence SDK. It automatically becomes highly fault tolerant, scalable, and durable when running on Cadence. Cadence has been used to deprecate several Uber internal DSL engines. The customers continue to use existing process definitions, but Cadence is used as an execution engine.")]),e._v(" "),t("p",[e._v("There are multiple benefits of unifying all company "),t("Term",{attrs:{term:"workflow"}}),e._v(" engines on top of Cadence. The most obvious one is that\nit is more efficient to support a single product instead of many. It is also difficult for each of the engines it replaces to match the scalability and stability of\nCadence. Additionally, the ability to share "),t("Term",{attrs:{term:"activity",show:"activities"}}),e._v(' across "engines"\nmight be a huge benefit in some cases.')],1)])}),[],!1,null,null,null);t.default=s.exports}}]); \ No newline at end of file diff --git a/assets/js/44.74f995cb.js b/assets/js/44.a4d16e22.js similarity index 98% rename from assets/js/44.74f995cb.js rename to assets/js/44.a4d16e22.js index 0286f2647..7f042e092 100644 --- a/assets/js/44.74f995cb.js +++ b/assets/js/44.a4d16e22.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[44],{390:function(e,t,i){"use strict";i.r(t);var s=i(4),a=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("In the upcoming blog series, we will delve into a discussion about common bad practices and anti-patterns related to Cadence. As diverse teams often encounter distinct business use cases, it becomes imperative to address the most frequently reported issues in Cadence workflows. 
To provide valuable insights and guidance, the Cadence team has meticulously compiled these common challenges based on customer feedback.")]),e._v(" "),t("ul",[t("li",[e._v("Reusing the same workflow ID for very active/continuous running workflows")])]),e._v(" "),t("p",[e._v("Cadence organizes workflows based on their unique IDs, using a process called "),t("b",[e._v("partitioning")]),e._v(". If a workflow receives a large number of updates in a short period of time or frequently starts new runs using the "),t("code",[e._v("continueAsNew")]),e._v(' function, all these updates will be directed to the same shard. Unfortunately, the Cadence backend is not equipped to handle this concentrated workload efficiently. As a result, a situation known as a "hot shard" arises, overloading the Cadence backend and worsening the problem.')]),e._v(" "),t("p",[e._v("Solution:\nThe best way to avoid this is to design your workflow so that each execution owns a workflow ID that is uniformly distributed across your Cadence domain. This will make sure that the Cadence backend is able to evenly distribute the traffic by partitioning properly on your workflow IDs.")]),e._v(" "),t("ul",[t("li",[e._v("Excessive batch jobs or an enormous number of timers triggered at the same time")])]),e._v(" "),t("p",[e._v("Cadence has the capability to handle a large number of concurrent tasks initiated simultaneously, but misusing this feature can lead to issues within the Cadence system. Consider a scenario where millions of jobs are scheduled to start at the same time and are expected to finish within a specific time interval. Cadence faces the challenge of understanding the desired behavior of customers in such cases. It is uncertain whether the intention is to complete all jobs simultaneously, provide progressive updates in parallel, or finish all jobs before a given deadline. This ambiguity arises due to the independent nature of each job and the difficulty in predicting their outcomes.")]),e._v(" "),t("p",[e._v("Moreover, Cadence workers utilize a sticky cache by default to optimize the runtime of workflows. However, when an overwhelming number of parallel workflows cannot fit into the cache, it can result in "),t("b",[e._v("cache thrashing")]),e._v(". This, in turn, leads to a quadratic increase in runtime complexity, specifically O(n^2), degrading the overall performance of the system.")]),e._v(" "),t("p",[e._v("Solution:\nThere are multiple ways to address this issue. Customers can either run jobs in smaller batches or use start workflow jitter to randomly distribute timers within a certain timeframe.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[44],{388:function(e,t,i){"use strict";i.r(t);var s=i(4),a=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("In the upcoming blog series, we will delve into a discussion about common bad practices and anti-patterns related to Cadence. As diverse teams often encounter distinct business use cases, it becomes imperative to address the most frequently reported issues in Cadence workflows. 
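To make the two fixes above concrete, here is a minimal sketch assuming the Cadence Go client (go.uber.org/cadence); the workflow name, task list, ID scheme and 10-minute jitter bound are illustrative assumptions, not values from the post.

```go
// Minimal sketch, assuming the Cadence Go client (go.uber.org/cadence).
// "orderWorkflow" and "order-tasklist" are illustrative names.
package sample

import (
	"context"
	"math/rand"
	"time"

	"github.com/google/uuid"
	"go.uber.org/cadence/client"
	"go.uber.org/cadence/workflow"
)

// startOrder gives every execution a uniformly distributed workflow ID,
// spreading load across shards instead of concentrating it on a hot one.
func startOrder(c client.Client, orderID string) error {
	opts := client.StartWorkflowOptions{
		ID:                           "order::" + orderID + "::" + uuid.New().String(),
		TaskList:                     "order-tasklist",
		ExecutionStartToCloseTimeout: time.Hour,
	}
	_, err := c.StartWorkflow(context.Background(), opts, "orderWorkflow", orderID)
	return err
}

// orderWorkflow sleeps for a random jitter first, so timers from a large
// batch of workflows do not all fire at the same instant. The random value
// goes through workflow.SideEffect so replay stays deterministic.
func orderWorkflow(ctx workflow.Context, orderID string) error {
	var jitter time.Duration
	if err := workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
		return time.Duration(rand.Int63n(int64(10 * time.Minute)))
	}).Get(&jitter); err != nil {
		return err
	}
	return workflow.Sleep(ctx, jitter)
}
```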
To provide valuable insights and guidance, the Cadence team has meticulously compiled these common challenges based on customer feedback.")]),e._v(" "),t("ul",[t("li",[e._v("Reusing the same workflow ID for very active/continuous running workflows")])]),e._v(" "),t("p",[e._v("Cadence organizes workflows based on their unique IDs, using a process called "),t("b",[e._v("partitioning")]),e._v(". If a workflow receives a large number of updates in a short period of time or frequently starts new runs using the "),t("code",[e._v("continueAsNew")]),e._v(' function, all these updates will be directed to the same shard. Unfortunately, the Cadence backend is not equipped to handle this concentrated workload efficiently. As a result, a situation known as a "hot shard" arises, overloading the Cadence backend and worsening the problem.')]),e._v(" "),t("p",[e._v("Solution:\nThe best way to avoid this is to design your workflow so that each execution owns a workflow ID that is uniformly distributed across your Cadence domain. This will make sure that the Cadence backend is able to evenly distribute the traffic by partitioning properly on your workflow IDs.")]),e._v(" "),t("ul",[t("li",[e._v("Excessive batch jobs or an enormous number of timers triggered at the same time")])]),e._v(" "),t("p",[e._v("Cadence has the capability to handle a large number of concurrent tasks initiated simultaneously, but misusing this feature can lead to issues within the Cadence system. Consider a scenario where millions of jobs are scheduled to start at the same time and are expected to finish within a specific time interval. Cadence faces the challenge of understanding the desired behavior of customers in such cases. It is uncertain whether the intention is to complete all jobs simultaneously, provide progressive updates in parallel, or finish all jobs before a given deadline. This ambiguity arises due to the independent nature of each job and the difficulty in predicting their outcomes.")]),e._v(" "),t("p",[e._v("Moreover, Cadence workers utilize a sticky cache by default to optimize the runtime of workflows. However, when an overwhelming number of parallel workflows cannot fit into the cache, it can result in "),t("b",[e._v("cache thrashing")]),e._v(". This, in turn, leads to a quadratic increase in runtime complexity, specifically O(n^2), degrading the overall performance of the system.")]),e._v(" "),t("p",[e._v("Solution:\nThere are multiple ways to address this issue. Customers can either run jobs in smaller batches or use start workflow jitter to randomly distribute timers within a certain timeframe.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/45.a157c19e.js b/assets/js/45.194f36ed.js similarity index 99% rename from assets/js/45.a157c19e.js rename to assets/js/45.194f36ed.js index 461c5c53d..f848093de 100644 --- a/assets/js/45.a157c19e.js +++ b/assets/js/45.194f36ed.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[45],{353:function(t,e,r){"use strict";r.r(e);var i=r(0),a=Object(i.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"activities"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#activities"}},[t._v("#")]),t._v(" Activities")]),t._v(" "),e("p",[t._v("Fault-oblivious stateful "),e("Term",{attrs:{term:"workflow"}}),t._v(" code is the core abstraction of Cadence. 
But, due to deterministic execution requirements, they are not allowed to call any external API directly.\nInstead they orchestrate execution of "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(". In its simplest form, a Cadence "),e("Term",{attrs:{term:"activity"}}),t._v(" is a function or an object method in one of the supported languages.\nCadence does not recover "),e("Term",{attrs:{term:"activity"}}),t._v(" state in case of failures. Therefore an "),e("Term",{attrs:{term:"activity"}}),t._v(" function is allowed to contain any code without restrictions.")],1),t._v(" "),e("p",[e("Term",{attrs:{term:"activity",show:"Activities"}}),t._v(" are invoked asynchronously through "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(". A "),e("Term",{attrs:{term:"task_list"}}),t._v(" is essentially a queue used to store an "),e("Term",{attrs:{term:"activity_task"}}),t._v(" until it is picked up by an available "),e("Term",{attrs:{term:"worker"}}),t._v(". The "),e("Term",{attrs:{term:"worker"}}),t._v(" processes an "),e("Term",{attrs:{term:"activity"}}),t._v(" by invoking its implementation function. When the function returns, the "),e("Term",{attrs:{term:"worker"}}),t._v(" reports the result back to the Cadence service which in turn notifies the "),e("Term",{attrs:{term:"workflow"}}),t._v(" about completion. It is possible to implement an "),e("Term",{attrs:{term:"activity"}}),t._v(" fully asynchronously by completing it from a different process.")],1),t._v(" "),e("h2",{attrs:{id:"timeouts"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#timeouts"}},[t._v("#")]),t._v(" Timeouts")]),t._v(" "),e("p",[t._v("Cadence does not impose any system limit on "),e("Term",{attrs:{term:"activity"}}),t._v(" duration. It is up to the application to choose the timeouts for its execution. These are the configurable "),e("Term",{attrs:{term:"activity"}}),t._v(" timeouts:")],1),t._v(" "),e("ul",[e("li",[e("code",[t._v("ScheduleToStart")]),t._v(" is the maximum time from a "),e("Term",{attrs:{term:"workflow"}}),t._v(" requesting "),e("Term",{attrs:{term:"activity"}}),t._v(" execution to a "),e("Term",{attrs:{term:"worker"}}),t._v(" starting its execution. The usual reason for this timeout to fire is all "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" being down or not being able to keep up with the request rate. We recommend setting this timeout to the maximum time a "),e("Term",{attrs:{term:"workflow"}}),t._v(" is willing to wait for an "),e("Term",{attrs:{term:"activity"}}),t._v(" execution in the presence of all possible "),e("Term",{attrs:{term:"worker"}}),t._v(" outages.")],1),t._v(" "),e("li",[e("code",[t._v("StartToClose")]),t._v(" is the maximum time an "),e("Term",{attrs:{term:"activity"}}),t._v(" can execute after it was picked by a "),e("Term",{attrs:{term:"worker"}}),t._v(".")],1),t._v(" "),e("li",[e("code",[t._v("ScheduleToClose")]),t._v(" is the maximum time from the "),e("Term",{attrs:{term:"workflow"}}),t._v(" requesting an "),e("Term",{attrs:{term:"activity"}}),t._v(" execution to its completion.")],1),t._v(" "),e("li",[e("code",[t._v("Heartbeat")]),t._v(" is the maximum time between heartbeat requests. 
See "),e("a",{attrs:{href:"#long-running-activities"}},[t._v("Long Running Activities")]),t._v(".")])]),t._v(" "),e("p",[t._v("Either "),e("code",[t._v("ScheduleToClose")]),t._v(" or both "),e("code",[t._v("ScheduleToStart")]),t._v(" and "),e("code",[t._v("StartToClose")]),t._v(" timeouts are required.")]),t._v(" "),e("p",[t._v("Timeouts are the key to manage activities. For more tips of how to set proper timeout, read this "),e("a",{attrs:{href:"https://stackoverflow.com/questions/65139178/how-to-set-proper-timeout-values-for-cadence-activitieslocal-and-regular-activi/65139179#65139179",target:"_blank",rel:"noopener noreferrer"}},[t._v("Stack Overflow QA"),e("OutboundLink")],1),t._v(".")]),t._v(" "),e("h2",{attrs:{id:"retries"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#retries"}},[t._v("#")]),t._v(" Retries")]),t._v(" "),e("p",[t._v("As Cadence doesn't recover an "),e("Term",{attrs:{term:"activity"}}),t._v("'s state and they can communicate to any external system, failures are expected. Therefore, Cadence supports automatic "),e("Term",{attrs:{term:"activity"}}),t._v(" retries. Any "),e("Term",{attrs:{term:"activity"}}),t._v(" when invoked can have an associated retry policy. Here are the retry policy parameters:")],1),t._v(" "),e("ul",[e("li",[e("code",[t._v("InitialInterval")]),t._v(" is a delay before the first retry.")]),t._v(" "),e("li",[e("code",[t._v("BackoffCoefficient")]),t._v(". Retry policies are exponential. The coefficient specifies how fast the retry interval is growing. The coefficient of 1 means that the retry interval is always equal to the "),e("code",[t._v("InitialInterval")]),t._v(".")]),t._v(" "),e("li",[e("code",[t._v("MaximumInterval")]),t._v(" specifies the maximum interval between retries. Useful for coefficients more than 1.")]),t._v(" "),e("li",[e("code",[t._v("MaximumAttempts")]),t._v(" specifies how many times to attempt to execute an "),e("Term",{attrs:{term:"activity"}}),t._v(" in the presence of failures. If this limit is exceeded, the error is returned back to the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invoked the "),e("Term",{attrs:{term:"activity"}}),t._v(". Not required if "),e("code",[t._v("ExpirationInterval")]),t._v(" is specified.")],1),t._v(" "),e("li",[e("code",[t._v("ExpirationInterval")]),t._v(" specifies for how long to attempt executing an "),e("Term",{attrs:{term:"activity"}}),t._v(" in the presence of failures. If this interval is exceeded, the error is returned back to the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invoked the "),e("Term",{attrs:{term:"activity"}}),t._v(". Not required if "),e("code",[t._v("MaximumAttempts")]),t._v(" is specified.")],1),t._v(" "),e("li",[e("code",[t._v("NonRetryableErrorReasons")]),t._v(" allows you to specify errors that shouldn't be retried. For example retrying invalid arguments error doesn't make sense in some scenarios.")])]),t._v(" "),e("p",[t._v("There are scenarios when not a single "),e("Term",{attrs:{term:"activity"}}),t._v(" but rather the whole part of a "),e("Term",{attrs:{term:"workflow"}}),t._v(" should be retried on failure. For example, a media encoding "),e("Term",{attrs:{term:"workflow"}}),t._v(" that downloads a file to a host, processes it, and then uploads the result back to storage. In this "),e("Term",{attrs:{term:"workflow"}}),t._v(", if the host that hosts the "),e("Term",{attrs:{term:"worker"}}),t._v(" dies, all three "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" should be retried on a different host. 
Such retries should be handled by the "),e("Term",{attrs:{term:"workflow"}}),t._v(" code as they are very use case specific.")],1),t._v(" "),e("h2",{attrs:{id:"long-running-activities"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#long-running-activities"}},[t._v("#")]),t._v(" Long Running Activities")]),t._v(" "),e("p",[t._v("For long running "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", we recommended that you specify a relatively short heartbeat timeout and constantly heartbeat. This way "),e("Term",{attrs:{term:"worker"}}),t._v(" failures for even very long running "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" can be handled in a timely manner. An "),e("Term",{attrs:{term:"activity"}}),t._v(" that specifies the heartbeat timeout is expected to call the heartbeat method "),e("em",[t._v("periodically")]),t._v(" from its implementation.")],1),t._v(" "),e("p",[t._v("A heartbeat request can include application specific payload. This is useful to save "),e("Term",{attrs:{term:"activity"}}),t._v(" execution progress. If an "),e("Term",{attrs:{term:"activity"}}),t._v(" times out due to a missed heartbeat, the next attempt to execute it can access that progress and continue its execution from that point.")],1),t._v(" "),e("p",[t._v("Long running "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" can be used as a special case of leader election. Cadence timeouts use second resolution. So it is not a solution for realtime applications. But if it is okay to react to the process failure within a few seconds, then a Cadence heartbeat "),e("Term",{attrs:{term:"activity"}}),t._v(" is a good fit.")],1),t._v(" "),e("p",[t._v("One common use case for such leader election is monitoring. An "),e("Term",{attrs:{term:"activity"}}),t._v(" executes an internal loop that periodically polls some API and checks for some condition. It also heartbeats on every iteration. If the condition is satisfied, the "),e("Term",{attrs:{term:"activity"}}),t._v(" completes which lets its "),e("Term",{attrs:{term:"workflow"}}),t._v(" to handle it. If the "),e("Term",{attrs:{term:"activity_worker"}}),t._v(" dies, the "),e("Term",{attrs:{term:"activity"}}),t._v(" times out after the heartbeat interval is exceeded and is retried on a different "),e("Term",{attrs:{term:"worker"}}),t._v(". The same pattern works for polling for new files in Amazon S3 buckets or responses in REST or other synchronous APIs.")],1),t._v(" "),e("h2",{attrs:{id:"cancellation"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#cancellation"}},[t._v("#")]),t._v(" Cancellation")]),t._v(" "),e("p",[t._v("A "),e("Term",{attrs:{term:"workflow"}}),t._v(" can request an "),e("Term",{attrs:{term:"activity"}}),t._v(" cancellation. Currently the only way for an "),e("Term",{attrs:{term:"activity"}}),t._v(" to learn that it was cancelled is through heart beating. The heartbeat request fails with a special error indicating that the "),e("Term",{attrs:{term:"activity"}}),t._v(" was cancelled. Then it is up to the "),e("Term",{attrs:{term:"activity"}}),t._v(" implementation to perform all the necessary cleanup and report that it is done with it. 
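Here is a minimal sketch of the heartbeating pattern described above, assuming the Go client (go.uber.org/cadence/activity); the polling loop and the conditionMet helper are illustrative. As discussed, a cancellation request also surfaces through the same heartbeat mechanism, here as a closed activity context.

```go
// Minimal sketch of a heartbeating, cancellable activity, assuming the Go client.
package sample

import (
	"context"
	"time"

	"go.uber.org/cadence/activity"
)

func pollActivity(ctx context.Context) error {
	for progress := 0; ; progress++ {
		// Record progress so a retried attempt can resume from it via
		// activity.HasHeartbeatDetails / activity.GetHeartbeatDetails.
		activity.RecordHeartbeat(ctx, progress)
		select {
		case <-ctx.Done():
			// Cancellation (or a timeout) was delivered through heartbeating:
			// perform any cleanup here, then report back.
			return ctx.Err()
		case <-time.After(10 * time.Second):
			if conditionMet() {
				return nil // condition satisfied; the workflow handles it
			}
		}
	}
}

func conditionMet() bool { return false } // placeholder for the real check
```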
It is up to the "),e("Term",{attrs:{term:"workflow"}}),t._v(" implementation to decide if it wants to wait for the "),e("Term",{attrs:{term:"activity"}}),t._v(" cancellation confirmation or just proceed without waiting.")],1),t._v(" "),e("p",[t._v("Another common case for "),e("Term",{attrs:{term:"activity"}}),t._v(" heartbeat failure is that the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invoked it is in a completed state. In this case an "),e("Term",{attrs:{term:"activity"}}),t._v(" is expected to perform cleanup as well.")],1),t._v(" "),e("h2",{attrs:{id:"activity-task-routing-through-task-lists"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#activity-task-routing-through-task-lists"}},[t._v("#")]),t._v(" Activity Task Routing through Task Lists")]),t._v(" "),e("p",[e("Term",{attrs:{term:"activity",show:"Activities"}}),t._v(" are dispatched to "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" through "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(". "),e("Term",{attrs:{term:"task_list",show:"Task_lists"}}),t._v(" are queues that "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" listen on. "),e("Term",{attrs:{term:"task_list",show:"Task_lists"}}),t._v(" are highly dynamic and lightweight. They don't need to be explicitly registered. And it is okay to have one "),e("Term",{attrs:{term:"task_list"}}),t._v(" per "),e("Term",{attrs:{term:"worker"}}),t._v(" process. It is normal to have more than one "),e("Term",{attrs:{term:"activity"}}),t._v(" type to be invoked through a single "),e("Term",{attrs:{term:"task_list"}}),t._v(". And it is normal in some cases (like host routing) to invoke the same "),e("Term",{attrs:{term:"activity"}}),t._v(" type on multiple "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(".")],1),t._v(" "),e("p",[t._v("Here are some use cases for employing multiple "),e("Term",{attrs:{term:"activity_task_list",show:"activity_task_lists"}}),t._v(" in a single workflow:")],1),t._v(" "),e("ul",[e("li",[e("em",[t._v("Flow control")]),t._v(". A "),e("Term",{attrs:{term:"worker"}}),t._v(" that consumes from a "),e("Term",{attrs:{term:"task_list"}}),t._v(" asks for an "),e("Term",{attrs:{term:"activity_task"}}),t._v(" only when it has available capacity. So "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" are never overloaded by request spikes. If "),e("Term",{attrs:{term:"activity"}}),t._v(" executions are requested faster than "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" can process them, they are backlogged in the "),e("Term",{attrs:{term:"task_list"}}),t._v(".")],1),t._v(" "),e("li",[e("em",[t._v("Throttling")]),t._v(". Each "),e("Term",{attrs:{term:"activity_worker"}}),t._v(" can specify the maximum rate it is allowed to processes "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" on a "),e("Term",{attrs:{term:"task_list"}}),t._v(". It does not exceed this limit even if it has spare capacity. There is also support for global "),e("Term",{attrs:{term:"task_list"}}),t._v(" rate limiting. This limit works across all "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" for the given "),e("Term",{attrs:{term:"task_list"}}),t._v(". It is frequently used to limit load on a downstream service that an "),e("Term",{attrs:{term:"activity"}}),t._v(" calls into.")],1),t._v(" "),e("li",[e("em",[t._v("Deploying a set of "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" independently")],1),t._v(". 
Think about a service that hosts "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" and can be deployed independently from other "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" and "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(". To send "),e("Term",{attrs:{term:"activity_task",show:"activity_tasks"}}),t._v(" to this service, a separate "),e("Term",{attrs:{term:"task_list"}}),t._v(" is needed.")],1),t._v(" "),e("li",[e("em",[e("Term",{attrs:{term:"worker",show:"Workers"}}),t._v(" with different capabilities")],1),t._v(". For example, "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" on GPU boxes vs non GPU boxes. Having two separate "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(" in this case allows "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" to pick which one to send "),e("Term",{attrs:{term:"activity"}}),t._v(" an execution request to.")],1),t._v(" "),e("li",[e("em",[t._v("Routing "),e("Term",{attrs:{term:"activity"}}),t._v(" to a specific host")],1),t._v(". For example, in the media encoding case the transform and upload "),e("Term",{attrs:{term:"activity"}}),t._v(" have to run on the same host as the download one.")],1),t._v(" "),e("li",[e("em",[t._v("Routing "),e("Term",{attrs:{term:"activity"}}),t._v(" to a specific process")],1),t._v(". For example, some "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" load large data sets and caches it in the process. The "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" that rely on this data set should be routed to the same process.")],1),t._v(" "),e("li",[e("em",[t._v("Multiple priorities")]),t._v(". One "),e("Term",{attrs:{term:"task_list"}}),t._v(" per priority and having a "),e("Term",{attrs:{term:"worker"}}),t._v(" pool per priority.")],1),t._v(" "),e("li",[e("em",[t._v("Versioning")]),t._v(". A new backwards incompatible implementation of an "),e("Term",{attrs:{term:"activity"}}),t._v(" might use a different "),e("Term",{attrs:{term:"task_list"}}),t._v(".")],1)]),t._v(" "),e("h2",{attrs:{id:"asynchronous-activity-completion"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#asynchronous-activity-completion"}},[t._v("#")]),t._v(" Asynchronous Activity Completion")]),t._v(" "),e("p",[t._v("By default an "),e("Term",{attrs:{term:"activity"}}),t._v(" is a function or a method depending on a client side library language. As soon as the function returns, an "),e("Term",{attrs:{term:"activity"}}),t._v(" completes. But in some cases an "),e("Term",{attrs:{term:"activity"}}),t._v(" implementation is asynchronous. For example it is forwarded to an external system through a message queue. And the reply comes through a different queue.")],1),t._v(" "),e("p",[t._v("To support such use cases, Cadence allows "),e("Term",{attrs:{term:"activity"}}),t._v(" implementations that do not complete upon "),e("Term",{attrs:{term:"activity"}}),t._v(" function completions. A separate API should be used in this case to complete the "),e("Term",{attrs:{term:"activity"}}),t._v(". 
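Here is a minimal sketch of such asynchronous completion, assuming the Go client; sendToExternalSystem and onExternalReply are hypothetical hand-off points, not Cadence APIs.

```go
// Minimal sketch of asynchronous activity completion, assuming the Go client.
package sample

import (
	"context"

	"go.uber.org/cadence/activity"
	"go.uber.org/cadence/client"
)

// The activity forwards its task token to an external system and returns
// activity.ErrResultPending, telling Cadence the result will arrive later.
func enqueueActivity(ctx context.Context, request string) (string, error) {
	token := activity.GetInfo(ctx).TaskToken
	sendToExternalSystem(request, token)
	return "", activity.ErrResultPending
}

// A different process completes the activity when the external reply arrives.
func onExternalReply(c client.Client, token []byte, result string) error {
	return c.CompleteActivity(context.Background(), token, result, nil)
}

func sendToExternalSystem(request string, token []byte) {} // hypothetical hand-off
```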
This API can be called from any process, even in a different programming language, that the original "),e("Term",{attrs:{term:"activity_worker"}}),t._v(" used.")],1),t._v(" "),e("h2",{attrs:{id:"local-activities"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#local-activities"}},[t._v("#")]),t._v(" Local Activities")]),t._v(" "),e("p",[t._v("Some of the "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" are very short lived and do not need the queing semantic, flow control, rate limiting and routing capabilities. For these Cadence supports so called "),e("em",[e("Term",{attrs:{term:"local_activity"}})],1),t._v(" feature. "),e("Term",{attrs:{term:"local_activity",show:"Local_activities"}}),t._v(" are executed in the same "),e("Term",{attrs:{term:"worker"}}),t._v(" process as the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invoked them.")],1),t._v(" "),e("p",[t._v("What you will trade off by using local activities")]),t._v(" "),e("ul",[e("li",[t._v("Less Debuggability: There is no ActivityTaskScheduled and ActivityTaskStarted events. So you would not able to see the input.")]),t._v(" "),e("li",[t._v("No tasklist dispatching: The worker is always the same as the workflow decision worker. You don't have a choice of using activity workers.")]),t._v(" "),e("li",[t._v("More possibility of duplicated execution. Though regular activity could also execute multiple times when using retry policy, local activity has more chance of ocurring. Because local activity result is not recorded into history until DecisionTaskCompleted. Also when executing multiple local activities in a row, SDK(Java+Golang) would optimize recording in a way that only recording by interval(before current decision task timeout).")]),t._v(" "),e("li",[t._v("No long running capability with record heartbeat")]),t._v(" "),e("li",[t._v("No Tasklist global ratelimiting")])]),t._v(" "),e("p",[t._v("Consider using "),e("Term",{attrs:{term:"local_activity",show:"local_activities"}}),t._v(" for functions that are:")],1),t._v(" "),e("ul",[e("li",[t._v("idempotent")]),t._v(" "),e("li",[t._v("no longer than a few seconds")]),t._v(" "),e("li",[t._v("do not require global rate limiting")]),t._v(" "),e("li",[t._v("do not require routing to specific "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" or pools of "),e("Term",{attrs:{term:"worker",show:"workers"}})],1),t._v(" "),e("li",[t._v("can be implemented in the same binary as the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invokes them")],1),t._v(" "),e("li",[t._v("non business critical so that losing some debuggability is okay(e.g. logging, loading config)")]),t._v(" "),e("li",[t._v("when you really need optimization. For example, if there are many timers firing at the same time to invoke activities, it could overload Cadence's server. 
Using local activities can help save the server capacity.")])]),t._v(" "),e("p",[t._v("The main benefit of "),e("Term",{attrs:{term:"local_activity",show:"local_activities"}}),t._v(" is that they are much more efficient in utilizing Cadence service resources and have much lower latency overhead comparing to the usual "),e("Term",{attrs:{term:"activity"}}),t._v(" invocation.")],1)])}),[],!1,null,null,null);e.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[45],{351:function(t,e,r){"use strict";r.r(e);var i=r(0),a=Object(i.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"activities"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#activities"}},[t._v("#")]),t._v(" Activities")]),t._v(" "),e("p",[t._v("Fault-oblivious stateful "),e("Term",{attrs:{term:"workflow"}}),t._v(" code is the core abstraction of Cadence. But, due to deterministic execution requirements, they are not allowed to call any external API directly.\nInstead they orchestrate execution of "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(". In its simplest form, a Cadence "),e("Term",{attrs:{term:"activity"}}),t._v(" is a function or an object method in one of the supported languages.\nCadence does not recover "),e("Term",{attrs:{term:"activity"}}),t._v(" state in case of failures. Therefore an "),e("Term",{attrs:{term:"activity"}}),t._v(" function is allowed to contain any code without restrictions.")],1),t._v(" "),e("p",[e("Term",{attrs:{term:"activity",show:"Activities"}}),t._v(" are invoked asynchronously through "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(". A "),e("Term",{attrs:{term:"task_list"}}),t._v(" is essentially a queue used to store an "),e("Term",{attrs:{term:"activity_task"}}),t._v(" until it is picked up by an available "),e("Term",{attrs:{term:"worker"}}),t._v(". The "),e("Term",{attrs:{term:"worker"}}),t._v(" processes an "),e("Term",{attrs:{term:"activity"}}),t._v(" by invoking its implementation function. When the function returns, the "),e("Term",{attrs:{term:"worker"}}),t._v(" reports the result back to the Cadence service which in turn notifies the "),e("Term",{attrs:{term:"workflow"}}),t._v(" about completion. It is possible to implement an "),e("Term",{attrs:{term:"activity"}}),t._v(" fully asynchronously by completing it from a different process.")],1),t._v(" "),e("h2",{attrs:{id:"timeouts"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#timeouts"}},[t._v("#")]),t._v(" Timeouts")]),t._v(" "),e("p",[t._v("Cadence does not impose any system limit on "),e("Term",{attrs:{term:"activity"}}),t._v(" duration. It is up to the application to choose the timeouts for its execution. These are the configurable "),e("Term",{attrs:{term:"activity"}}),t._v(" timeouts:")],1),t._v(" "),e("ul",[e("li",[e("code",[t._v("ScheduleToStart")]),t._v(" is the maximum time from a "),e("Term",{attrs:{term:"workflow"}}),t._v(" requesting "),e("Term",{attrs:{term:"activity"}}),t._v(" execution to a "),e("Term",{attrs:{term:"worker"}}),t._v(" starting its execution. The usual reason for this timeout to fire is all "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" being down or not being able to keep up with the request rate. 
We recommend setting this timeout to the maximum time a "),e("Term",{attrs:{term:"workflow"}}),t._v(" is willing to wait for an "),e("Term",{attrs:{term:"activity"}}),t._v(" execution in the presence of all possible "),e("Term",{attrs:{term:"worker"}}),t._v(" outages.")],1),t._v(" "),e("li",[e("code",[t._v("StartToClose")]),t._v(" is the maximum time an "),e("Term",{attrs:{term:"activity"}}),t._v(" can execute after it was picked by a "),e("Term",{attrs:{term:"worker"}}),t._v(".")],1),t._v(" "),e("li",[e("code",[t._v("ScheduleToClose")]),t._v(" is the maximum time from the "),e("Term",{attrs:{term:"workflow"}}),t._v(" requesting an "),e("Term",{attrs:{term:"activity"}}),t._v(" execution to its completion.")],1),t._v(" "),e("li",[e("code",[t._v("Heartbeat")]),t._v(" is the maximum time between heartbeat requests. See "),e("a",{attrs:{href:"#long-running-activities"}},[t._v("Long Running Activities")]),t._v(".")])]),t._v(" "),e("p",[t._v("Either "),e("code",[t._v("ScheduleToClose")]),t._v(" or both "),e("code",[t._v("ScheduleToStart")]),t._v(" and "),e("code",[t._v("StartToClose")]),t._v(" timeouts are required.")]),t._v(" "),e("p",[t._v("Timeouts are key to managing activities. For more tips on how to set proper timeouts, read this "),e("a",{attrs:{href:"https://stackoverflow.com/questions/65139178/how-to-set-proper-timeout-values-for-cadence-activitieslocal-and-regular-activi/65139179#65139179",target:"_blank",rel:"noopener noreferrer"}},[t._v("Stack Overflow QA"),e("OutboundLink")],1),t._v(".")]),t._v(" "),e("h2",{attrs:{id:"retries"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#retries"}},[t._v("#")]),t._v(" Retries")]),t._v(" "),e("p",[t._v("As Cadence doesn't recover an "),e("Term",{attrs:{term:"activity"}}),t._v("'s state, and activities can communicate with any external system, failures are expected. Therefore, Cadence supports automatic "),e("Term",{attrs:{term:"activity"}}),t._v(" retries. Any "),e("Term",{attrs:{term:"activity"}}),t._v(", when invoked, can have an associated retry policy. Here are the retry policy parameters:")],1),t._v(" "),e("ul",[e("li",[e("code",[t._v("InitialInterval")]),t._v(" is a delay before the first retry.")]),t._v(" "),e("li",[e("code",[t._v("BackoffCoefficient")]),t._v(". Retry policies are exponential. The coefficient specifies how fast the retry interval grows. A coefficient of 1 means that the retry interval is always equal to the "),e("code",[t._v("InitialInterval")]),t._v(".")]),t._v(" "),e("li",[e("code",[t._v("MaximumInterval")]),t._v(" specifies the maximum interval between retries. Useful for coefficients greater than 1.")]),t._v(" "),e("li",[e("code",[t._v("MaximumAttempts")]),t._v(" specifies how many times to attempt to execute an "),e("Term",{attrs:{term:"activity"}}),t._v(" in the presence of failures. If this limit is exceeded, the error is returned back to the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invoked the "),e("Term",{attrs:{term:"activity"}}),t._v(". Not required if "),e("code",[t._v("ExpirationInterval")]),t._v(" is specified.")],1),t._v(" "),e("li",[e("code",[t._v("ExpirationInterval")]),t._v(" specifies for how long to attempt executing an "),e("Term",{attrs:{term:"activity"}}),t._v(" in the presence of failures. If this interval is exceeded, the error is returned back to the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invoked the "),e("Term",{attrs:{term:"activity"}}),t._v(". Not required if "),e("code",[t._v("MaximumAttempts")]),t._v(" is specified.")],1),t._v(" "),e("li",[e("code",[t._v("NonRetryableErrorReasons")]),t._v(" allows you to specify errors that shouldn't be retried. For example, retrying an invalid-arguments error doesn't make sense in some scenarios.")])]),t._v(" "),
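To make these parameters concrete, here is a minimal sketch of how the timeouts and retry policy above might be wired up in the Go client. The activity, the values, and the error reason are illustrative only, and note that the Go client spells the last option `NonRetriableErrorReasons`:

```go
import (
	"context"
	"time"

	"go.uber.org/cadence"
	"go.uber.org/cadence/workflow"
)

// ProcessOrderActivity is a hypothetical activity used only for illustration.
func ProcessOrderActivity(ctx context.Context, orderID string) error {
	return nil
}

// OrderWorkflow invokes the activity with explicit timeouts and an
// exponential retry policy, mirroring the parameters described above.
func OrderWorkflow(ctx workflow.Context, orderID string) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,      // max wait for a worker to pick the task up
		StartToCloseTimeout:    5 * time.Minute,  // max execution time on a worker
		HeartbeatTimeout:       30 * time.Second, // max time between heartbeats
		RetryPolicy: &cadence.RetryPolicy{
			InitialInterval:          time.Second,
			BackoffCoefficient:       2.0,
			MaximumInterval:          time.Minute,
			ExpirationInterval:       10 * time.Minute,
			NonRetriableErrorReasons: []string{"bad-request"}, // hypothetical custom error reason
		},
	}
	ctx = workflow.WithActivityOptions(ctx, ao)
	return workflow.ExecuteActivity(ctx, ProcessOrderActivity, orderID).Get(ctx, nil)
}
```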
e("p",[t._v("There are scenarios when not a single "),e("Term",{attrs:{term:"activity"}}),t._v(" but rather a whole part of a "),e("Term",{attrs:{term:"workflow"}}),t._v(" should be retried on failure. For example, a media encoding "),e("Term",{attrs:{term:"workflow"}}),t._v(" that downloads a file to a host, processes it, and then uploads the result back to storage. In this "),e("Term",{attrs:{term:"workflow"}}),t._v(", if the host that hosts the "),e("Term",{attrs:{term:"worker"}}),t._v(" dies, all three "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" should be retried on a different host. Such retries should be handled by the "),e("Term",{attrs:{term:"workflow"}}),t._v(" code as they are very use-case specific.")],1),t._v(" "),e("h2",{attrs:{id:"long-running-activities"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#long-running-activities"}},[t._v("#")]),t._v(" Long Running Activities")]),t._v(" "),e("p",[t._v("For long running "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", we recommend that you specify a relatively short heartbeat timeout and constantly heartbeat. This way, "),e("Term",{attrs:{term:"worker"}}),t._v(" failures for even very long running "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" can be handled in a timely manner. An "),e("Term",{attrs:{term:"activity"}}),t._v(" that specifies the heartbeat timeout is expected to call the heartbeat method "),e("em",[t._v("periodically")]),t._v(" from its implementation.")],1),t._v(" "),e("p",[t._v("A heartbeat request can include an application-specific payload. This is useful to save "),e("Term",{attrs:{term:"activity"}}),t._v(" execution progress. If an "),e("Term",{attrs:{term:"activity"}}),t._v(" times out due to a missed heartbeat, the next attempt to execute it can access that progress and continue its execution from that point.")],1),t._v(" "),e("p",[t._v("Long running "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" can be used as a special case of leader election. Cadence timeouts use second resolution, so it is not a solution for real-time applications. But if it is okay to react to the process failure within a few seconds, then a Cadence heartbeat "),e("Term",{attrs:{term:"activity"}}),t._v(" is a good fit.")],1),t._v(" "),e("p",[t._v("One common use case for such leader election is monitoring. An "),e("Term",{attrs:{term:"activity"}}),t._v(" executes an internal loop that periodically polls some API and checks for some condition. It also heartbeats on every iteration. If the condition is satisfied, the "),e("Term",{attrs:{term:"activity"}}),t._v(" completes, which lets its "),e("Term",{attrs:{term:"workflow"}}),t._v(" handle it. If the "),e("Term",{attrs:{term:"activity_worker"}}),t._v(" dies, the "),e("Term",{attrs:{term:"activity"}}),t._v(" times out after the heartbeat interval is exceeded and is retried on a different "),e("Term",{attrs:{term:"worker"}}),t._v(". The same pattern works for polling for new files in Amazon S3 buckets or responses in REST or other synchronous APIs.")],1),t._v(" "),
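A minimal Go sketch of such a monitoring activity follows; the polling helper and the interval are illustrative, while the heartbeat calls are the standard `go.uber.org/cadence/activity` APIs:

```go
import (
	"context"
	"time"

	"go.uber.org/cadence/activity"
)

// checkCondition is a hypothetical poll of some external API.
func checkCondition(resourceID string) (bool, error) { return false, nil }

// PollUntilReady implements the monitoring pattern described above: it polls
// a condition in an internal loop and heartbeats on every iteration. If a
// previous attempt timed out, it resumes from the recorded progress.
func PollUntilReady(ctx context.Context, resourceID string) error {
	iteration := 0
	if activity.HasHeartbeatDetails(ctx) {
		// Best effort: start from scratch if the recorded details don't decode.
		_ = activity.GetHeartbeatDetails(ctx, &iteration)
	}
	for {
		ready, err := checkCondition(resourceID)
		if err != nil {
			return err
		}
		if ready {
			return nil // condition met; the workflow is notified of completion
		}
		iteration++
		// Record progress; cancellation and workflow completion surface via ctx.
		activity.RecordHeartbeat(ctx, iteration)
		select {
		case <-ctx.Done():
			return ctx.Err() // cancelled or timed out: perform cleanup here
		case <-time.After(10 * time.Second):
		}
	}
}
```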
e("h2",{attrs:{id:"cancellation"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#cancellation"}},[t._v("#")]),t._v(" Cancellation")]),t._v(" "),e("p",[t._v("A "),e("Term",{attrs:{term:"workflow"}}),t._v(" can request an "),e("Term",{attrs:{term:"activity"}}),t._v(" cancellation. Currently, the only way for an "),e("Term",{attrs:{term:"activity"}}),t._v(" to learn that it was cancelled is through heartbeating. The heartbeat request fails with a special error indicating that the "),e("Term",{attrs:{term:"activity"}}),t._v(" was cancelled. Then it is up to the "),e("Term",{attrs:{term:"activity"}}),t._v(" implementation to perform all the necessary cleanup and report that the cleanup is done. It is up to the "),e("Term",{attrs:{term:"workflow"}}),t._v(" implementation to decide if it wants to wait for the "),e("Term",{attrs:{term:"activity"}}),t._v(" cancellation confirmation or just proceed without waiting.")],1),t._v(" "),e("p",[t._v("Another common case for "),e("Term",{attrs:{term:"activity"}}),t._v(" heartbeat failure is that the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invoked it is in a completed state. In this case an "),e("Term",{attrs:{term:"activity"}}),t._v(" is expected to perform cleanup as well.")],1),t._v(" "),e("h2",{attrs:{id:"activity-task-routing-through-task-lists"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#activity-task-routing-through-task-lists"}},[t._v("#")]),t._v(" Activity Task Routing through Task Lists")]),t._v(" "),e("p",[e("Term",{attrs:{term:"activity",show:"Activities"}}),t._v(" are dispatched to "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" through "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(". "),e("Term",{attrs:{term:"task_list",show:"Task_lists"}}),t._v(" are queues that "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" listen on. "),e("Term",{attrs:{term:"task_list",show:"Task_lists"}}),t._v(" are highly dynamic and lightweight. They don't need to be explicitly registered, and it is okay to have one "),e("Term",{attrs:{term:"task_list"}}),t._v(" per "),e("Term",{attrs:{term:"worker"}}),t._v(" process. It is normal for more than one "),e("Term",{attrs:{term:"activity"}}),t._v(" type to be invoked through a single "),e("Term",{attrs:{term:"task_list"}}),t._v(", and it is normal in some cases (like host routing) to invoke the same "),e("Term",{attrs:{term:"activity"}}),t._v(" type on multiple "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(".")],1),t._v(" "),e("p",[t._v("Here are some use cases for employing multiple "),e("Term",{attrs:{term:"activity_task_list",show:"activity_task_lists"}}),t._v(" in a single workflow (a worker-side sketch of the first two follows this list):")],1),t._v(" "),e("ul",[e("li",[e("em",[t._v("Flow control")]),t._v(". A "),e("Term",{attrs:{term:"worker"}}),t._v(" that consumes from a "),e("Term",{attrs:{term:"task_list"}}),t._v(" asks for an "),e("Term",{attrs:{term:"activity_task"}}),t._v(" only when it has available capacity. So "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" are never overloaded by request spikes. If "),e("Term",{attrs:{term:"activity"}}),t._v(" executions are requested faster than "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" can process them, they are backlogged in the "),e("Term",{attrs:{term:"task_list"}}),t._v(".")],1),t._v(" "),e("li",[e("em",[t._v("Throttling")]),t._v(". 
Each "),e("Term",{attrs:{term:"activity_worker"}}),t._v(" can specify the maximum rate it is allowed to processes "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" on a "),e("Term",{attrs:{term:"task_list"}}),t._v(". It does not exceed this limit even if it has spare capacity. There is also support for global "),e("Term",{attrs:{term:"task_list"}}),t._v(" rate limiting. This limit works across all "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" for the given "),e("Term",{attrs:{term:"task_list"}}),t._v(". It is frequently used to limit load on a downstream service that an "),e("Term",{attrs:{term:"activity"}}),t._v(" calls into.")],1),t._v(" "),e("li",[e("em",[t._v("Deploying a set of "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" independently")],1),t._v(". Think about a service that hosts "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" and can be deployed independently from other "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" and "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(". To send "),e("Term",{attrs:{term:"activity_task",show:"activity_tasks"}}),t._v(" to this service, a separate "),e("Term",{attrs:{term:"task_list"}}),t._v(" is needed.")],1),t._v(" "),e("li",[e("em",[e("Term",{attrs:{term:"worker",show:"Workers"}}),t._v(" with different capabilities")],1),t._v(". For example, "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" on GPU boxes vs non GPU boxes. Having two separate "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(" in this case allows "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" to pick which one to send "),e("Term",{attrs:{term:"activity"}}),t._v(" an execution request to.")],1),t._v(" "),e("li",[e("em",[t._v("Routing "),e("Term",{attrs:{term:"activity"}}),t._v(" to a specific host")],1),t._v(". For example, in the media encoding case the transform and upload "),e("Term",{attrs:{term:"activity"}}),t._v(" have to run on the same host as the download one.")],1),t._v(" "),e("li",[e("em",[t._v("Routing "),e("Term",{attrs:{term:"activity"}}),t._v(" to a specific process")],1),t._v(". For example, some "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" load large data sets and caches it in the process. The "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" that rely on this data set should be routed to the same process.")],1),t._v(" "),e("li",[e("em",[t._v("Multiple priorities")]),t._v(". One "),e("Term",{attrs:{term:"task_list"}}),t._v(" per priority and having a "),e("Term",{attrs:{term:"worker"}}),t._v(" pool per priority.")],1),t._v(" "),e("li",[e("em",[t._v("Versioning")]),t._v(". A new backwards incompatible implementation of an "),e("Term",{attrs:{term:"activity"}}),t._v(" might use a different "),e("Term",{attrs:{term:"task_list"}}),t._v(".")],1)]),t._v(" "),e("h2",{attrs:{id:"asynchronous-activity-completion"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#asynchronous-activity-completion"}},[t._v("#")]),t._v(" Asynchronous Activity Completion")]),t._v(" "),e("p",[t._v("By default an "),e("Term",{attrs:{term:"activity"}}),t._v(" is a function or a method depending on a client side library language. As soon as the function returns, an "),e("Term",{attrs:{term:"activity"}}),t._v(" completes. But in some cases an "),e("Term",{attrs:{term:"activity"}}),t._v(" implementation is asynchronous. For example it is forwarded to an external system through a message queue. 
e("h2",{attrs:{id:"asynchronous-activity-completion"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#asynchronous-activity-completion"}},[t._v("#")]),t._v(" Asynchronous Activity Completion")]),t._v(" "),e("p",[t._v("By default an "),e("Term",{attrs:{term:"activity"}}),t._v(" is a function or a method, depending on the client-side library's language. As soon as the function returns, an "),e("Term",{attrs:{term:"activity"}}),t._v(" completes. But in some cases an "),e("Term",{attrs:{term:"activity"}}),t._v(" implementation is asynchronous. For example, it is forwarded to an external system through a message queue, and the reply comes through a different queue.")],1),t._v(" "),e("p",[t._v("To support such use cases, Cadence allows "),e("Term",{attrs:{term:"activity"}}),t._v(" implementations that do not complete upon "),e("Term",{attrs:{term:"activity"}}),t._v(" function completion. A separate API should be used in this case to complete the "),e("Term",{attrs:{term:"activity"}}),t._v(". This API can be called from any process, even one written in a different programming language than the one the original "),e("Term",{attrs:{term:"activity_worker"}}),t._v(" used.")],1),t._v(" "),e("h2",{attrs:{id:"local-activities"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#local-activities"}},[t._v("#")]),t._v(" Local Activities")]),t._v(" "),e("p",[t._v("Some "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" are very short-lived and do not need the queueing semantics, flow control, rate limiting, and routing capabilities. For these, Cadence supports the so-called "),e("em",[e("Term",{attrs:{term:"local_activity"}})],1),t._v(" feature. "),e("Term",{attrs:{term:"local_activity",show:"Local_activities"}}),t._v(" are executed in the same "),e("Term",{attrs:{term:"worker"}}),t._v(" process as the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invoked them.")],1),t._v(" "),e("p",[t._v("What you trade off by using local activities:")]),t._v(" "),e("ul",[e("li",[t._v("Less debuggability: there are no ActivityTaskScheduled and ActivityTaskStarted events, so you are not able to see the input.")]),t._v(" "),e("li",[t._v("No task list dispatching: the worker is always the same as the workflow decision worker. You don't have the option of using separate activity workers.")]),t._v(" "),e("li",[t._v("More possibility of duplicated execution: although a regular activity can also execute multiple times when using a retry policy, duplication is more likely for a local activity, because a local activity result is not recorded into history until DecisionTaskCompleted. Also, when executing multiple local activities in a row, the SDK (Java and Go) optimizes recording so that results are only recorded at intervals (before the current decision task timeout).")]),t._v(" "),e("li",[t._v("No long-running capability, because there is no heartbeat recording.")]),t._v(" "),e("li",[t._v("No task list global rate limiting.")])]),t._v(" "),e("p",[t._v("Consider using "),e("Term",{attrs:{term:"local_activity",show:"local_activities"}}),t._v(" for functions that are:")],1),t._v(" "),e("ul",[e("li",[t._v("idempotent")]),t._v(" "),e("li",[t._v("no longer than a few seconds")]),t._v(" "),e("li",[t._v("not in need of global rate limiting")]),t._v(" "),e("li",[t._v("not in need of routing to specific "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" or pools of "),e("Term",{attrs:{term:"worker",show:"workers"}})],1),t._v(" "),e("li",[t._v("implementable in the same binary as the "),e("Term",{attrs:{term:"workflow"}}),t._v(" that invokes them")],1),t._v(" "),e("li",[t._v("not business critical, so that losing some debuggability is okay (e.g. logging, loading config)")]),t._v(" "),e("li",[t._v("needed for optimization: for example, if there are many timers firing at the same time to invoke activities, this could overload Cadence's server. Using local activities can help save server capacity. A sketch of invoking a local activity follows this list.")])]),t._v(" "),
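A minimal Go sketch of invoking a local activity; the function, its result, and the timeout are illustrative:

```go
import (
	"context"
	"time"

	"go.uber.org/cadence/workflow"
)

// GetConfig is a hypothetical short-lived, idempotent function that is a good
// local activity candidate (e.g. loading configuration).
func GetConfig(ctx context.Context) (string, error) {
	return "config-value", nil
}

// SampleLocalActivityWorkflow invokes GetConfig as a local activity, so it
// runs in the same worker process as the workflow itself.
func SampleLocalActivityWorkflow(ctx workflow.Context) (string, error) {
	lao := workflow.LocalActivityOptions{
		ScheduleToCloseTimeout: 5 * time.Second, // local activities should stay short
	}
	ctx = workflow.WithLocalActivityOptions(ctx, lao)

	var cfg string
	err := workflow.ExecuteLocalActivity(ctx, GetConfig).Get(ctx, &cfg)
	return cfg, err
}
```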
e("p",[t._v("The main benefit of "),e("Term",{attrs:{term:"local_activity",show:"local_activities"}}),t._v(" is that they are much more efficient in utilizing Cadence service resources and have much lower latency overhead compared to the usual "),e("Term",{attrs:{term:"activity"}}),t._v(" invocation.")],1)])}),[],!1,null,null,null);e.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/46.985696e3.js b/assets/js/46.260bf22f.js similarity index 99% rename from assets/js/46.985696e3.js rename to assets/js/46.260bf22f.js index 909413db3..afd350da4 100644 --- a/assets/js/46.985696e3.js +++ b/assets/js/46.260bf22f.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[46],{351:function(e,t,r){"use strict";r.r(t);var a=r(0),s=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"event-handling"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#event-handling"}},[e._v("#")]),e._v(" Event handling")]),e._v(" "),t("p",[e._v("Fault-oblivious stateful "),t("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" can be "),t("Term",{attrs:{term:"signal",show:"signalled"}}),e._v(" about an external "),t("Term",{attrs:{term:"event"}}),e._v(". A "),t("Term",{attrs:{term:"signal"}}),e._v(" is always point to point destined to a specific "),t("Term",{attrs:{term:"workflow"}}),e._v(" instance. "),t("Term",{attrs:{term:"signal",show:"Signals"}}),e._v(" are always processed in the order in which they are received.")],1),e._v(" "),t("p",[e._v("There are multiple scenarios for which "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v(" are useful.")],1),e._v(" "),t("h2",{attrs:{id:"event-aggregation-and-correlation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#event-aggregation-and-correlation"}},[e._v("#")]),e._v(" Event Aggregation and Correlation")]),e._v(" "),t("p",[e._v("Cadence is not a replacement for generic stream processing engines like Apache Flink or Apache Spark. But in certain scenarios it is a better fit. For example, when all "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" that should be aggregated and correlated are always applied to some business entity with a clear ID. And then when a certain condition is met, actions should be executed.")],1),e._v(" "),t("p",[e._v("The main limitation is that a single Cadence "),t("Term",{attrs:{term:"workflow"}}),e._v(" has a pretty limited throughput, while the number of "),t("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" is practically unlimited. So if you need to aggregate "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" per customer, and your application has 100 million customers and each customer doesn't generate more than 20 "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" per second, then Cadence would work fine. But if you want to aggregate all "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" for US customers then the rate of these "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" would be beyond the single "),t("Term",{attrs:{term:"workflow"}}),e._v(" capacity.")],1),e._v(" "),t("p",[e._v("For example, an IoT device generates "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" and a certain sequence of "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" indicates that the device should be reprovisioned. 
A "),t("Term",{attrs:{term:"workflow"}}),e._v(" instance per device would be created and each instance would manage the state machine of the device and execute reprovision "),t("Term",{attrs:{term:"activity"}}),e._v(" when necessary.")],1),e._v(" "),t("p",[e._v("Another use case is a customer loyalty program. Every time a customer makes a purchase, an "),t("Term",{attrs:{term:"event"}}),e._v(" is generated into Apache Kafka for downstream systems to process. A loyalty service Kafka consumer receives the "),t("Term",{attrs:{term:"event"}}),e._v(" and "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v(" a customer "),t("Term",{attrs:{term:"workflow"}}),e._v(" about the purchase using the Cadence "),t("code",[e._v("signalWorkflowExecution")]),e._v(" API. The "),t("Term",{attrs:{term:"workflow"}}),e._v(" accumulates the count of the purchases. If a specified threshold is achieved, the "),t("Term",{attrs:{term:"workflow"}}),e._v(" executes an "),t("Term",{attrs:{term:"activity"}}),e._v(" that notifies some external service that the customer has reached the next level of loyalty program. The "),t("Term",{attrs:{term:"workflow"}}),e._v(" also executes "),t("Term",{attrs:{term:"activity",show:"activities"}}),e._v(" to periodically message the customer about their current status.")],1),e._v(" "),t("h2",{attrs:{id:"human-tasks"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#human-tasks"}},[e._v("#")]),e._v(" Human Tasks")]),e._v(" "),t("p",[e._v("A lot of business processes involve human participants. The standard Cadence pattern for implementing an external interaction is to execute an "),t("Term",{attrs:{term:"activity"}}),e._v(" that creates a human "),t("Term",{attrs:{term:"task"}}),e._v(" in an external system. It can be an email with a form, or a record in some external database, or a mobile app notification. When a user changes the status of the "),t("Term",{attrs:{term:"task"}}),e._v(", a "),t("Term",{attrs:{term:"signal"}}),e._v(" is sent to the corresponding "),t("Term",{attrs:{term:"workflow"}}),e._v(". For example, when the form is submitted, or a mobile app notification is acknowledged. Some "),t("Term",{attrs:{term:"task",show:"tasks"}}),e._v(" have multiple possible actions like claim, return, complete, reject. So multiple "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v(" can be sent in relation to it.")],1),e._v(" "),t("h2",{attrs:{id:"process-execution-alteration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#process-execution-alteration"}},[e._v("#")]),e._v(" Process Execution Alteration")]),e._v(" "),t("p",[e._v("Some business processes should change their behavior if some external "),t("Term",{attrs:{term:"event"}}),e._v(" has happened. For example, while executing an order shipment "),t("Term",{attrs:{term:"workflow"}}),e._v(", any change in item quantity could be delivered in a form of a "),t("Term",{attrs:{term:"signal"}}),e._v(".")],1),e._v(" "),t("p",[e._v("Another example is a service deployment "),t("Term",{attrs:{term:"workflow"}}),e._v(". While rolling out new software version to a Kubernetes cluster some problem was identified. A "),t("Term",{attrs:{term:"signal"}}),e._v(" can be used to ask the "),t("Term",{attrs:{term:"workflow"}}),e._v(" to pause while the problem is investigated. 
Then either a continue or a rollback "),t("Term",{attrs:{term:"signal"}}),e._v(" can be used to execute the appropriate action.")],1),e._v(" "),t("h2",{attrs:{id:"synchronization"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#synchronization"}},[e._v("#")]),e._v(" Synchronization")]),e._v(" "),t("p",[e._v("Cadence "),t("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" are strongly consistent so they can be used as a synchronization point for executing actions. For example, there is a requirement that all messages for a single user are processed sequentially but the underlying messaging infrastructure can deliver them in parallel. The Cadence solution would be to have a "),t("Term",{attrs:{term:"workflow"}}),e._v(" per user and "),t("Term",{attrs:{term:"signal"}}),e._v(" it when an "),t("Term",{attrs:{term:"event"}}),e._v(" is received. Then the "),t("Term",{attrs:{term:"workflow"}}),e._v(" would buffer all "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v(" in an internal data structure and then call an "),t("Term",{attrs:{term:"activity"}}),e._v(" for every "),t("Term",{attrs:{term:"signal"}}),e._v(" received. See the following "),t("a",{attrs:{href:"https://stackoverflow.com/a/56615120/1664318",target:"_blank",rel:"noopener noreferrer"}},[e._v("Stack Overflow answer"),t("OutboundLink")],1),e._v(" for an example.")],1)])}),[],!1,null,null,null);t.default=s.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[46],{352:function(e,t,r){"use strict";r.r(t);var a=r(0),s=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"event-handling"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#event-handling"}},[e._v("#")]),e._v(" Event handling")]),e._v(" "),t("p",[e._v("Fault-oblivious stateful "),t("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" can be "),t("Term",{attrs:{term:"signal",show:"signalled"}}),e._v(" about an external "),t("Term",{attrs:{term:"event"}}),e._v(". A "),t("Term",{attrs:{term:"signal"}}),e._v(" is always point to point destined to a specific "),t("Term",{attrs:{term:"workflow"}}),e._v(" instance. "),t("Term",{attrs:{term:"signal",show:"Signals"}}),e._v(" are always processed in the order in which they are received.")],1),e._v(" "),t("p",[e._v("There are multiple scenarios for which "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v(" are useful.")],1),e._v(" "),t("h2",{attrs:{id:"event-aggregation-and-correlation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#event-aggregation-and-correlation"}},[e._v("#")]),e._v(" Event Aggregation and Correlation")]),e._v(" "),t("p",[e._v("Cadence is not a replacement for generic stream processing engines like Apache Flink or Apache Spark. But in certain scenarios it is a better fit. For example, when all "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" that should be aggregated and correlated are always applied to some business entity with a clear ID. And then when a certain condition is met, actions should be executed.")],1),e._v(" "),t("p",[e._v("The main limitation is that a single Cadence "),t("Term",{attrs:{term:"workflow"}}),e._v(" has a pretty limited throughput, while the number of "),t("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" is practically unlimited. 
So if you need to aggregate "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" per customer, and your application has 100 million customers and each customer doesn't generate more than 20 "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" per second, then Cadence would work fine. But if you want to aggregate all "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" for US customers, then the rate of these "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" would be beyond the capacity of a single "),t("Term",{attrs:{term:"workflow"}}),e._v(".")],1),e._v(" "),t("p",[e._v("For example, an IoT device generates "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" and a certain sequence of "),t("Term",{attrs:{term:"event",show:"events"}}),e._v(" indicates that the device should be reprovisioned. A "),t("Term",{attrs:{term:"workflow"}}),e._v(" instance per device would be created and each instance would manage the state machine of the device and execute a reprovision "),t("Term",{attrs:{term:"activity"}}),e._v(" when necessary.")],1),e._v(" "),t("p",[e._v("Another use case is a customer loyalty program. Every time a customer makes a purchase, an "),t("Term",{attrs:{term:"event"}}),e._v(" is generated into Apache Kafka for downstream systems to process. A loyalty service Kafka consumer receives the "),t("Term",{attrs:{term:"event"}}),e._v(" and "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v(" a customer "),t("Term",{attrs:{term:"workflow"}}),e._v(" about the purchase using the Cadence "),t("code",[e._v("signalWorkflowExecution")]),e._v(" API. The "),t("Term",{attrs:{term:"workflow"}}),e._v(" accumulates the count of the purchases. If a specified threshold is achieved, the "),t("Term",{attrs:{term:"workflow"}}),e._v(" executes an "),t("Term",{attrs:{term:"activity"}}),e._v(" that notifies some external service that the customer has reached the next level of the loyalty program. The "),t("Term",{attrs:{term:"workflow"}}),e._v(" also executes "),t("Term",{attrs:{term:"activity",show:"activities"}}),e._v(" to periodically message the customer about their current status.")],1),e._v(" "),
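A sketch of the loyalty scenario in the Go client; the IDs, the signal name, and the payload type are illustrative, and the Kafka consumer wiring and client construction are omitted:

```go
import (
	"context"

	"go.uber.org/cadence/client"
	"go.uber.org/cadence/workflow"
)

// Purchase is a hypothetical payload carried by each purchase signal.
type Purchase struct {
	AmountCents int64
}

// onPurchaseEvent runs in the Kafka consumer and forwards the purchase event
// to the customer's workflow as a signal.
func onPurchaseEvent(ctx context.Context, cadenceClient client.Client, customerID string, p Purchase) error {
	return cadenceClient.SignalWorkflow(ctx, "loyalty_"+customerID, "", "purchase", p)
}

// LoyaltyWorkflow accumulates purchases delivered as signals until the
// threshold is reached.
func LoyaltyWorkflow(ctx workflow.Context, threshold int) error {
	ch := workflow.GetSignalChannel(ctx, "purchase")
	count := 0
	for count < threshold {
		var p Purchase
		ch.Receive(ctx, &p) // blocks until the next purchase signal arrives
		count++
	}
	// Threshold reached: notify an external service via an activity (omitted).
	return nil
}
```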
t("h2",{attrs:{id:"human-tasks"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#human-tasks"}},[e._v("#")]),e._v(" Human Tasks")]),e._v(" "),t("p",[e._v("A lot of business processes involve human participants. The standard Cadence pattern for implementing an external interaction is to execute an "),t("Term",{attrs:{term:"activity"}}),e._v(" that creates a human "),t("Term",{attrs:{term:"task"}}),e._v(" in an external system. It can be an email with a form, or a record in some external database, or a mobile app notification. When a user changes the status of the "),t("Term",{attrs:{term:"task"}}),e._v(", a "),t("Term",{attrs:{term:"signal"}}),e._v(" is sent to the corresponding "),t("Term",{attrs:{term:"workflow"}}),e._v(". For example, when the form is submitted, or a mobile app notification is acknowledged. Some "),t("Term",{attrs:{term:"task",show:"tasks"}}),e._v(" have multiple possible actions like claim, return, complete, reject. So multiple "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v(" can be sent in relation to it.")],1),e._v(" "),t("h2",{attrs:{id:"process-execution-alteration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#process-execution-alteration"}},[e._v("#")]),e._v(" Process Execution Alteration")]),e._v(" "),t("p",[e._v("Some business processes should change their behavior if some external "),t("Term",{attrs:{term:"event"}}),e._v(" has happened. For example, while executing an order shipment "),t("Term",{attrs:{term:"workflow"}}),e._v(", any change in item quantity could be delivered in the form of a "),t("Term",{attrs:{term:"signal"}}),e._v(".")],1),e._v(" "),t("p",[e._v("Another example is a service deployment "),t("Term",{attrs:{term:"workflow"}}),e._v(". While rolling out a new software version to a Kubernetes cluster, some problem is identified. A "),t("Term",{attrs:{term:"signal"}}),e._v(" can be used to ask the "),t("Term",{attrs:{term:"workflow"}}),e._v(" to pause while the problem is investigated. Then either a continue or a rollback "),t("Term",{attrs:{term:"signal"}}),e._v(" can be used to execute the appropriate action.")],1),e._v(" "),t("h2",{attrs:{id:"synchronization"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#synchronization"}},[e._v("#")]),e._v(" Synchronization")]),e._v(" "),t("p",[e._v("Cadence "),t("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" are strongly consistent, so they can be used as a synchronization point for executing actions. For example, there is a requirement that all messages for a single user are processed sequentially, but the underlying messaging infrastructure can deliver them in parallel. The Cadence solution would be to have a "),t("Term",{attrs:{term:"workflow"}}),e._v(" per user and "),t("Term",{attrs:{term:"signal"}}),e._v(" it when an "),t("Term",{attrs:{term:"event"}}),e._v(" is received. Then the "),t("Term",{attrs:{term:"workflow"}}),e._v(" would buffer all "),t("Term",{attrs:{term:"signal",show:"signals"}}),e._v(" in an internal data structure and then call an "),t("Term",{attrs:{term:"activity"}}),e._v(" for every "),t("Term",{attrs:{term:"signal"}}),e._v(" received. See the following "),t("a",{attrs:{href:"https://stackoverflow.com/a/56615120/1664318",target:"_blank",rel:"noopener noreferrer"}},[e._v("Stack Overflow answer"),t("OutboundLink")],1),e._v(" for an example.")],1)])}),[],!1,null,null,null);t.default=s.exports}}]); \ No newline at end of file diff --git a/assets/js/46.1d2958cf.js b/assets/js/46.ee24faa0.js similarity index 99% rename from assets/js/46.1d2958cf.js rename to assets/js/46.ee24faa0.js index a02bf6e6d..af33fe98e 100644 --- a/assets/js/46.1d2958cf.js +++ b/assets/js/46.ee24faa0.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[46],{388:function(t,s,e){"use strict";e.r(s);var n=e(4),a=Object(n.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[s("p",[t._v("It is conceivable that developers constantly update their Cadence workflow code based upon new business use cases and needs. However,\nthe definition of a Cadence workflow must be deterministic because behind the scenes cadence uses event sourcing to construct\nthe workflow state by replaying the historical events stored for this specific workflow. Introducing components that are not compatible\nwith an existing running workflow will yield to non-deterministic errors and sometimes developers find it tricky to debug. 
Consider the\nfollowing workflow that executes two activities.")]),t._v(" "),s("div",{staticClass:"language-go extra-class"},[s("pre",{pre:!0,attrs:{class:"language-go"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("SampleWorkflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" data "),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n ao "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("ActivityOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n ScheduleToStartTimeout"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n StartToCloseTimeout"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n ctx "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("WithActivityOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" ao"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" result1 "),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n err "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" ActivityA"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" data"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("result1"),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('""')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" result2 "),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n err "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" ActivityB"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" result1"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("result2"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" result2"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n")])])]),s("p",[t._v("In this example, the workflow will execute ActivityA and Activity B in sequence. These activities may have other logics in background, such as polling long running operations or manipulate database reads or writes. Now if the developer replaces ActivityA with another activity ActivityC, a non-deterministic error could happen for an existing workflow. It is because the workflow expects results from ActivityA but since the definition of the workflow has been changed to use results from ActivityC, the workflow will fail due to failure of identifying history data of ActivityA. Such issues can be detected by introducing replayers and shadowers to the workflow unit tests.")]),t._v(" "),s("p",[t._v("Cadence workflow replayer is a testing component for replaying existing workflow histories against a workflow definition. You may think of replayer as a mock which will rerun your workflow with exactly the same history as your real workflow. The replaying logic is the same as the one used for processing workflow tasks. If it detects any incompatible changes, the replay test will fail.\nWorkflow Replayer works well when verifying the compatibility against a small number of workflow histories. If there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. 
Directly fetching histories from the cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.")]),t._v(" "),s("p",[t._v("Workflow Shadower is built on top of Workflow Replayer to address this problem. The basic idea of shadowing is: scan workflows based on the filters you defined, fetch history for each workflow in the scan result from Cadence server and run the replay test. It can be run either as a test to serve local development purposes or as a workflow in your worker to continuously replay production workflows.")]),t._v(" "),s("p",[t._v("You may find detailed instructions on how to use replayers and shadowers on "),s("a",{attrs:{href:"https://cadenceworkflow.io/docs/go-client/workflow-replay-shadowing/",target:"_blank",rel:"noopener noreferrer"}},[t._v("our website"),s("OutboundLink")],1),t._v(". We will introduce versioning in the next coming blogs.")])])}),[],!1,null,null,null);s.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[46],{389:function(t,s,e){"use strict";e.r(s);var n=e(4),a=Object(n.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[s("p",[t._v("It is conceivable that developers constantly update their Cadence workflow code based upon new business use cases and needs. However,\nthe definition of a Cadence workflow must be deterministic because behind the scenes Cadence uses event sourcing to construct\nthe workflow state by replaying the historical events stored for this specific workflow. Introducing components that are not compatible\nwith an existing running workflow will lead to non-deterministic errors that developers sometimes find tricky to debug. 
Consider the\nfollowing workflow that executes two activities.")]),t._v(" "),s("div",{staticClass:"language-go extra-class"},[s("pre",{pre:!0,attrs:{class:"language-go"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("SampleWorkflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" data "),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n ao "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("ActivityOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n ScheduleToStartTimeout"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n StartToCloseTimeout"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n ctx "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("WithActivityOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" ao"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" result1 "),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n err "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" ActivityA"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" data"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("result1"),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('""')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" result2 "),s("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n err "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" ActivityB"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" result1"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("result2"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" result2"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n")])])]),s("p",[t._v("In this example, the workflow will execute ActivityA and Activity B in sequence. These activities may have other logics in background, such as polling long running operations or manipulate database reads or writes. Now if the developer replaces ActivityA with another activity ActivityC, a non-deterministic error could happen for an existing workflow. It is because the workflow expects results from ActivityA but since the definition of the workflow has been changed to use results from ActivityC, the workflow will fail due to failure of identifying history data of ActivityA. Such issues can be detected by introducing replayers and shadowers to the workflow unit tests.")]),t._v(" "),s("p",[t._v("Cadence workflow replayer is a testing component for replaying existing workflow histories against a workflow definition. You may think of replayer as a mock which will rerun your workflow with exactly the same history as your real workflow. The replaying logic is the same as the one used for processing workflow tasks. If it detects any incompatible changes, the replay test will fail.\nWorkflow Replayer works well when verifying the compatibility against a small number of workflow histories. If there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. 
Directly fetching histories from the Cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.")]),t._v(" "),s("p",[t._v("Workflow Shadower is built on top of Workflow Replayer to address this problem. The basic idea of shadowing is: scan workflows based on the filters you defined, fetch the history for each workflow in the scan result from the Cadence server, and run the replay test. It can be run either as a test, to serve local development purposes, or as a workflow in your worker, to continuously replay production workflows.")]),t._v(" "),s("p",[t._v("You may find detailed instructions on how to use replayers and shadowers on "),s("a",{attrs:{href:"https://cadenceworkflow.io/docs/go-client/workflow-replay-shadowing/",target:"_blank",rel:"noopener noreferrer"}},[t._v("our website"),s("OutboundLink")],1),t._v(". We will introduce versioning in upcoming blog posts.")])])}),[],!1,null,null,null);s.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/47.b8724474.js b/assets/js/47.90bee20b.js similarity index 98% rename from assets/js/47.b8724474.js rename to assets/js/47.90bee20b.js index eeb4959e1..cfeb4d152 100644 --- a/assets/js/47.b8724474.js +++ b/assets/js/47.90bee20b.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[47],{352:function(e,t,r){"use strict";r.r(t);var a=r(0),o=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"synchronous-query"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#synchronous-query"}},[e._v("#")]),e._v(" Synchronous query")]),e._v(" 
Both above limitations rule out ability to invoke "),t("Term",{attrs:{term:"activity",show:"activities"}}),e._v(" from the "),t("Term",{attrs:{term:"query"}}),e._v(" handlers.")],1),e._v(" "),t("p",[e._v("Cadence team is currently working on implementing "),t("em",[e._v("update")]),e._v(" feature that would be similar to "),t("Term",{attrs:{term:"query"}}),e._v(" in the way it is invoked, but would support "),t("Term",{attrs:{term:"workflow"}}),e._v(" state mutation and "),t("Term",{attrs:{term:"local_activity"}}),e._v(" invocations. From user's point of view, "),t("em",[e._v("update")]),e._v(" is similar to signal + strong consistent query, but implemented in a much less expensive way in Cadence.")],1),e._v(" "),t("h2",{attrs:{id:"stack-trace-query"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#stack-trace-query"}},[e._v("#")]),e._v(" Stack Trace Query")]),e._v(" "),t("p",[e._v("The Cadence client libraries expose some predefined "),t("Term",{attrs:{term:"query",show:"queries"}}),e._v(" out of the box. Currently the only supported built-in "),t("Term",{attrs:{term:"query"}}),e._v(" is "),t("em",[e._v("stack_trace")]),e._v(". This "),t("Term",{attrs:{term:"query"}}),e._v(" returns stacks of all "),t("Term",{attrs:{term:"workflow"}}),e._v(" owned threads. This is a great way to troubleshoot any "),t("Term",{attrs:{term:"workflow"}}),e._v(" in production.")],1),e._v(" "),t("p",[e._v("Example")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v('$cadence --do samples-domain wf query -w -qt __stack_trace\n"coroutine 1 [blocked on selector-1.Select]:\\nmain.sampleSignalCounterWorkflow(0x1a99ae8, 0xc00009d700, 0x0, 0x0, 0x0)\\n\\t/Users/qlong/indeed/cadence-samples/cmd/samples/recipes/signalcounter/signal_counter_workflow.go:38 +0x1be\\nreflect.Value.call(0x1852ac0, 0x19cb608, 0x13, 0x1979180, 0x4, 0xc00045aa80, 0x2, 0x2, 0x2, 0x18, ...)\\n\\t/usr/local/Cellar/go/1.16.3/libexec/src/reflect/value.go:476 +0x8e7\\nreflect.Value.Call(0x1852ac0, 0x19cb608, 0x13, 0xc00045aa80, 0x2, 0x2, 0x1, 0x2, 0xc00045a720)\\n\\t/usr/local/Cellar/go/1.16.3/libexec/src/reflect/value.go:337 +0xb9\\ngo.uber.org/cadence/internal.(*workflowEnvironmentInterceptor).ExecuteWorkflow(0xc00045a720, 0x1a99ae8, 0xc00009d700, 0xc0001ca820, 0x20, 0xc00007fad0, 0x1, 0x1, 0x1, 0x1, ...)\\n\\t/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/workflow.go:372 +0x2cb\\ngo.uber.org/cadence/internal.(*workflowExecutor).Execute(0xc000098d80, 0x1a99ae8, 0xc00009d700, 0xc0001b127e, 0x2, 0x2, 0xc00044cb01, 0xc000070101, 0xc000073738, 0x1729f25, ...)\\n\\t/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_worker.go:699 +0x28d\\ngo.uber.org/cadence/internal.(*syncWorkflowDefinition).Execute.func1(0x1a99ce0, 0xc00045a9f0)\\n\\t/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_workflow.go:466 +0x106"\n')])])])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[47],{353:function(e,t,r){"use strict";r.r(t);var a=r(0),o=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"synchronous-query"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#synchronous-query"}},[e._v("#")]),e._v(" Synchronous query")]),e._v(" 
"),t("p",[t("Term",{attrs:{term:"workflow",show:"Workflow"}}),e._v(" code is stateful with the Cadence framework preserving it over various software and hardware failures. The state is constantly mutated during "),t("Term",{attrs:{term:"workflow_execution"}}),e._v(". To expose this internal state to the external world Cadence provides a synchronous "),t("Term",{attrs:{term:"query"}}),e._v(" feature. From the "),t("Term",{attrs:{term:"workflow"}}),e._v(" implementer point of view the "),t("Term",{attrs:{term:"query"}}),e._v(" is exposed as a synchronous callback that is invoked by external entities. Multiple such callbacks can be provided per "),t("Term",{attrs:{term:"workflow"}}),e._v(" type exposing different information to different external systems.")],1),e._v(" "),t("p",[e._v("To execute a "),t("Term",{attrs:{term:"query"}}),e._v(" an external client calls a synchronous Cadence API providing "),t("em",[t("Term",{attrs:{term:"domain"}}),e._v(", workflowID, "),t("Term",{attrs:{term:"query"}}),e._v(" name")],1),e._v(" and optional "),t("em",[t("Term",{attrs:{term:"query"}}),e._v(" arguments")],1),e._v(".")],1),e._v(" "),t("p",[t("Term",{attrs:{term:"query",show:"Query"}}),e._v(" callbacks must be read-only not mutating the "),t("Term",{attrs:{term:"workflow"}}),e._v(" state in any way. The other limitation is that the "),t("Term",{attrs:{term:"query"}}),e._v(" callback cannot contain any blocking code. Both above limitations rule out ability to invoke "),t("Term",{attrs:{term:"activity",show:"activities"}}),e._v(" from the "),t("Term",{attrs:{term:"query"}}),e._v(" handlers.")],1),e._v(" "),t("p",[e._v("Cadence team is currently working on implementing "),t("em",[e._v("update")]),e._v(" feature that would be similar to "),t("Term",{attrs:{term:"query"}}),e._v(" in the way it is invoked, but would support "),t("Term",{attrs:{term:"workflow"}}),e._v(" state mutation and "),t("Term",{attrs:{term:"local_activity"}}),e._v(" invocations. From user's point of view, "),t("em",[e._v("update")]),e._v(" is similar to signal + strong consistent query, but implemented in a much less expensive way in Cadence.")],1),e._v(" "),t("h2",{attrs:{id:"stack-trace-query"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#stack-trace-query"}},[e._v("#")]),e._v(" Stack Trace Query")]),e._v(" "),t("p",[e._v("The Cadence client libraries expose some predefined "),t("Term",{attrs:{term:"query",show:"queries"}}),e._v(" out of the box. Currently the only supported built-in "),t("Term",{attrs:{term:"query"}}),e._v(" is "),t("em",[e._v("stack_trace")]),e._v(". This "),t("Term",{attrs:{term:"query"}}),e._v(" returns stacks of all "),t("Term",{attrs:{term:"workflow"}}),e._v(" owned threads. 
This is a great way to troubleshoot any "),t("Term",{attrs:{term:"workflow"}}),e._v(" in production.")],1),e._v(" "),t("p",[e._v("Example")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v('$cadence --do samples-domain wf query -w -qt __stack_trace\n"coroutine 1 [blocked on selector-1.Select]:\\nmain.sampleSignalCounterWorkflow(0x1a99ae8, 0xc00009d700, 0x0, 0x0, 0x0)\\n\\t/Users/qlong/indeed/cadence-samples/cmd/samples/recipes/signalcounter/signal_counter_workflow.go:38 +0x1be\\nreflect.Value.call(0x1852ac0, 0x19cb608, 0x13, 0x1979180, 0x4, 0xc00045aa80, 0x2, 0x2, 0x2, 0x18, ...)\\n\\t/usr/local/Cellar/go/1.16.3/libexec/src/reflect/value.go:476 +0x8e7\\nreflect.Value.Call(0x1852ac0, 0x19cb608, 0x13, 0xc00045aa80, 0x2, 0x2, 0x1, 0x2, 0xc00045a720)\\n\\t/usr/local/Cellar/go/1.16.3/libexec/src/reflect/value.go:337 +0xb9\\ngo.uber.org/cadence/internal.(*workflowEnvironmentInterceptor).ExecuteWorkflow(0xc00045a720, 0x1a99ae8, 0xc00009d700, 0xc0001ca820, 0x20, 0xc00007fad0, 0x1, 0x1, 0x1, 0x1, ...)\\n\\t/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/workflow.go:372 +0x2cb\\ngo.uber.org/cadence/internal.(*workflowExecutor).Execute(0xc000098d80, 0x1a99ae8, 0xc00009d700, 0xc0001b127e, 0x2, 0x2, 0xc00044cb01, 0xc000070101, 0xc000073738, 0x1729f25, ...)\\n\\t/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_worker.go:699 +0x28d\\ngo.uber.org/cadence/internal.(*syncWorkflowDefinition).Execute.func1(0x1a99ce0, 0xc00045a9f0)\\n\\t/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_workflow.go:466 +0x106"\n')])])])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file diff --git a/assets/js/47.71652c82.js b/assets/js/47.c2bead0f.js similarity index 99% rename from assets/js/47.71652c82.js rename to assets/js/47.c2bead0f.js index d78a77808..06ffe10fb 100644 --- a/assets/js/47.71652c82.js +++ b/assets/js/47.c2bead0f.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[47],{391:function(e,t,n){"use strict";n.r(t);var a=n(4),r=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"more-cadence-how-to-s"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#more-cadence-how-to-s"}},[e._v("#")]),e._v(" More Cadence How To's")]),e._v(" "),t("p",[e._v("You might have noticed that we have had a few more contributions to our blog from "),t("a",{attrs:{href:"https://www.linkedin.com/in/chrisqin0610",target:"_blank",rel:"noopener noreferrer"}},[e._v("Chris Qin"),t("OutboundLink")],1),e._v(". Chris has been busy sharing insights, and tips on a few important Cadence topics. 
The objective is to help the community with any potential problems.")]),e._v(" "),t("p",[e._v("Here are the latest topics:")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://cadenceworkflow.io/blog/2023/07/10/cadence-bad-practices-part-1/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Bad Practices and Anti-Patterns with Cadence - Part 1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://cadenceworkflow.io/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Non-Determistic Errors, Replayers and Shadowers"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("Even if you have not encountered these use cases - it is good to be prepared and have a solution ready.Please take a look and let us have your feedback.")]),e._v(" "),t("p",[e._v("Chris is also going to take a look at the "),t("a",{attrs:{href:"https://cadenceworkflow.io/docs/java-client/client-overview/#samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Samples"),t("OutboundLink")],1),e._v(" to make sure they are all working and if not - he's going to re-write them so that they do!")]),e._v(" "),t("p",[e._v("Thanks very much Chris for all the work you are doing to help improve the project!")]),e._v(" "),t("h2",{attrs:{id:"more-iwf-examaples"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#more-iwf-examaples"}},[e._v("#")]),e._v(" More iWF Examaples")]),e._v(" "),t("p",[e._v("Community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/prclqz/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Quanzheng Long"),t("OutboundLink")],1),e._v(" has also been busy writing this month. In previous blogs Long has told us about "),t("a",{attrs:{href:"https://github.com/indeedeng/iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF"),t("OutboundLink")],1),e._v(" that is a layer implemented over of Cadence.")]),e._v(" "),t("p",[e._v("During August Long has published a couple of articles on using the 'ContinueAsNew' functionality in iWF. Links to Part 1 and Part are below:")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/guide-to-continueasnew-in-cadence-temporal-workflow-using-iwf-as-an-example-part-2-cedabd732bec",target:"_blank",rel:"noopener noreferrer"}},[e._v("Guide to ContinueAsNew in Cadence/Temporal Workflow Using iWF as an example - Part 1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/guide-to-continueasnew-in-cadence-temporal-workflow-using-iwf-as-an-example-part-1-c24ae5266f07",target:"_blank",rel:"noopener noreferrer"}},[e._v("Guide to ContinueAsNew in Cadence/Temporal Workflow Using iWF as an example - Part 2"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("Please take a look and if you've enjoyed reading them then let Long and us know!")]),e._v(" "),t("h2",{attrs:{id:"cadence-at-the-helm"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-at-the-helm"}},[e._v("#")]),e._v(" Cadence At the Helm!")]),e._v(" "),t("p",[e._v("Last month we mentioned the Cadence Helm charts and all the previous work that had been done by "),t("a",{attrs:{href:"https://www.linkedin.com/in/sagikazarmark/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Mark Sagi-Kazar"),t("OutboundLink")],1),e._v(". 
We were looking to ensure they are maintained.")]),e._v(" "),t("p",[e._v("So a special thanks goes out this month to "),t("a",{attrs:{href:"ttps://github.com/edmondop"}},[e._v("Edmondo")]),e._v(" for contributing some work on the "),t("a",{attrs:{href:"https://github.com/edmondop/cadence-helm-chart/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Helm Chart"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("h2",{attrs:{id:"community-support"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-support"}},[e._v("#")]),e._v(" Community Support!")]),e._v(" "),t("p",[e._v("Our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel continues to be the main place where people are asking for help and support with Cadence. During August (which is supposed to be holiday season), we still had 9 questions raised around various topics.")]),e._v(" "),t("p",[e._v("Huge thanks to the following community members who took time to respond and help others: David, Edmondo, Chris Qin, Rony Rahman and Ben Slater.")]),e._v(" "),t("p",[e._v("It's good to see that we are continuing to support each other - doing exactly what communities do!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers.\nPlease take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/guide-to-continueasnew-in-cadence-temporal-workflow-using-iwf-as-an-example-part-2-cedabd732bec",target:"_blank",rel:"noopener noreferrer"}},[e._v("Guide to ContinueAsNew in Cadence/Temporal Workflow Using iWF as an example - Part 1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/guide-to-continueasnew-in-cadence-temporal-workflow-using-iwf-as-an-example-part-1-c24ae5266f07",target:"_blank",rel:"noopener noreferrer"}},[e._v("Guide to ContinueAsNew in Cadence/Temporal Workflow Using iWF as an example - Part 2"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/aws-privatelink-for-cadence-on-instaclustr-by-netapp/",target:"_blank",rel:"noopener noreferrer"}},[e._v("AWS PrivateLink Connectivity is now Available with Instaclustr for Cadence"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://netapp.zoom.us/webinar/register/WN_Uh9Y6ruiQSS5EiylNlsMug#/registration",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Introducing the Cadence Workflow HTTP API - 21st September 2023 "),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://netapp.zoom.us/webinar/register/WN_Hv9lO9QtSqyPPWkSAIRj5g#/registration",target:"_blank",rel:"noopener noreferrer"}},[e._v("On Demand Webinar: Microservices - A Modern Orchestration Approach with Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/events/spinning-your-drones-with-cadence-and-apache-kafka/",target:"_blank",rel:"noopener noreferrer"}},[e._v("On Demand Webinar: Spinning Your Drones with Cadence and Apache 
Kafka"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[47],{390:function(e,t,n){"use strict";n.r(t);var a=n(4),r=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"more-cadence-how-to-s"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#more-cadence-how-to-s"}},[e._v("#")]),e._v(" More Cadence How To's")]),e._v(" "),t("p",[e._v("You might have noticed that we have had a few more contributions to our blog from "),t("a",{attrs:{href:"https://www.linkedin.com/in/chrisqin0610",target:"_blank",rel:"noopener noreferrer"}},[e._v("Chris Qin"),t("OutboundLink")],1),e._v(". Chris has been busy sharing insights, and tips on a few important Cadence topics. The objective is to help the community with any potential problems.")]),e._v(" "),t("p",[e._v("Here are the latest topics:")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://cadenceworkflow.io/blog/2023/07/10/cadence-bad-practices-part-1/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Bad Practices and Anti-Patterns with Cadence - Part 1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://cadenceworkflow.io/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Non-Determistic Errors, Replayers and Shadowers"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("Even if you have not encountered these use cases - it is good to be prepared and have a solution ready.Please take a look and let us have your feedback.")]),e._v(" "),t("p",[e._v("Chris is also going to take a look at the "),t("a",{attrs:{href:"https://cadenceworkflow.io/docs/java-client/client-overview/#samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Samples"),t("OutboundLink")],1),e._v(" to make sure they are all working and if not - he's going to re-write them so that they do!")]),e._v(" "),t("p",[e._v("Thanks very much Chris for all the work you are doing to help improve the project!")]),e._v(" "),t("h2",{attrs:{id:"more-iwf-examaples"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#more-iwf-examaples"}},[e._v("#")]),e._v(" More iWF Examaples")]),e._v(" "),t("p",[e._v("Community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/prclqz/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Quanzheng Long"),t("OutboundLink")],1),e._v(" has also been busy writing this month. 
In previous blogs, Long has told us about "),t("a",{attrs:{href:"https://github.com/indeedeng/iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF"),t("OutboundLink")],1),e._v(", a layer implemented on top of Cadence.")]),e._v(" "),t("p",[e._v("During August Long has published a couple of articles on using the 'ContinueAsNew' functionality in iWF. Links to Part 1 and Part 2 are below:")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/guide-to-continueasnew-in-cadence-temporal-workflow-using-iwf-as-an-example-part-2-cedabd732bec",target:"_blank",rel:"noopener noreferrer"}},[e._v("Guide to ContinueAsNew in Cadence/Temporal Workflow Using iWF as an example - Part 1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/guide-to-continueasnew-in-cadence-temporal-workflow-using-iwf-as-an-example-part-1-c24ae5266f07",target:"_blank",rel:"noopener noreferrer"}},[e._v("Guide to ContinueAsNew in Cadence/Temporal Workflow Using iWF as an example - Part 2"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("Please take a look and if you've enjoyed reading them then let Long and us know!")]),e._v(" "),t("h2",{attrs:{id:"cadence-at-the-helm"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-at-the-helm"}},[e._v("#")]),e._v(" Cadence At the Helm!")]),e._v(" "),t("p",[e._v("Last month we mentioned the Cadence Helm charts and all the previous work that had been done by "),t("a",{attrs:{href:"https://www.linkedin.com/in/sagikazarmark/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Mark Sagi-Kazar"),t("OutboundLink")],1),e._v(". We were looking to ensure they are maintained.")]),e._v(" "),t("p",[e._v("So a special thanks goes out this month to "),t("a",{attrs:{href:"https://github.com/edmondop"}},[e._v("Edmondo")]),e._v(" for contributing some work on the "),t("a",{attrs:{href:"https://github.com/edmondop/cadence-helm-chart/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Helm Chart"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("h2",{attrs:{id:"community-support"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#community-support"}},[e._v("#")]),e._v(" Community Support!")]),e._v(" "),t("p",[e._v("Our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel continues to be the main place where people are asking for help and support with Cadence. 
During August (which is supposed to be holiday season), we still had 9 questions raised around various topics.")]),e._v(" "),t("p",[e._v("Huge thanks to the following community members who took time to respond and help others: David, Edmondo, Chris Qin, Rony Rahman and Ben Slater.")]),e._v(" "),t("p",[e._v("It's good to see that we are continuing to support each other - doing exactly what communities do!")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers.\nPlease take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/guide-to-continueasnew-in-cadence-temporal-workflow-using-iwf-as-an-example-part-2-cedabd732bec",target:"_blank",rel:"noopener noreferrer"}},[e._v("Guide to ContinueAsNew in Cadence/Temporal Workflow Using iWF as an example - Part 1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/guide-to-continueasnew-in-cadence-temporal-workflow-using-iwf-as-an-example-part-1-c24ae5266f07",target:"_blank",rel:"noopener noreferrer"}},[e._v("Guide to ContinueAsNew in Cadence/Temporal Workflow Using iWF as an example - Part 2"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/aws-privatelink-for-cadence-on-instaclustr-by-netapp/",target:"_blank",rel:"noopener noreferrer"}},[e._v("AWS PrivateLink Connectivity is now Available with Instaclustr for Cadence"),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://netapp.zoom.us/webinar/register/WN_Uh9Y6ruiQSS5EiylNlsMug#/registration",target:"_blank",rel:"noopener noreferrer"}},[e._v("Webinar: Introducing the Cadence Workflow HTTP API - 21st September 2023 "),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://netapp.zoom.us/webinar/register/WN_Hv9lO9QtSqyPPWkSAIRj5g#/registration",target:"_blank",rel:"noopener noreferrer"}},[e._v("On Demand Webinar: Microservices - A Modern Orchestration Approach with Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/events/spinning-your-drones-with-cadence-and-apache-kafka/",target:"_blank",rel:"noopener noreferrer"}},[e._v("On Demand Webinar: Spinning Your Drones with Cadence and Apache Kafka"),t("OutboundLink")],1)])])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/48.908ad06e.js b/assets/js/48.3888f184.js similarity index 99% rename from 
assets/js/48.908ad06e.js rename to assets/js/48.3888f184.js index 46cc26c5c..f853acf90 100644 --- a/assets/js/48.908ad06e.js +++ b/assets/js/48.3888f184.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[48],{389:function(e,t,a){"use strict";a.r(t);var o=a(4),n=Object(o.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("It's been a couple of months since our last update so we have a lot of updates to share with you.")]),e._v(" "),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"proposal-for-cadence-native-authentication"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#proposal-for-cadence-native-authentication"}},[e._v("#")]),e._v(" Proposal for Cadence Native Authentication")]),e._v(" "),t("p",[e._v("Community member "),t("a",{attrs:{href:"https://lt.linkedin.com/in/mantassidlauskas",target:"_blank",rel:"noopener noreferrer"}},[e._v("Mantas Sidlauskas"),t("OutboundLink")],1),e._v(" has drafted a proposal around Cadence native authentication and is asking for community feedback. If you are interested in reviewing the current proposal and providing comments or feedback then please find the proposal details at the link below:")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://docs.google.com/document/d/13GxRBZfQkLyhDCrpFaZmRcw7DJJG-zdy0_mPXy3CcWw/edit#heading=h.c8u99ansg7ma",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Native Authentication Proposal"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("This is a great example of how we can focus on collaborating together to find a collective solution. A big thank you to Mantas for initiating this work and we hope to see the results of the community input soon!")]),e._v(" "),t("h2",{attrs:{id:"iwf-deep-dive-and-more"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#iwf-deep-dive-and-more"}},[e._v("#")]),e._v(" iWF Deep Dive and More!")]),e._v(" "),t("p",[e._v("During the last few months community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/prclqz/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Quanzheng Long"),t("OutboundLink")],1),e._v(" has continued to share his thoughts about "),t("a",{attrs:{href:"https://github.com/indeedeng/iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF"),t("OutboundLink")],1),e._v(", a layer implemented on top of Cadence. Since our last update iWF now has a"),t("a",{attrs:{href:"https://github.com/indeedeng/iwf-python-sdk",target:"_blank",rel:"noopener noreferrer"}},[e._v("Python SDK"),t("OutboundLink")],1),e._v(". Long has been busy writing articles to share iWF tips and tricks as well as some general ideas about workflows and processes. 
Links to Long's articles can be found below:")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/iwf-deep-dive-workflowstate-durable-timer-1-0bb89e6d6fd4",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF Deep Dive: workflowState+Durable Timer#1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/gotchas-about-signalwithstart-in-cadence-temporal-c3783fe1cc2e",target:"_blank",rel:"noopener noreferrer"}},[e._v("Gotchas About SignalWithStart in Cadence/Temporal"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/workflow-could-be-process-in-workflowascode-frameworks-63dcb632c248",target:"_blank",rel:"noopener noreferrer"}},[e._v('"Workflow" could be "Process" in WorkflowAsCode frameworks'),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"new-go-samples-for-cadence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#new-go-samples-for-cadence"}},[e._v("#")]),e._v(" New Go Samples for Cadence")]),e._v(" "),t("p",[e._v("The Cadence core team is deprecating the old samples for Go and replacing them with new version 2 (V2) samples. They have received a lot of feedback from the community that people are having trouble with old samples, so are in the process of publishing a completely new set of samples for Go.")]),e._v(" "),t("p",[e._v("Here are some major changes to the new samples:")]),e._v(" "),t("ul",[t("li",[e._v("Easy to use the read - the new samples will be completely based on CLIs instead of running a binary. (This is consistent with current Cadence use experience)")]),e._v(" "),t("li",[e._v("Simple and transparent worker configuration - the old samples did not provide user a clear demonstration about the relationship between the worker and workflow themselves")]),e._v(" "),t("li",[e._v("The new samples will help you bootstrap your Cadence workflow faster and easier.")]),e._v(" "),t("li",[e._v('More vivid and self-explanatory - instead of the traditional "HelloWorld" type of samples, we want to make it more interesting and engaging. (Each sample will try to simulate a real-life use case to make them more understandable and fun to learn!)')])]),e._v(" "),t("p",[e._v("We hope the community will enjoy these changes. If you have any questions or have new an idea for a new sample then please reach out to "),t("a",{attrs:{href:"https://www.linkedin.com/in/chrisqin0610",target:"_blank",rel:"noopener noreferrer"}},[e._v("Chris Qin"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("The new Go samples can be found at:")]),e._v(" "),t("ul",[t("li",[e._v("https://github.com/uber-common/cadence-samples/tree/master/new_samples.")])]),e._v(" "),t("p",[e._v("Note that the old samples will be removed once the new samples are fully refreshed.")]),e._v(" "),t("h2",{attrs:{id:"cadence-retrospective"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-retrospective"}},[e._v("#")]),e._v(" Cadence Retrospective")]),e._v(" "),t("p",[e._v("We are nearly at the end of another year and yes it has gone so fast! Over this year Cadence and the community have evolved and grown. 
This is a good time to reflect about all the things that have happened in the project over the year and think about a possible roadmap for the future.")]),e._v(" "),t("p",[e._v("If you have any feedback, or comments about the project or ideas about what features you'd like to see in the roadmap then please feel free to begin a discussion in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers.\nPlease take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/how-to-throttle-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("How to Throttle Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/iwf-deep-dive-workflowstate-durable-timer-1-0bb89e6d6fd4",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF Deep Dive: workflowState+Durable Timer#1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/gotchas-about-signalwithstart-in-cadence-temporal-c3783fe1cc2e",target:"_blank",rel:"noopener noreferrer"}},[e._v("Gotchas About SignalWithStart in Cadence/Temporal"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/workflow-could-be-process-in-workflowascode-frameworks-63dcb632c248",target:"_blank",rel:"noopener noreferrer"}},[e._v('"Workflow" could be "Process" in WorkflowAsCode frameworks'),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://netapp.zoom.us/webinar/register/WN_jT5fxSldRhuzV0NSllBd7g#/registration",target:"_blank",rel:"noopener noreferrer"}},[e._v("On Demand Webinar: Building With Cadence:Quantifiable Efficiency"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=n.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[48],{391:function(e,t,a){"use strict";a.r(t);var o=a(4),n=Object(o.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("p",[e._v("Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!")]),e._v(" "),t("p",[e._v("It's been a couple of months since our last update so we have a lot of updates to share with you.")]),e._v(" 
"),t("p",[e._v("Please see below for a roundup of the highlights:")]),e._v(" "),t("h2",{attrs:{id:"proposal-for-cadence-native-authentication"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#proposal-for-cadence-native-authentication"}},[e._v("#")]),e._v(" Proposal for Cadence Native Authentication")]),e._v(" "),t("p",[e._v("Community member "),t("a",{attrs:{href:"https://lt.linkedin.com/in/mantassidlauskas",target:"_blank",rel:"noopener noreferrer"}},[e._v("Mantas Sidlauskas"),t("OutboundLink")],1),e._v(" has drafted a proposal around Cadence native authentication and is asking for community feedback. If you are interested in reviewing the current proposal and providing comments or feedback then please find the proposal details at the link below:")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://docs.google.com/document/d/13GxRBZfQkLyhDCrpFaZmRcw7DJJG-zdy0_mPXy3CcWw/edit#heading=h.c8u99ansg7ma",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Native Authentication Proposal"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("This is a great example of how we can focus on collaborating together to find a collective solution. A big thank you to Mantas for initiating this work and we hope to see the results of the community input soon!")]),e._v(" "),t("h2",{attrs:{id:"iwf-deep-dive-and-more"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#iwf-deep-dive-and-more"}},[e._v("#")]),e._v(" iWF Deep Dive and More!")]),e._v(" "),t("p",[e._v("During the last few months community member "),t("a",{attrs:{href:"https://www.linkedin.com/in/prclqz/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Quanzheng Long"),t("OutboundLink")],1),e._v(" has continued to share his thoughts about "),t("a",{attrs:{href:"https://github.com/indeedeng/iwf",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF"),t("OutboundLink")],1),e._v(", a layer implemented on top of Cadence. Since our last update iWF now has a"),t("a",{attrs:{href:"https://github.com/indeedeng/iwf-python-sdk",target:"_blank",rel:"noopener noreferrer"}},[e._v("Python SDK"),t("OutboundLink")],1),e._v(". Long has been busy writing articles to share iWF tips and tricks as well as some general ideas about workflows and processes. Links to Long's articles can be found below:")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/iwf-deep-dive-workflowstate-durable-timer-1-0bb89e6d6fd4",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF Deep Dive: workflowState+Durable Timer#1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/gotchas-about-signalwithstart-in-cadence-temporal-c3783fe1cc2e",target:"_blank",rel:"noopener noreferrer"}},[e._v("Gotchas About SignalWithStart in Cadence/Temporal"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/workflow-could-be-process-in-workflowascode-frameworks-63dcb632c248",target:"_blank",rel:"noopener noreferrer"}},[e._v('"Workflow" could be "Process" in WorkflowAsCode frameworks'),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"new-go-samples-for-cadence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#new-go-samples-for-cadence"}},[e._v("#")]),e._v(" New Go Samples for Cadence")]),e._v(" "),t("p",[e._v("The Cadence core team is deprecating the old samples for Go and replacing them with new version 2 (V2) samples. 
They have received a lot of feedback from the community that people are having trouble with the old samples, so they are in the process of publishing a completely new set of samples for Go.")]),e._v(" "),t("p",[e._v("Here are some major changes to the new samples:")]),e._v(" "),t("ul",[t("li",[e._v("Easy to use and read - the new samples will be completely based on CLIs instead of running a binary. (This is consistent with the current Cadence user experience.)")]),e._v(" "),t("li",[e._v("Simple and transparent worker configuration - the old samples did not give users a clear demonstration of the relationship between the worker and the workflows themselves")]),e._v(" "),t("li",[e._v("The new samples will help you bootstrap your Cadence workflow faster and more easily.")]),e._v(" "),t("li",[e._v('More vivid and self-explanatory - instead of the traditional "HelloWorld" type of samples, we want to make them more interesting and engaging. (Each sample will try to simulate a real-life use case to make it more understandable and fun to learn!)')])]),e._v(" "),t("p",[e._v("We hope the community will enjoy these changes. If you have any questions or have an idea for a new sample then please reach out to "),t("a",{attrs:{href:"https://www.linkedin.com/in/chrisqin0610",target:"_blank",rel:"noopener noreferrer"}},[e._v("Chris Qin"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("The new Go samples can be found at:")]),e._v(" "),t("ul",[t("li",[e._v("https://github.com/uber-common/cadence-samples/tree/master/new_samples")])]),e._v(" "),t("p",[e._v("Note that the old samples will be removed once the new samples are fully refreshed.")]),e._v(" "),t("h2",{attrs:{id:"cadence-retrospective"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-retrospective"}},[e._v("#")]),e._v(" Cadence Retrospective")]),e._v(" "),t("p",[e._v("We are nearly at the end of another year and yes, it has gone so fast! Over this year Cadence and the community have evolved and grown. 
This is a good time to reflect about all the things that have happened in the project over the year and think about a possible roadmap for the future.")]),e._v(" "),t("p",[e._v("If you have any feedback, or comments about the project or ideas about what features you'd like to see in the roadmap then please feel free to begin a discussion in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")]),e._v(" "),t("h2",{attrs:{id:"cadence-in-the-news"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-in-the-news"}},[e._v("#")]),e._v(" Cadence in the News!")]),e._v(" "),t("p",[e._v("Below are a selection of Cadence related articles, blogs and whitepapers.\nPlease take a look and feel free to share via your own social media channels.")]),e._v(" "),t("ul",[t("li",[t("p",[t("a",{attrs:{href:"https://www.instaclustr.com/blog/how-to-throttle-cadence/",target:"_blank",rel:"noopener noreferrer"}},[e._v("How to Throttle Cadence"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/iwf-deep-dive-workflowstate-durable-timer-1-0bb89e6d6fd4",target:"_blank",rel:"noopener noreferrer"}},[e._v("iWF Deep Dive: workflowState+Durable Timer#1"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/gotchas-about-signalwithstart-in-cadence-temporal-c3783fe1cc2e",target:"_blank",rel:"noopener noreferrer"}},[e._v("Gotchas About SignalWithStart in Cadence/Temporal"),t("OutboundLink")],1)])]),e._v(" "),t("li",[t("p",[t("a",{attrs:{href:"https://medium.com/@qlong/workflow-could-be-process-in-workflowascode-frameworks-63dcb632c248",target:"_blank",rel:"noopener noreferrer"}},[e._v('"Workflow" could be "Process" in WorkflowAsCode frameworks'),t("OutboundLink")],1)])])]),e._v(" "),t("h2",{attrs:{id:"upcoming-events"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upcoming-events"}},[e._v("#")]),e._v(" Upcoming Events")]),e._v(" "),t("ul",[t("li",[t("a",{attrs:{href:"https://netapp.zoom.us/webinar/register/WN_jT5fxSldRhuzV0NSllBd7g#/registration",target:"_blank",rel:"noopener noreferrer"}},[e._v("On Demand Webinar: Building With Cadence:Quantifiable Efficiency"),t("OutboundLink")],1)])]),e._v(" "),t("p",[e._v("If you have any news or topics you'd like us to include in our next update then please join our "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" #community channel.")]),e._v(" "),t("p",[e._v("Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community "),t("a",{attrs:{href:"http://t.uber.com/cadence-slack",target:"_blank",rel:"noopener noreferrer"}},[e._v("Slack"),t("OutboundLink")],1),e._v(" channel.")])])}),[],!1,null,null,null);t.default=n.exports}}]); \ No newline at end of file diff --git a/assets/js/49.f52b9328.js b/assets/js/49.38856855.js similarity index 98% rename from assets/js/49.f52b9328.js rename to assets/js/49.38856855.js index ef3b32392..1ab7572f1 100644 --- a/assets/js/49.f52b9328.js +++ b/assets/js/49.38856855.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[49],{354:function(t,e,r){"use strict";r.r(e);var s=r(0),a=Object(s.a)({},(function(){var t=this,e=t._self._c;return 
e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"task-lists"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#task-lists"}},[t._v("#")]),t._v(" Task lists")]),t._v(" "),e("p",[t._v("When a "),e("Term",{attrs:{term:"workflow"}}),t._v(" invokes an "),e("Term",{attrs:{term:"activity"}}),t._v(", it sends the "),e("code",[t._v("ScheduleActivityTask")]),t._v(" "),e("Term",{attrs:{term:"decision"}}),t._v(" to the\nCadence service. As a result, the service updates the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state and dispatches\nan "),e("Term",{attrs:{term:"activity_task"}}),t._v(" to a "),e("Term",{attrs:{term:"worker"}}),t._v(" that implements the "),e("Term",{attrs:{term:"activity"}}),t._v(".\nInstead of calling the "),e("Term",{attrs:{term:"worker"}}),t._v(" directly, an intermediate queue is used. So the service adds an "),e("em",[e("Term",{attrs:{term:"activity_task"}})],1),t._v(" to this\nqueue and a "),e("Term",{attrs:{term:"worker"}}),t._v(" receives the "),e("Term",{attrs:{term:"task"}}),t._v(" using a long poll request.\nCadence calls this queue used to dispatch "),e("Term",{attrs:{term:"activity_task",show:"activity_tasks"}}),t._v(" an "),e("em",[e("Term",{attrs:{term:"activity_task_list"}})],1),t._v(".")],1),t._v(" "),e("p",[t._v("Similarly, when a "),e("Term",{attrs:{term:"workflow"}}),t._v(" needs to handle an external "),e("Term",{attrs:{term:"event"}}),t._v(", a "),e("Term",{attrs:{term:"decision_task"}}),t._v(" is created.\nA "),e("Term",{attrs:{term:"decision_task_list"}}),t._v(" is used to deliver it to the "),e("Term",{attrs:{term:"workflow_worker"}}),t._v(" (also called "),e("em",[t._v("decider")]),t._v(").")],1),t._v(" "),e("p",[t._v("While Cadence "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(" are queues, they have some differences from commonly used queuing technologies.\nThe main one is that they do not require explicit registration and are created on demand. The number of "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v("\nis not limited. A common use case is to have a "),e("Term",{attrs:{term:"task_list"}}),t._v(" per "),e("Term",{attrs:{term:"worker"}}),t._v(" process and use it to deliver "),e("Term",{attrs:{term:"activity_task",show:"activity_tasks"}}),t._v("\nto the process. 
Another use case is to have a "),e("Term",{attrs:{term:"task_list"}}),t._v(" per pool of "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(".")],1),t._v(" "),e("p",[t._v("There are multiple advantages of using a "),e("Term",{attrs:{term:"task_list"}}),t._v(" to deliver "),e("Term",{attrs:{term:"task",show:"tasks"}}),t._v(" instead of invoking an "),e("Term",{attrs:{term:"activity_worker"}}),t._v(" through a synchronous RPC:")],1),t._v(" "),e("ul",[e("li",[e("Term",{attrs:{term:"worker",show:"Worker"}}),t._v(" doesn't need to have any open ports, which is more secure.")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"worker",show:"Worker"}}),t._v(" doesn't need to advertise itself through DNS or any other network discovery mechanism.")],1),t._v(" "),e("li",[t._v("When all "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" are down, messages are persisted in a "),e("Term",{attrs:{term:"task_list"}}),t._v(" waiting for the "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" to recover.")],1),t._v(" "),e("li",[t._v("A "),e("Term",{attrs:{term:"worker"}}),t._v(" polls for a message only when it has spare capacity, so it never gets overloaded.")],1),t._v(" "),e("li",[t._v("Automatic load balancing across a large number of "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(".")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"task_list",show:"Task_lists"}}),t._v(" support server side throttling. This allows you to limit the "),e("Term",{attrs:{term:"task"}}),t._v(" dispatch rate to the pool of "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" and still supports adding a "),e("Term",{attrs:{term:"task"}}),t._v(" with a higher rate when spikes happen.")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"task_list",show:"Task_lists"}}),t._v(" can be used to route a request to specific pools of "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" or even a specific process.")],1)])])}),[],!1,null,null,null);e.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[49],{358:function(t,e,r){"use strict";r.r(e);var s=r(0),a=Object(s.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"task-lists"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#task-lists"}},[t._v("#")]),t._v(" Task lists")]),t._v(" "),e("p",[t._v("When a "),e("Term",{attrs:{term:"workflow"}}),t._v(" invokes an "),e("Term",{attrs:{term:"activity"}}),t._v(", it sends the "),e("code",[t._v("ScheduleActivityTask")]),t._v(" "),e("Term",{attrs:{term:"decision"}}),t._v(" to the\nCadence service. As a result, the service updates the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state and dispatches\nan "),e("Term",{attrs:{term:"activity_task"}}),t._v(" to a "),e("Term",{attrs:{term:"worker"}}),t._v(" that implements the "),e("Term",{attrs:{term:"activity"}}),t._v(".\nInstead of calling the "),e("Term",{attrs:{term:"worker"}}),t._v(" directly, an intermediate queue is used. 
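To make the task-list mechanics described above concrete, here is a minimal Go-client sketch of both sides: a worker process subscribing to a task list, and a workflow routing an activity to that same task list. The domain, task list name, and service-client wiring are illustrative assumptions, not values from this page.

```go
package sample

import (
	"time"

	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"
	"go.uber.org/cadence/workflow"
)

// startWorker subscribes a worker process to a task list. The task list is
// created on demand by the service; no explicit registration is needed.
func startWorker(service workflowserviceclient.Interface) error {
	w := worker.New(service, "samples-domain", "media-processing", worker.Options{})
	return w.Start() // long-polls the task list for tasks
}

// withRouting returns a context that routes activities to the same task
// list, i.e. to the specific pool of workers polling it.
func withRouting(ctx workflow.Context) workflow.Context {
	return workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		TaskList:               "media-processing",
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	})
}
```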
So the service adds an "),e("em",[e("Term",{attrs:{term:"activity_task"}})],1),t._v(" to this\nqueue and a "),e("Term",{attrs:{term:"worker"}}),t._v(" receives the "),e("Term",{attrs:{term:"task"}}),t._v(" using a long poll request.\nCadence calls this queue used to dispatch "),e("Term",{attrs:{term:"activity_task",show:"activity_tasks"}}),t._v(" an "),e("em",[e("Term",{attrs:{term:"activity_task_list"}})],1),t._v(".")],1),t._v(" "),e("p",[t._v("Similarly, when a "),e("Term",{attrs:{term:"workflow"}}),t._v(" needs to handle an external "),e("Term",{attrs:{term:"event"}}),t._v(", a "),e("Term",{attrs:{term:"decision_task"}}),t._v(" is created.\nA "),e("Term",{attrs:{term:"decision_task_list"}}),t._v(" is used to deliver it to the "),e("Term",{attrs:{term:"workflow_worker"}}),t._v(" (also called "),e("em",[t._v("decider")]),t._v(").")],1),t._v(" "),e("p",[t._v("While Cadence "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v(" are queues, they have some differences from commonly used queuing technologies.\nThe main one is that they do not require explicit registration and are created on demand. The number of "),e("Term",{attrs:{term:"task_list",show:"task_lists"}}),t._v("\nis not limited. A common use case is to have a "),e("Term",{attrs:{term:"task_list"}}),t._v(" per "),e("Term",{attrs:{term:"worker"}}),t._v(" process and use it to deliver "),e("Term",{attrs:{term:"activity_task",show:"activity_tasks"}}),t._v("\nto the process. Another use case is to have a "),e("Term",{attrs:{term:"task_list"}}),t._v(" per pool of "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(".")],1),t._v(" "),e("p",[t._v("There are multiple advantages of using a "),e("Term",{attrs:{term:"task_list"}}),t._v(" to deliver "),e("Term",{attrs:{term:"task",show:"tasks"}}),t._v(" instead of invoking an "),e("Term",{attrs:{term:"activity_worker"}}),t._v(" through a synchronous RPC:")],1),t._v(" "),e("ul",[e("li",[e("Term",{attrs:{term:"worker",show:"Worker"}}),t._v(" doesn't need to have any open ports, which is more secure.")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"worker",show:"Worker"}}),t._v(" doesn't need to advertise itself through DNS or any other network discovery mechanism.")],1),t._v(" "),e("li",[t._v("When all "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" are down, messages are persisted in a "),e("Term",{attrs:{term:"task_list"}}),t._v(" waiting for the "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" to recover.")],1),t._v(" "),e("li",[t._v("A "),e("Term",{attrs:{term:"worker"}}),t._v(" polls for a message only when it has spare capacity, so it never gets overloaded.")],1),t._v(" "),e("li",[t._v("Automatic load balancing across a large number of "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(".")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"task_list",show:"Task_lists"}}),t._v(" support server side throttling. 
This allows you to limit the "),e("Term",{attrs:{term:"task"}}),t._v(" dispatch rate to the pool of "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" and still supports adding a "),e("Term",{attrs:{term:"task"}}),t._v(" with a higher rate when spikes happen.")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"task_list",show:"Task_lists"}}),t._v(" can be used to route a request to specific pools of "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" or even a specific process.")],1)])])}),[],!1,null,null,null);e.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/49.c13379b4.js b/assets/js/49.d781fef3.js similarity index 99% rename from assets/js/49.c13379b4.js rename to assets/js/49.d781fef3.js index 0f5dded45..26a095352 100644 --- a/assets/js/49.c13379b4.js +++ b/assets/js/49.d781fef3.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[49],{392:function(e,i,t){"use strict";t.r(i);var n=t(4),r=Object(n.a)({},(function(){var e=this,i=e._self._c;return i("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[i("h3",{attrs:{id:"if-i-change-code-logic-inside-an-cadence-activity-for-example-my-activity-is-calling-database-a-but-now-i-want-it-to-call-database-b-will-it-trigger-an-non-deterministic-error"}},[i("a",{staticClass:"header-anchor",attrs:{href:"#if-i-change-code-logic-inside-an-cadence-activity-for-example-my-activity-is-calling-database-a-but-now-i-want-it-to-call-database-b-will-it-trigger-an-non-deterministic-error"}},[e._v("#")]),e._v(" If I change code logic inside an Cadence activity (for example, my activity is calling database A but now I want it to call database B), will it trigger an non-deterministic error?")]),e._v(" "),i("p",[i("b",[e._v("NO")]),e._v(". This change will not trigger non-deterministic error.")]),e._v(" "),i("p",[e._v("An Activity is the smallest unit of execution for Cadence and what happens inside activities are not recorded as historical events and therefore will not be replayed. In short, this change is deterministic and it is fine to modify logic inside activities.")]),e._v(" "),i("h3",{attrs:{id:"does-changing-the-workflow-definition-trigger-non-determinstic-errors"}},[i("a",{staticClass:"header-anchor",attrs:{href:"#does-changing-the-workflow-definition-trigger-non-determinstic-errors"}},[e._v("#")]),e._v(" Does changing the workflow definition trigger non-determinstic errors?")]),e._v(" "),i("p",[i("b",[e._v("YES")]),e._v(". This is a very typical non-deterministic error.")]),e._v(" "),i("p",[e._v("When a new workflow code change is deployed, Cadence will find if it is compatible with\nCadence history. 
Changes to workflow definition will fail the replay process of Cadence\nas it finds the new workflow definition incompatible with previous historical events.")]),e._v(" "),i("p",[e._v("Here is a list of common workflow definition changes.")]),e._v(" "),i("ul",[i("li",[e._v("Changing workflow parameter counts")]),e._v(" "),i("li",[e._v("Changing workflow parameter types")]),e._v(" "),i("li",[e._v("Changing workflow return types")])]),e._v(" "),i("p",[e._v("The following changes are not categorized as definition changes and therefore will not\ntrigger non-deterministic errors.")]),e._v(" "),i("ul",[i("li",[e._v("Changes of workflow return values")]),e._v(" "),i("li",[e._v("Changing workflow parameter names as they are just positional")])]),e._v(" "),i("h3",{attrs:{id:"does-changing-activity-definitions-trigger-non-determinstic-errors"}},[i("a",{staticClass:"header-anchor",attrs:{href:"#does-changing-activity-definitions-trigger-non-determinstic-errors"}},[e._v("#")]),e._v(" Does changing activity definitions trigger non-deterministic errors?")]),e._v(" "),i("p",[i("b",[e._v("YES")]),e._v(". Similar to a workflow definition change, this is also a very typical non-deterministic error.")]),e._v(" "),i("p",[e._v("Activities are also recorded and replayed by Cadence. Therefore, changes to activities must also be compatible with Cadence history. The following changes are common ones that trigger non-deterministic errors.")]),e._v(" "),i("ul",[i("li",[e._v("Changing activity parameter counts")]),e._v(" "),i("li",[e._v("Changing activity parameter types")]),e._v(" "),i("li",[e._v("Changing activity return types")])]),e._v(" "),i("p",[e._v("As activity parameters are also positional, these two changes will NOT trigger non-deterministic errors.")]),e._v(" "),i("ul",[i("li",[e._v("Changes of activity return values")]),e._v(" "),i("li",[e._v("Changing activity parameter names")])]),e._v(" "),i("p",[e._v("Activity return values inside workflows are not recorded and replayed.")]),e._v(" "),i("h3",{attrs:{id:"what-changes-inside-workflows-may-potentially-trigger-non-deterministic-errors"}},[i("a",{staticClass:"header-anchor",attrs:{href:"#what-changes-inside-workflows-may-potentially-trigger-non-deterministic-errors"}},[e._v("#")]),e._v(" What changes inside workflows may potentially trigger non-deterministic errors?")]),e._v(" "),i("p",[e._v("Cadence records each execution of a workflow and the activity executions inside each of them. Therefore, new changes must be compatible with the execution order inside the workflow. The following changes will fail the non-deterministic check.")]),e._v(" "),i("ul",[i("li",[e._v("Appending another activity")]),e._v(" "),i("li",[e._v("Deleting an existing activity")]),e._v(" "),i("li",[e._v("Reordering activities")])]),e._v(" "),i("p",[e._v("If you really need to change the activity implementation based on new business requirements, you may consider versioning your workflow.")]),e._v(" "),i("h3",{attrs:{id:"are-cadence-signals-replayed-if-definition-of-signal-is-changed-will-it-trigger-non-deterministic-errors"}},[i("a",{staticClass:"header-anchor",attrs:{href:"#are-cadence-signals-replayed-if-definition-of-signal-is-changed-will-it-trigger-non-deterministic-errors"}},[e._v("#")]),e._v(" Are Cadence signals replayed? If the definition of a signal is changed, will it trigger non-deterministic errors?")]),e._v(" "),i("p",[e._v("Yes. If a signal is used in a workflow, it becomes a critical component of your workflow. Because signals also involve I/O to your workflow, they are also recorded and replayed. Modifications to signal definitions or usage may yield non-deterministic errors, for instance, changing the return type of a signal.")]),e._v(" "),i("h3",{attrs:{id:"if-i-have-new-business-requirement-and-really-need-to-change-the-definition-of-a-workflow-what-should-i-do"}},[i("a",{staticClass:"header-anchor",attrs:{href:"#if-i-have-new-business-requirement-and-really-need-to-change-the-definition-of-a-workflow-what-should-i-do"}},[e._v("#")]),e._v(" If I have a new business requirement and really need to change the definition of a workflow, what should I do?")]),e._v(" "),i("p",[e._v("You may introduce a new workflow registered to your worker and divert traffic to it, or use versioning for your workflow. Check out the "),i("a",{attrs:{href:"https://cadenceworkflow.io/docs/go-client/workflow-versioning/",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence website"),i("OutboundLink")],1),e._v(" for more information about versioning.")],1),e._v(" "),
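The versioning answer above is prose-only, so here is a minimal sketch of the Go client's workflow.GetVersion mechanism it refers to. The change ID is invented, and the ActivityA/ActivityC names reuse the scenario from the replayer post earlier in this document; see the linked versioning docs for the authoritative usage.

```go
package sample

import "go.uber.org/cadence/workflow"

func SampleWorkflow(ctx workflow.Context) (string, error) {
	var result string

	// Histories recorded before this change replay as DefaultVersion and keep
	// executing ActivityA; fresh executions record version 1 and run ActivityC.
	v := workflow.GetVersion(ctx, "replace-activity-a", workflow.DefaultVersion, 1)
	if v == workflow.DefaultVersion {
		if err := workflow.ExecuteActivity(ctx, ActivityA).Get(ctx, &result); err != nil {
			return "", err
		}
	} else {
		if err := workflow.ExecuteActivity(ctx, ActivityC).Get(ctx, &result); err != nil {
			return "", err
		}
	}
	return result, nil
}
```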
### Are Cadence signals replayed? If the definition of a signal is changed, will it trigger non-deterministic errors?

Yes. If a signal is used in a workflow, it becomes a critical component of your workflow. Because signals also involve I/O to your workflow, they are also recorded and replayed. Modifications to signal definitions or usage may lead to non-deterministic errors; for instance, changing the return type of a signal.
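For illustration, a small Go sketch; the signal name `"approval-signal"` and the payload struct are hypothetical. The payload received below is recorded in history and fed back during replay, which is why changing its shape later must stay compatible with old histories:

```go
package sample

import (
	"errors"

	"go.uber.org/cadence/workflow"
)

// ApprovalSignal is a hypothetical signal payload. Its recorded bytes are
// deserialized into this struct again on every replay, so renaming or
// retyping its fields is a change that must remain history-compatible.
type ApprovalSignal struct {
	Approved bool
	Approver string
}

func ApprovalWorkflow(ctx workflow.Context) error {
	var sig ApprovalSignal
	ch := workflow.GetSignalChannel(ctx, "approval-signal")
	// During a live run this blocks until a signal arrives; during replay the
	// value is read back from the recorded history instead.
	ch.Receive(ctx, &sig)
	if !sig.Approved {
		return errors.New("request rejected") // simplified handling for this sketch
	}
	return nil
}
```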
### If I have a new business requirement and really need to change the definition of a workflow, what should I do?

You may introduce a new workflow registered to your worker and divert traffic to it, or use versioning for your workflow. Check out the [Cadence website](https://cadenceworkflow.io/docs/go-client/workflow-versioning/) for more information about versioning.

### Do changes to local activities' definitions trigger non-deterministic errors?

Yes. Local activities are recorded and therefore replayed by Cadence. Incompatible changes to local activity definitions will lead to non-deterministic errors.
# Archival

Archival is a feature that automatically moves workflow histories (history archival) and visibility records (visibility archival) from persistence to a secondary data store after the retention period, thus allowing users to keep workflow histories and visibility records as long as necessary without overwhelming the Cadence primary data store. There are two reasons you may consider turning on archival for your domain:

1. **Compliance:** For legal reasons, histories may need to be stored for a long period of time.
2. **Debugging:** Old histories can still be accessed for debugging.

The current implementation of the archival feature has two limitations:

1. **RunID Required:** In order to retrieve an archived workflow history, both the workflowID and the runID are required.
2. **Best Effort:** It is possible that a history or visibility record is deleted from Cadence primary persistence without being archived first. These cases are rare but are possible with the current state of archival. Please check the FAQ section for how to get notified when this happens.

## Concepts

- **Archiver:** The component responsible for archiving and retrieving workflow histories and visibility records. Its interface is generic and supports different kinds of archival locations: local file system, S3, Kafka, etc. Check [this README](https://github.com/uber/cadence/blob/master/common/archiver/README.md) if you would like to add a new archiver implementation for your data store.
- **URI:** A URI is used to specify the archival location. Based on the scheme part of a URI, the corresponding archiver is selected by the system to perform the archival operation.

## Configuring Archival

Archival is controlled by both domain-level config and cluster-level config. History and visibility archival have separate domain/cluster configs, but they share the same purpose.

### Cluster Level Archival Config

A Cadence cluster can be in one of three archival states:

- **Disabled:** No archivals will occur and the archivers will not be initialized on service startup.
- **Paused:** This state is not yet implemented. Currently, setting a cluster to paused is the same as setting it to disabled.
- **Enabled:** Archivals will occur.

Enabling the cluster for archival simply means workflow histories will be archived. There is another config which controls whether archived histories or visibility records can be accessed. Both configs have defaults defined in the static yaml and can be overwritten via dynamic config. Note, however, that dynamic config takes effect only when archival is enabled in the static yaml.

### Domain Level Archival Config

A domain includes two pieces of archival-related config:

- **Status:** Either enabled or disabled. If a domain is in the disabled state, no archivals will occur for that domain.
- **URI:** The scheme and location where histories or visibility records will be archived to. When a domain enables archival for the first time, the URI is set and can never be changed. If a URI is not specified when first enabling a domain for archival, a default URI from the static config is used.

## Running Locally

You can follow the steps below to run and test the archival feature locally:

1. `./cadence-server start`
2. `./cadence --do samples-domain domain register --gd false --history_archival_status enabled --visibility_archival_status enabled --retention 0`
3. Run the [helloworld cadence-sample](https://github.com/uber-common/cadence-samples) by following the README
4. Copy the workflowID of the completed workflow from the log output
5. Retrieve the runID through the archived visibility record: `./cadence --do samples-domain wf listarchived -q 'WorkflowID = ""'`
6. Retrieve the archived history: `./cadence --do samples-domain wf show --wid --rid `

In step 2, we registered a new domain and enabled both the history and visibility archival features for that domain. Since we didn't provide an archival URI when registering the new domain, the default URI specified in `config/development.yaml` is used. The default URI is `file:///tmp/cadence_archival/development` for history archival and `"file:///tmp/cadence_vis_archival/development"` for visibility archival. You can find the archived workflow history under the `/tmp/cadence_archival/development` directory and the archived visibility record under the `/tmp/cadence_vis_archival/development` directory.

## Running in Production

Cadence supports uploading workflow histories to Google Cloud and Amazon S3 for archival in production. Check the documentation in the [GCloud archival component](https://github.com/uber/cadence/tree/master/common/archiver/gcloud) and the [S3 archival component](https://github.com/uber/cadence/tree/master/common/archiver/s3store).

Below is an example of an Amazon S3 archival configuration:

```yaml
archival:
  history:
    status: "enabled"
    enableRead: true
    provider:
      s3store:
        region: "us-east-2"
  visibility:
    status: "enabled"
    enableRead: true
    provider:
      s3store:
        region: "us-east-2"
domainDefaults:
  archival:
    history:
      status: "enabled"
      URI: "s3://put-name-of-your-s3-bucket-here"
    visibility:
      status: "enabled"
      URI: "s3://put-name-of-your-s3-bucket-here" # most probably the same as the previous URI
```

## FAQ

### When does archival happen?

In theory, we would like both history and visibility archival to happen after the workflow closes and the retention period passes. However, due to some limitations in the implementation, only history archival happens after the retention period, while visibility archival happens immediately after the workflow closes. Please treat this as an implementation detail inside Cadence and do not rely on it. Archived data should only be checked after the retention period, and we may change the way we do visibility archival in the future.

### What's the query syntax for visibility archival?

The `listArchived` CLI command and API accept a SQL-like query for retrieving archived visibility records, similar to how the `listWorkflow` command works. Unfortunately, since different Archiver implementations have very different capabilities, there is currently no universal query syntax that works for all Archiver implementations. Please check the README (for example, [S3](https://github.com/uber/cadence/tree/master/common/archiver/s3store) and [GCP](https://github.com/uber/cadence/tree/master/common/archiver/gcloud)) of the Archiver used by your domain for the supported query syntax and limitations.

### How does archival interact with global domains?

If you have a global domain, when archival occurs it will first run on the active cluster, and some time later it will run on the standby cluster when replication happens. For history archival, Cadence will check whether the upload operation has already been performed and skip duplicate efforts. For visibility archival, there is no such check, and duplicated visibility records will be uploaded. Depending on the Archiver implementation, those duplicated uploads may consume more space in the underlying storage, and duplicated entries may be returned.

### Can I specify multiple archival URIs?

Each domain can only have one URI for history archival and one URI for visibility archival. Different domains, however, can have different URIs (with different schemes).

### How does archival work with PII?

No Cadence workflow should ever operate on clear-text PII. Cadence can be thought of as a database, and just as one would not store PII in a database, PII should not be stored in Cadence. This is even more important when archival is enabled, because these histories can be kept forever.

## Planned Future Work

- Support retrieving archived workflow histories without providing the runID.
- Provide a guarantee that no history or visibility record is deleted from primary persistence before being archived.
- Implement the **Paused** state. In this state no archivals will occur, but histories and visibility records also will not be deleted from persistence. Once enabled again from the paused state, all skipped archivals will occur.
# Cross-DC replication

The Cadence Global Domain feature provides clients with the capability to continue their workflow executions from another cluster in the event of a datacenter failover. Although you can configure a Global Domain to be replicated to any number of clusters, it is only considered active in a single cluster.

## Global Domains Architecture

Cadence has introduced a new top-level entity, Global Domains, which provides support for replication of workflow executions across clusters. A global domain can be configured with more than one cluster, but can only be `active` in one of the clusters at any point in time. We call it `passive` or `standby` in the clusters where it is not active.

The number of standby clusters can be zero, if a global domain is configured with only one cluster. This is preferred/recommended.

Any workflow of a global domain can only make progress in its `active` cluster, and the workflow progress is replicated to the `standby` clusters. For example, starting a workflow by calling `StartWorkflow`, or starting an activity (via the `PollForActivityTask` API), can only be processed in the active cluster. After the active cluster makes progress, the standby clusters (if any) poll the history from the active cluster to replicate the workflow states.

However, standby clusters can also receive requests, e.g. for starting workflows or starting activities. They know which cluster the domain is active in, so the requests can be routed to the active cluster. This is called `api-forwarding` in Cadence.
"),a("code",[e._v("api-forwarding")]),e._v(" makes it possible to have no downtime during failover.\nThere are two "),a("code",[e._v("api-forwarding")]),e._v(" policy: "),a("code",[e._v("selected-api-forwarding")]),e._v(" and "),a("code",[e._v("all-domain-api-forwarding")]),e._v(" policy.")]),e._v(" "),a("p",[e._v("When using "),a("code",[e._v("selected-api-forwarding")]),e._v(", applications need to run different set of activity & workflow "),a("Term",{attrs:{term:"worker",show:"workers"}}),e._v(" polling on every cluster.\nCadence will only dispatch tasks on the current active cluster; "),a("Term",{attrs:{term:"worker",show:"workers"}}),e._v(" on the standby cluster will sit idle\nuntil the Global "),a("Term",{attrs:{term:"domain",show:"Domain"}}),e._v(" is failed over. This is recommended if XDC is being used in multiple clusters running in very remote data centers(regions), which forwarding is expensive to do.")],1),e._v(" "),a("p",[e._v("When using "),a("code",[e._v("all-domain-api-forwarding")]),e._v(", applications only need to run activity & workflow "),a("Term",{attrs:{term:"worker",show:"workers"}}),e._v(" polling on one cluster. This makes it easier for the application setup. This is recommended\nwhen clusters are all in local or nearby datacenters. See more details in "),a("a",{attrs:{href:"https://github.com/uber/cadence/discussions/4530",target:"_blank",rel:"noopener noreferrer"}},[e._v("discussion"),a("OutboundLink")],1),e._v(".")],1),e._v(" "),a("h3",{attrs:{id:"conflict-resolution"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#conflict-resolution"}},[e._v("#")]),e._v(" Conflict Resolution")]),e._v(" "),a("p",[e._v("Unlike local "),a("Term",{attrs:{term:"domain",show:"domains"}}),e._v(" which provide at-most-once semantics for "),a("Term",{attrs:{term:"activity"}}),e._v(" execution, Global "),a("Term",{attrs:{term:"domain",show:"Domains"}}),e._v(" can only support at-least-once\nsemantics. Cadence global domain relies on asynchronous replication of "),a("Term",{attrs:{term:"event",show:"events"}}),e._v(" across clusters, so in the event of a failover\nit is possible that "),a("Term",{attrs:{term:"activity"}}),e._v(" gets dispatched again on the new active cluster due to a replication "),a("Term",{attrs:{term:"task"}}),e._v(" lag. This also\nmeans that whenever "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" is updated after a failover by the new cluster, any previous replication "),a("Term",{attrs:{term:"task",show:"tasks"}}),e._v("\nfor that execution cannot be applied. This results in loss of some progress made by the "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" in the\nprevious active cluster. During such conflict resolution, Cadence re-injects any external "),a("Term",{attrs:{term:"event",show:"events"}}),e._v(" like "),a("Term",{attrs:{term:"signal",show:"Signals"}}),e._v(" to the\nnew history before discarding replication "),a("Term",{attrs:{term:"task",show:"tasks"}}),e._v(". 
Even though some progress could rollback during failovers, Cadence\nprovides the guarantee that "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" won’t get stuck and will continue to make forward progress.")],1),e._v(" "),a("h2",{attrs:{id:"global-domain-concepts-configuration-and-operation"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#global-domain-concepts-configuration-and-operation"}},[e._v("#")]),e._v(" Global Domain Concepts, Configuration and Operation")]),e._v(" "),a("h3",{attrs:{id:"concepts"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#concepts"}},[e._v("#")]),e._v(" Concepts")]),e._v(" "),a("h4",{attrs:{id:"isglobal"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#isglobal"}},[e._v("#")]),e._v(" IsGlobal")]),e._v(" "),a("p",[e._v("This config is used to distinguish "),a("Term",{attrs:{term:"domain",show:"domains"}}),e._v(" local to the cluster from the global "),a("Term",{attrs:{term:"domain"}}),e._v(". It controls the creation of\nreplication "),a("Term",{attrs:{term:"task",show:"tasks"}}),e._v(" on updates allowing the state to be replicated across clusters. This is a read-only setting that can\nonly be set when the "),a("Term",{attrs:{term:"domain"}}),e._v(" is provisioned.")],1),e._v(" "),a("h4",{attrs:{id:"clusters"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#clusters"}},[e._v("#")]),e._v(" Clusters")]),e._v(" "),a("p",[e._v("A list of clusters where the "),a("Term",{attrs:{term:"domain"}}),e._v(" can fail over to, including the current active cluster.\nThis is also a read-only setting that can only be set when the "),a("Term",{attrs:{term:"domain"}}),e._v(" is provisioned. A re-replication feature on the\nroadmap will allow updating this config to add/remove clusters in the future.")],1),e._v(" "),a("h4",{attrs:{id:"active-cluster-name"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#active-cluster-name"}},[e._v("#")]),e._v(" Active Cluster Name")]),e._v(" "),a("p",[e._v("Name of the current active cluster for the Global "),a("Term",{attrs:{term:"domain",show:"Domain"}}),e._v(". This config is updated each time the Global "),a("Term",{attrs:{term:"domain",show:"Domain"}}),e._v(" is failed over to\nanother cluster.")],1),e._v(" "),a("h4",{attrs:{id:"failover-version"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#failover-version"}},[e._v("#")]),e._v(" Failover Version")]),e._v(" "),a("p",[e._v("Unique failover version which also represents the current active cluster for Global "),a("Term",{attrs:{term:"domain",show:"Domain"}}),e._v(". Cadence allows failover to\nbe triggered from any cluster, so failover version is designed in a way to not allow conflicts if failover is mistakenly\ntriggered simultaneously on two clusters.")],1),e._v(" "),a("h3",{attrs:{id:"operate-by-cli"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#operate-by-cli"}},[e._v("#")]),e._v(" Operate by CLI")]),e._v(" "),a("p",[e._v("The Cadence "),a("Term",{attrs:{term:"CLI"}}),e._v(" can also be used to "),a("Term",{attrs:{term:"query"}}),e._v(" the "),a("Term",{attrs:{term:"domain"}}),e._v(" config or perform failovers. 
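Because a failover can re-dispatch an activity that already ran, activities under Global Domains should tolerate at-least-once execution. Below is a minimal Go sketch of one common approach, deduplicating side effects with an idempotency key; the `paymentStore` interface, its `PutIfAbsent` method, and the key scheme are hypothetical, not part of the Cadence API:

```go
package sample

import (
	"context"
	"fmt"

	"go.uber.org/cadence/activity"
)

// paymentStore is a hypothetical external store exposing a conditional put.
type paymentStore interface {
	// PutIfAbsent returns false when the key has already been written.
	PutIfAbsent(ctx context.Context, key string, amountCents int64) (bool, error)
}

var store paymentStore // wired up at worker start in real code

// ChargeActivity tolerates re-dispatch after a failover: a duplicate run
// computes the same idempotency key and becomes a no-op.
func ChargeActivity(ctx context.Context, orderID string, amountCents int64) error {
	info := activity.GetInfo(ctx)
	// Key the side effect on the workflow execution plus the business entity,
	// so at-least-once dispatch still yields at-most-once charging.
	key := fmt.Sprintf("%s/%s/%s", info.WorkflowExecution.ID, info.WorkflowExecution.RunID, orderID)
	applied, err := store.PutIfAbsent(ctx, key, amountCents)
	if err != nil {
		return err
	}
	if !applied {
		activity.GetLogger(ctx).Info("duplicate dispatch detected; skipping charge")
	}
	return nil
}
```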
## Global Domain Concepts, Configuration and Operation

### Concepts

#### IsGlobal

This config is used to distinguish domains local to the cluster from global domains. It controls the creation of replication tasks on updates, allowing the state to be replicated across clusters. This is a read-only setting that can only be set when the domain is provisioned.

#### Clusters

A list of clusters where the domain can fail over to, including the current active cluster. This is also a read-only setting that can only be set when the domain is provisioned. A re-replication feature on the roadmap will allow updating this config to add/remove clusters in the future.

#### Active Cluster Name

The name of the current active cluster for the Global Domain. This config is updated each time the Global Domain is failed over to another cluster.

#### Failover Version

A unique failover version which also represents the current active cluster for the Global Domain. Cadence allows failover to be triggered from any cluster, so the failover version is designed in a way that avoids conflicts if failover is mistakenly triggered simultaneously on two clusters.

### Operate by CLI

The Cadence CLI can also be used to query the domain config or perform failovers. Here are some useful commands.

#### Describe Global Domain

The following command can be used to describe Global Domain metadata:

```bash
$ cadence --do cadence-canary-xdc d desc
Name: cadence-canary-xdc
Description: cadence canary cross dc testing domain
OwnerEmail: cadence-dev@cadenceworkflow.io
DomainData:
Status: REGISTERED
RetentionInDays: 7
EmitMetrics: true
ActiveClusterName: dc1
Clusters: dc1, dc2
```

#### Failover Global Domain using the domain update command (being deprecated in favor of managed graceful failover)

The following command can be used to fail over the Global Domain *my-domain-global* to the *dc2* cluster:

```bash
$ cadence --do my-domain-global d up --ac dc2
```

#### Failover Global Domain using Managed Graceful Failover

First of all, update the domains to enable this feature for each of them:

```bash
$ cadence --do test-global-domain-0 d update --domain_data IsManagedByCadence:true
$ cadence --do test-global-domain-1 d update --domain_data IsManagedByCadence:true
$ cadence --do test-global-domain-2 d update --domain_data IsManagedByCadence:true
...
```

Then you can start failing over those global domains using managed failover:

```bash
cadence admin cluster failover start --source_cluster dc1 --target_cluster dc2
```

This will fail over all the domains with `IsManagedByCadence:true` from dc1 to dc2.

You can provide more detailed options when using the command, and also watch the progress of the failover. Feel free to explore the `cadence admin cluster failover` tab.

## Running Locally

The best way is to use the Cadence [docker-compose](https://github.com/uber/cadence/tree/master/docker): `docker-compose -f docker-compose-multiclusters.yml up`

## Running in Production

The global domain feature needs to be enabled in the [static config](/docs/operation-guide/setup/#static-configs).

Here we use clusterDCA and clusterDCB as an example. We pick clusterDCA as the primary (formerly called "master") cluster. The only difference in being a primary cluster is that it is responsible for domain registration. The primary can be changed later, but it needs to be the same across all clusters.

The ClusterMeta config of clusterDCA should be:

```yaml
dcRedirectionPolicy:
  policy: "selected-apis-forwarding"

clusterMetadata:
  enableGlobalDomain: true
  failoverVersionIncrement: 10
  masterClusterName: "clusterDCA"
  currentClusterName: "clusterDCA"
  clusterInformation:
    clusterDCA:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "cadence-frontend"
      rpcAddress: "<>:<>"
    clusterDCB:
      enabled: true
      initialFailoverVersion: 0
      rpcName: "cadence-frontend"
      rpcAddress: "<>:<>"
```

And the ClusterMeta config of clusterDCB should be:

```yaml
dcRedirectionPolicy:
  policy: "selected-apis-forwarding"

clusterMetadata:
  enableGlobalDomain: true
  failoverVersionIncrement: 10
"),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("masterClusterName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"clusterDCA"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("currentClusterName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"clusterDCB"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterInformation")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterDCA")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"<>:<>"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterDCB")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("0")]),e._v("\n\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"<>:<>"')]),e._v("\n")])])]),a("p",[e._v("After the configuration is deployed:")]),e._v(" "),a("ol",[a("li",[a("p",[e._v("Register a global domain\n"),a("code",[e._v("cadence --do domain register --global_domain true --clusters clusterDCA clusterDCB --active_cluster clusterDCA")])])]),e._v(" "),a("li",[a("p",[e._v("Run some workflow and failover domain from one to another\n"),a("code",[e._v("cadence --do domain update --active_cluster clusterDCB")])])])]),e._v(" "),a("p",[e._v("Then the domain should be failed over to clusterDCB. Now worklfows are read-only in clusterDCA. So your workers polling tasks from clusterDCA will become idle.")]),e._v(" "),a("p",[e._v("Note 1: that even though clusterDCA is standy/read-only for this domain, it can be active for another domain. 
# Cross-DC replication

The Cadence Global Domain feature provides clients with the capability to continue their workflow executions from another cluster in the event of a datacenter failover. Although you can configure a Global Domain to be replicated to any number of clusters, it is only considered active in a single cluster.

## Global Domains Architecture

Cadence has introduced a new top-level entity, Global Domains, which provides support for replicating workflow executions across clusters. A global domain can be configured with more than one cluster, but can only be `active` in one of the clusters at any point in time. In the clusters where it is not active, it is called `passive` or `standby`.

The number of standby clusters can be zero, if a global domain is configured with only one cluster. This is the preferred/recommended setup.

Any workflow of a global domain can only make progress in its `active` cluster, and that progress is replicated to the `standby` clusters. For example, starting a workflow (the `StartWorkflow` API) or starting an activity (the `PollForActivityTask` API) can only be processed in the active cluster. After the active cluster makes progress, the standby clusters (if any) poll the history from the active cluster to replicate the workflow state.

However, standby clusters can also receive requests, e.g. to start workflows or to start activities. They know which cluster the domain is active in, so the requests can be routed to the active cluster. This is called `api-forwarding` in Cadence.
"),a("code",[e._v("api-forwarding")]),e._v(" makes it possible to have no downtime during failover.\nThere are two "),a("code",[e._v("api-forwarding")]),e._v(" policy: "),a("code",[e._v("selected-api-forwarding")]),e._v(" and "),a("code",[e._v("all-domain-api-forwarding")]),e._v(" policy.")]),e._v(" "),a("p",[e._v("When using "),a("code",[e._v("selected-api-forwarding")]),e._v(", applications need to run different set of activity & workflow "),a("Term",{attrs:{term:"worker",show:"workers"}}),e._v(" polling on every cluster.\nCadence will only dispatch tasks on the current active cluster; "),a("Term",{attrs:{term:"worker",show:"workers"}}),e._v(" on the standby cluster will sit idle\nuntil the Global "),a("Term",{attrs:{term:"domain",show:"Domain"}}),e._v(" is failed over. This is recommended if XDC is being used in multiple clusters running in very remote data centers(regions), which forwarding is expensive to do.")],1),e._v(" "),a("p",[e._v("When using "),a("code",[e._v("all-domain-api-forwarding")]),e._v(", applications only need to run activity & workflow "),a("Term",{attrs:{term:"worker",show:"workers"}}),e._v(" polling on one cluster. This makes it easier for the application setup. This is recommended\nwhen clusters are all in local or nearby datacenters. See more details in "),a("a",{attrs:{href:"https://github.com/uber/cadence/discussions/4530",target:"_blank",rel:"noopener noreferrer"}},[e._v("discussion"),a("OutboundLink")],1),e._v(".")],1),e._v(" "),a("h3",{attrs:{id:"conflict-resolution"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#conflict-resolution"}},[e._v("#")]),e._v(" Conflict Resolution")]),e._v(" "),a("p",[e._v("Unlike local "),a("Term",{attrs:{term:"domain",show:"domains"}}),e._v(" which provide at-most-once semantics for "),a("Term",{attrs:{term:"activity"}}),e._v(" execution, Global "),a("Term",{attrs:{term:"domain",show:"Domains"}}),e._v(" can only support at-least-once\nsemantics. Cadence global domain relies on asynchronous replication of "),a("Term",{attrs:{term:"event",show:"events"}}),e._v(" across clusters, so in the event of a failover\nit is possible that "),a("Term",{attrs:{term:"activity"}}),e._v(" gets dispatched again on the new active cluster due to a replication "),a("Term",{attrs:{term:"task"}}),e._v(" lag. This also\nmeans that whenever "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" is updated after a failover by the new cluster, any previous replication "),a("Term",{attrs:{term:"task",show:"tasks"}}),e._v("\nfor that execution cannot be applied. This results in loss of some progress made by the "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" in the\nprevious active cluster. During such conflict resolution, Cadence re-injects any external "),a("Term",{attrs:{term:"event",show:"events"}}),e._v(" like "),a("Term",{attrs:{term:"signal",show:"Signals"}}),e._v(" to the\nnew history before discarding replication "),a("Term",{attrs:{term:"task",show:"tasks"}}),e._v(". 
### Conflict Resolution

Unlike local domains, which provide at-most-once semantics for activity execution, Global Domains can only support at-least-once semantics. A Cadence global domain relies on asynchronous replication of events across clusters, so in the event of a failover it is possible that an activity gets dispatched again in the new active cluster due to replication task lag. This also means that whenever a workflow execution is updated after a failover by the new cluster, any previous replication tasks for that execution can no longer be applied. This results in the loss of some progress made by the workflow execution in the previously active cluster. During such conflict resolution, Cadence re-injects any external events, like signals, into the new history before discarding replication tasks. Even though some progress could roll back during failovers, Cadence guarantees that workflows won't get stuck and will continue to make forward progress.

## Global Domain Concepts, Configuration and Operation

### Concepts

#### IsGlobal

This config is used to distinguish domains local to the cluster from global domains. It controls the creation of replication tasks on updates, allowing the state to be replicated across clusters. This is a read-only setting that can only be set when the domain is provisioned.

#### Clusters

A list of clusters the domain can fail over to, including the current active cluster. This is also a read-only setting that can only be set when the domain is provisioned. A re-replication feature on the roadmap will allow updating this config to add/remove clusters in the future.

#### Active Cluster Name

Name of the current active cluster for the Global Domain. This config is updated each time the Global Domain is failed over to another cluster.

#### Failover Version

A unique failover version, which also represents the current active cluster for the Global Domain. Cadence allows a failover to be triggered from any cluster, so the failover version is designed in a way that avoids conflicts if failovers are mistakenly triggered simultaneously on two clusters.
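As a worked example of how the version scheme avoids conflicts, here is a sketch (not the exact server code) based on the `failoverVersionIncrement: 10` and `initialFailoverVersion` values used in the production configs later on this page. Each cluster can only ever stamp versions congruent to its `initialFailoverVersion` modulo the increment, so two clusters can never produce the same version.

```go
// nextFailoverVersion sketches how a cluster picks the version it stamps
// when becoming active. initial is the cluster's initialFailoverVersion,
// increment is failoverVersionIncrement, current is the domain's current
// failover version.
func nextFailoverVersion(initial, increment, current int64) int64 {
	v := (current/increment)*increment + initial
	if v < current {
		v += increment
	}
	return v
}

// With increment=10, clusterDCA(initial=1), clusterDCB(initial=0):
// current=1 (DCA active): DCB fails over to 10; from 10, DCA would go to 11.
// Versions stamped by DCA are always ≡ 1 (mod 10) and by DCB ≡ 0 (mod 10),
// so simultaneous failovers can never collide on the same version.
```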
### Operate by CLI

The Cadence CLI can also be used to query the domain config or perform failovers. Here are some useful commands.

#### Describe Global Domain

The following command can be used to describe Global Domain metadata:

```bash
$ cadence --do cadence-canary-xdc d desc
Name: cadence-canary-xdc
Description: cadence canary cross dc testing domain
OwnerEmail: cadence-dev@cadenceworkflow.io
DomainData:
Status: REGISTERED
RetentionInDays: 7
EmitMetrics: true
ActiveClusterName: dc1
Clusters: dc1, dc2
```

#### Failover Global Domain using the domain update command (being deprecated in favor of managed graceful failover)

The following command can be used to fail over Global Domain *my-domain-global* to the *dc2* cluster:

```bash
$ cadence --do my-domain-global d up --ac dc2
```

#### Failover Global Domain using Managed Graceful Failover

First, update the domains to enable this feature for them:

```bash
$ cadence --do test-global-domain-0 d update --domain_data IsManagedByCadence:true
$ cadence --do test-global-domain-1 d update --domain_data IsManagedByCadence:true
$ cadence --do test-global-domain-2 d update --domain_data IsManagedByCadence:true
...
```

Then you can start failing over those global domains using managed failover:
extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence admin cluster failover start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--source_cluster")]),e._v(" dc1 "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--target_cluster")]),e._v(" dc2\n")])])]),a("p",[e._v("This will failover all the domains with "),a("code",[e._v("IsManagedByCadence:true")]),e._v(" from dc1 to dc2.")]),e._v(" "),a("p",[e._v("You can provide more detailed options when using the command, and also watch the progress of the failover.\nFeel free to explore the "),a("code",[e._v("cadence admin cluster failover")]),e._v(" tab.")]),e._v(" "),a("h2",{attrs:{id:"running-locally"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#running-locally"}},[e._v("#")]),e._v(" Running Locally")]),e._v(" "),a("p",[e._v("The best way is to use Cadence "),a("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/docker",target:"_blank",rel:"noopener noreferrer"}},[e._v("docker-compose"),a("OutboundLink")],1),e._v(":\n"),a("code",[e._v("docker-compose -f docker-compose-multiclusters.yml up")])]),e._v(" "),a("h2",{attrs:{id:"running-in-production"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#running-in-production"}},[e._v("#")]),e._v(" Running in Production")]),e._v(" "),a("p",[e._v("Enable global domain feature needs to be enabled in "),a("RouterLink",{attrs:{to:"/docs/operation-guide/setup/#static-configs"}},[e._v("static config")]),e._v(".")],1),e._v(" "),a("p",[e._v('Here we use clusterDCA and clusterDCB as an example. We pick clusterDCA as the primary(used to called "master") cluster.\nThe only difference of being a primary cluster is that it is responsible for domain registration. Primary can be changed later but it needs to be the same across all clusters.')]),e._v(" "),a("p",[e._v("The ClusterMeta config of clusterDCA should be")]),e._v(" "),a("div",{staticClass:"language-yaml extra-class"},[a("pre",{pre:!0,attrs:{class:"language-yaml"}},[a("code",[a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("dcRedirectionPolicy")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("policy")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"selected-apis-forwarding"')]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterMetadata")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enableGlobalDomain")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("failoverVersionIncrement")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("10")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("masterClusterName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"clusterDCA"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("currentClusterName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"clusterDCA"')]),e._v("\n 
"),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterInformation")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterDCA")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"<>:<>"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterDCB")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("0")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"<>:<>"')]),e._v("\n")])])]),a("p",[e._v("And ClusterMeta config of clusterDCB should be")]),e._v(" "),a("div",{staticClass:"language-yaml extra-class"},[a("pre",{pre:!0,attrs:{class:"language-yaml"}},[a("code",[a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("dcRedirectionPolicy")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("policy")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"selected-apis-forwarding"')]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterMetadata")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enableGlobalDomain")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("failoverVersionIncrement")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("10")]),e._v("\n 
"),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("masterClusterName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"clusterDCA"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("currentClusterName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"clusterDCB"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterInformation")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterDCA")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"<>:<>"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterDCB")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("0")]),e._v("\n\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"<>:<>"')]),e._v("\n")])])]),a("p",[e._v("After the configuration is deployed:")]),e._v(" "),a("ol",[a("li",[a("p",[e._v("Register a global domain\n"),a("code",[e._v("cadence --do domain register --global_domain true --clusters clusterDCA clusterDCB --active_cluster clusterDCA")])])]),e._v(" "),a("li",[a("p",[e._v("Run some workflow and failover domain from one to another\n"),a("code",[e._v("cadence --do domain update --active_cluster clusterDCB")])])])]),e._v(" "),a("p",[e._v("Then the domain should be failed over to clusterDCB. Now worklfows are read-only in clusterDCA. So your workers polling tasks from clusterDCA will become idle.")]),e._v(" "),a("p",[e._v("Note 1: that even though clusterDCA is standy/read-only for this domain, it can be active for another domain. 
Note 1: even though clusterDCA is standby/read-only for this domain, it can be active for another domain. Being active/standby is defined per domain, not per cluster. For example, if you use XDC to handle a DC failure of clusterDCA, you need to fail over all domains from clusterDCA to clusterDCB.

Note 2: even though a domain is standby/read-only in a cluster, say clusterDCA, sending write requests (StartWorkflow, SignalWorkflow, etc.) to that cluster can still work, because there is a forwarding component in the Frontend service. It will try to re-route the requests to the active cluster for the domain.

# Searching Workflows (Advanced visibility)

## Introduction

Cadence supports creating workflows with customized key-value pairs, updating the information within the workflow code, and then listing/searching workflows with a SQL-like query. For example, you can create workflows with keys `city` and `age`, then search all workflows with `city = seattle and age > 22`.

Also note that normal workflow properties like start time and workflow type can be queried as well.
For example, the following "),e("Term",{attrs:{term:"query"}}),t._v(" could be specified when "),e("RouterLink",{attrs:{to:"/docs/06-cli/#list-closed-or-open-workflow-executions"}},[t._v("listing workflows from the CLI")]),t._v(" or using the list APIs ("),e("a",{attrs:{href:"https://godoc.org/go.uber.org/cadence/client#Client",target:"_blank",rel:"noopener noreferrer"}},[t._v("Go"),e("OutboundLink")],1),t._v(", "),e("a",{attrs:{href:"https://static.javadoc.io/com.uber.cadence/cadence-client/2.6.0/com/uber/cadence/WorkflowService.Iface.html#ListWorkflowExecutions-com.uber.cadence.ListWorkflowExecutionsRequest-",target:"_blank",rel:"noopener noreferrer"}},[t._v("Java"),e("OutboundLink")],1),t._v("):")],1),t._v(" "),e("div",{staticClass:"language-sql extra-class"},[e("pre",{pre:!0,attrs:{class:"language-sql"}},[e("code",[t._v("WorkflowType "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"main.Workflow"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("AND")]),t._v(" CloseStatus "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"completed"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("AND")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("StartTime "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" \n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"2019-06-07T16:46:34-08:00"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("OR")]),t._v(" CloseTime "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"2019-06-07T16:46:34-08:00"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" \n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("ORDER")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("BY")]),t._v(" StartTime "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("DESC")]),t._v(" \n")])])]),e("p",[t._v("In other places, this is also called as "),e("code",[t._v("advanced visibility")]),t._v(". While "),e("code",[t._v("basic visibility")]),t._v(" is referred to basic listing without being able to search.")]),t._v(" "),e("h2",{attrs:{id:"memo-vs-search-attributes"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#memo-vs-search-attributes"}},[t._v("#")]),t._v(" Memo vs Search Attributes")]),t._v(" "),e("p",[t._v("Cadence offers two methods for creating "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" with key-value pairs: memo and search attributes. Memo can only be provided on "),e("Term",{attrs:{term:"workflow"}}),t._v(" start. Also, memo data are not indexed, and are therefore not searchable. Memo data are visible when listing "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" using the list APIs. Search attributes data are indexed so you can search "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" by "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" on these attributes. 
## Memo vs Search Attributes

Cadence offers two methods for creating workflows with key-value pairs: memo and search attributes. Memo can only be provided on workflow start. Also, memo data are not indexed, and are therefore not searchable. Memo data are visible when listing workflows using the list APIs. Search attributes data are indexed, so you can search workflows by querying on these attributes. However, search attributes require the use of Elasticsearch.

Memo and search attributes are available in the Go client in [StartWorkflowOptions](https://godoc.org/go.uber.org/cadence/internal#StartWorkflowOptions).

```go
type StartWorkflowOptions struct {
	// ...

	// Memo - Optional non-indexed info that will be shown in list workflow.
	Memo map[string]interface{}

	// SearchAttributes - Optional indexed info that can be used in query of List/Scan/Count workflow APIs (only
	// supported when Cadence server is using Elasticsearch). The key and value type must be registered on Cadence server side.
	// Use GetSearchAttributes API to get valid key and corresponding value type.
	SearchAttributes map[string]interface{}
}
```

In the Java client, the *WorkflowOptions.Builder* has similar methods for [memo](https://static.javadoc.io/com.uber.cadence/cadence-client/2.6.0/com/uber/cadence/client/WorkflowOptions.Builder.html#setMemo-java.util.Map-) and [search attributes](https://static.javadoc.io/com.uber.cadence/cadence-client/2.6.0/com/uber/cadence/client/WorkflowOptions.Builder.html#setSearchAttributes-java.util.Map-).
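For illustration, here is a minimal Go sketch of starting a workflow with both a memo and search attributes. It assumes an already-constructed `client.Client` named `c` and an already-registered workflow named "MyWorkflow" (both hypothetical); the search attribute keys are from the pre-allowlisted test set described later on this page.

```go
import (
	"context"
	"time"

	"go.uber.org/cadence/client"
)

// startWithAttributes starts a workflow carrying a non-indexed memo and
// indexed search attributes.
func startWithAttributes(ctx context.Context, c client.Client) error {
	opts := client.StartWorkflowOptions{
		TaskList:                     "my-tasklist",
		ExecutionStartToCloseTimeout: time.Hour,
		// Memo: any serializable values; not indexed, so not searchable.
		Memo: map[string]interface{}{
			"notes": "anything goes here, including nested structs",
		},
		// SearchAttributes: keys must be allowlisted, and value types
		// must match the registered types.
		SearchAttributes: map[string]interface{}{
			"CustomKeywordField": "seattle",
			"CustomIntField":     int64(22),
		},
	}
	_, err := c.StartWorkflow(ctx, opts, "MyWorkflow", "input")
	return err
}
```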
Some important distinctions between memo and search attributes:

- Memo can support all data types because it is not indexed. Search attributes only support basic data types (String (aka Text), Keyword, Int, Double, Bool, Datetime) because they are indexed by Elasticsearch.
- Memo does not impose restrictions on key names. Search attributes require that keys are allowlisted before use, because Elasticsearch has a limit on the number of indexed keys.
- Memo doesn't require Cadence clusters to depend on Elasticsearch, while search attributes only work with Elasticsearch.

## Search Attributes (Go Client Usage)

When using the Cadence Go client, provide key-value pairs as SearchAttributes in [StartWorkflowOptions](https://godoc.org/go.uber.org/cadence/internal#StartWorkflowOptions).

SearchAttributes is `map[string]interface{}`, where the keys need to be allowlisted so that Cadence knows the attribute key name and value type. The value provided in the map must be of the same type as registered.

### Allow Listing Search Attributes

Start by querying the list of search attributes using the CLI:

```bash
$ cadence --domain samples-domain cl get-search-attr
+---------------------+------------+
|         KEY         | VALUE TYPE |
+---------------------+------------+
| CloseStatus         | INT        |
| CloseTime           | INT        |
| CustomBoolField     | BOOL       |
| CustomDatetimeField | DATETIME   |
| CustomDomain        | KEYWORD    |
| CustomDoubleField   | DOUBLE     |
| CustomIntField      | INT        |
| CustomKeywordField  | KEYWORD    |
| CustomStringField   | STRING     |
| DomainID            | KEYWORD    |
| ExecutionTime       | INT        |
| HistoryLength       | INT        |
| RunID               | KEYWORD    |
| StartTime           | INT        |
| WorkflowID          | KEYWORD    |
| WorkflowType        | KEYWORD    |
+---------------------+------------+
```
"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" INT "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" CustomKeywordField "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" KEYWORD "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" CustomStringField "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" STRING "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" DomainID "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" KEYWORD "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" ExecutionTime "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" INT "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" HistoryLength "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" INT "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" RunID "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" KEYWORD "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" StartTime "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" INT "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" WorkflowID "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" KEYWORD "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" WorkflowType "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" KEYWORD "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n+---------------------+------------+\n")])])]),e("p",[t._v("Use the admin "),e("Term",{attrs:{term:"CLI"}}),t._v(" to add a new search attribute:")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--domain")]),t._v(" samples-domain adm cl asa "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--search_attr_key")]),t._v(" NewKey "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--search_attr_type")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v("\n")])])]),e("p",[t._v("The numbers for the attribute types map as follows:")]),t._v(" "),e("ul",[e("li",[t._v("0 = String(Text)")]),t._v(" "),e("li",[t._v("1 = Keyword")]),t._v(" "),e("li",[t._v("2 = Int")]),t._v(" "),e("li",[t._v("3 = Double")]),t._v(" "),e("li",[t._v("4 = Bool")]),t._v(" "),e("li",[t._v("5 = DateTime")])]),t._v(" "),e("h4",{attrs:{id:"keyword-vs-string-text"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#keyword-vs-string-text"}},[t._v("#")]),t._v(" Keyword vs String(Text)")]),t._v(" "),e("p",[t._v("Note 1: "),e("strong",[t._v("String")]),t._v(" has been renamed to 
"),e("strong",[t._v("Text")]),t._v(" in "),e("a",{attrs:{href:"https://www.elastic.co/blog/strings-are-dead-long-live-strings",target:"_blank",rel:"noopener noreferrer"}},[t._v("ElasticSearch"),e("OutboundLink")],1),t._v(". Cadence is also "),e("a",{attrs:{href:"https://github.com/uber/cadence/issues/4604",target:"_blank",rel:"noopener noreferrer"}},[t._v("planning"),e("OutboundLink")],1),t._v(" to rename it.")]),t._v(" "),e("p",[t._v("Note 2: "),e("strong",[t._v("Keyword")]),t._v(" and "),e("strong",[t._v("String(Text)")]),t._v(" are concepts taken from Elasticsearch. Each word in a "),e("strong",[t._v("String(Text)")]),t._v(" is considered a searchable keyword. For a UUID, that can be problematic as Elasticsearch will index each portion of the UUID separately. To have the whole string considered as a searchable keyword, use the "),e("strong",[t._v("Keyword")]),t._v(" type.")]),t._v(" "),e("p",[t._v('For example, key RunID with value "2dd29ab7-2dd8-4668-83e0-89cae261cfb1"')]),t._v(" "),e("ul",[e("li",[t._v("as a "),e("strong",[t._v("Keyword")]),t._v(' will only be matched by RunID = "2dd29ab7-2dd8-4668-83e0-89cae261cfb1" (or in the future with '),e("a",{attrs:{href:"https://github.com/uber/cadence/issues/1137",target:"_blank",rel:"noopener noreferrer"}},[t._v("regular expressions"),e("OutboundLink")],1),t._v(")")]),t._v(" "),e("li",[t._v("as a "),e("strong",[t._v("String(Text)")]),t._v(' will be matched by RunID = "2dd8", which may cause unwanted matches')])]),t._v(" "),e("p",[e("strong",[t._v("Note:")]),t._v(" String(Text) type can not be used in Order By "),e("Term",{attrs:{term:"query"}}),t._v(".")],1),t._v(" "),e("p",[t._v("There are some pre-allowlisted search attributes that are handy for testing:")]),t._v(" "),e("ul",[e("li",[t._v("CustomKeywordField")]),t._v(" "),e("li",[t._v("CustomIntField")]),t._v(" "),e("li",[t._v("CustomDoubleField")]),t._v(" "),e("li",[t._v("CustomBoolField")]),t._v(" "),e("li",[t._v("CustomDatetimeField")]),t._v(" "),e("li",[t._v("CustomStringField")])]),t._v(" "),e("p",[t._v("Their types are indicated in their names.")]),t._v(" "),e("h3",{attrs:{id:"value-types"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#value-types"}},[t._v("#")]),t._v(" Value Types")]),t._v(" "),e("p",[t._v("Here are the Search Attribute value types and their correspondent Golang types:")]),t._v(" "),e("ul",[e("li",[t._v("Keyword = string")]),t._v(" "),e("li",[t._v("Int = int64")]),t._v(" "),e("li",[t._v("Double = float64")]),t._v(" "),e("li",[t._v("Bool = bool")]),t._v(" "),e("li",[t._v("Datetime = time.Time")]),t._v(" "),e("li",[t._v("String = string")])]),t._v(" "),e("h3",{attrs:{id:"limit"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#limit"}},[t._v("#")]),t._v(" Limit")]),t._v(" "),e("p",[t._v("We recommend limiting the number of Elasticsearch indexes by enforcing limits on the following:")]),t._v(" "),e("ul",[e("li",[t._v("Number of keys: 100 per "),e("Term",{attrs:{term:"workflow"}})],1),t._v(" "),e("li",[t._v("Size of value: 2kb per value")]),t._v(" "),e("li",[t._v("Total size of key and values: 40kb per "),e("Term",{attrs:{term:"workflow"}})],1)]),t._v(" "),e("p",[t._v("Cadence reserves keys like DomainID, WorkflowID, and RunID. These can only be used in list "),e("Term",{attrs:{term:"query",show:"queries"}}),t._v(". 
Cadence reserves keys like DomainID, WorkflowID, and RunID. These can only be used in list queries; their values are not updatable.

### Upsert Search Attributes in Workflow

[UpsertSearchAttributes](https://godoc.org/go.uber.org/cadence/workflow#UpsertSearchAttributes) is used to add or update search attributes from within the workflow code.

Go samples for search attributes can be found at [github.com/uber-common/cadence-samples](https://github.com/uber-common/cadence-samples/tree/master/cmd/samples/recipes/searchattributes).

UpsertSearchAttributes will merge attributes into the existing map in the workflow. Consider this example workflow code:

```go
func MyWorkflow(ctx workflow.Context, input string) error {

	attr1 := map[string]interface{}{
		"CustomIntField":  1,
		"CustomBoolField": true,
	}
	workflow.UpsertSearchAttributes(ctx, attr1)

	attr2 := map[string]interface{}{
		"CustomIntField":     2,
		"CustomKeywordField": "seattle",
	}
	workflow.UpsertSearchAttributes(ctx, attr2)

	return nil
}
```
punctuation"}},[t._v(")")]),t._v("\n\n attr2 "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("map")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomIntField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomKeywordField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"seattle"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("UpsertSearchAttributes")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" attr2"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("After the second call to UpsertSearchAttributes, the map will contain:")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("map")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomIntField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomBoolField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("true")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomKeywordField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"seattle"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("There is no support for removing a field. 
To achieve a similar effect, set the field to a sentinel value. For example, to remove “CustomKeywordField”, update it to “impossibleVal”. Then searching "),e("code",[t._v("CustomKeywordField != ‘impossibleVal’")]),t._v(" will match "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(' with CustomKeywordField not equal to "impossibleVal", which '),e("strong",[t._v("includes")]),t._v(" "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" without the CustomKeywordField set.")],1),t._v(" "),e("p",[t._v("Use "),e("code",[t._v("workflow.GetInfo")]),t._v(" to get current search attributes.")]),t._v(" "),e("h3",{attrs:{id:"continueasnew-and-cron"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#continueasnew-and-cron"}},[t._v("#")]),t._v(" ContinueAsNew and Cron")]),t._v(" "),e("p",[t._v("When performing a "),e("RouterLink",{attrs:{to:"/docs/go-client/continue-as-new/"}},[t._v("ContinueAsNew")]),t._v(" or using "),e("RouterLink",{attrs:{to:"/docs/go-client/distributed-cron/"}},[t._v("Cron")]),t._v(", search attributes (and memo) will be carried over to the new run by default.")],1),t._v(" "),e("h2",{attrs:{id:"query-capabilities"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#query-capabilities"}},[t._v("#")]),t._v(" Query Capabilities")]),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" by using a SQL-like where clause when "),e("RouterLink",{attrs:{to:"/docs/06-cli/#list-closed-or-open-workflow-executions"}},[t._v("listing workflows from the CLI")]),t._v(" or using the list APIs ("),e("a",{attrs:{href:"https://godoc.org/go.uber.org/cadence/client#Client",target:"_blank",rel:"noopener noreferrer"}},[t._v("Go"),e("OutboundLink")],1),t._v(", "),e("a",{attrs:{href:"https://static.javadoc.io/com.uber.cadence/cadence-client/2.6.0/com/uber/cadence/WorkflowService.Iface.html#ListWorkflowExecutions-com.uber.cadence.ListWorkflowExecutionsRequest-",target:"_blank",rel:"noopener noreferrer"}},[t._v("Java"),e("OutboundLink")],1),t._v(").")],1),t._v(" "),e("p",[t._v("Note that you will only see "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" from one domain when "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(".")],1),t._v(" "),e("h3",{attrs:{id:"supported-operators"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#supported-operators"}},[t._v("#")]),t._v(" Supported Operators")]),t._v(" "),e("ul",[e("li",[t._v("AND, OR, ()")]),t._v(" "),e("li",[t._v("=, !=, >, >=, <, <=")]),t._v(" "),e("li",[t._v("IN")]),t._v(" "),e("li",[t._v("BETWEEN ... 
AND")]),t._v(" "),e("li",[t._v("ORDER BY")])]),t._v(" "),e("h3",{attrs:{id:"default-attributes"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#default-attributes"}},[t._v("#")]),t._v(" Default Attributes")]),t._v(" "),e("p",[t._v("More and more default attributes are added in newer versions.\nPlease get the by using the "),e("Term",{attrs:{term:"CLI"}}),t._v(" get-search-attr command or the GetSearchAttributes API.\nSome names and types are as follows:")],1),t._v(" "),e("table",[e("thead",[e("tr",[e("th",[t._v("KEY")]),t._v(" "),e("th",[t._v("VALUE TYPE")])])]),t._v(" "),e("tbody",[e("tr",[e("td",[t._v("CloseStatus")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("CloseTime")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("CustomBoolField")]),t._v(" "),e("td",[t._v("DOUBLE")])]),t._v(" "),e("tr",[e("td",[t._v("CustomDatetimeField")]),t._v(" "),e("td",[t._v("DATETIME")])]),t._v(" "),e("tr",[e("td",[t._v("CustomDomain")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("CustomDoubleField")]),t._v(" "),e("td",[t._v("BOOL")])]),t._v(" "),e("tr",[e("td",[t._v("CustomIntField")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("CustomKeywordField")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("CustomStringField")]),t._v(" "),e("td",[t._v("STRING")])]),t._v(" "),e("tr",[e("td",[t._v("DomainID")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("ExecutionTime")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("HistoryLength")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("RunID")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("StartTime")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("WorkflowID")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("WorkflowType")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("Tasklist")]),t._v(" "),e("td",[t._v("KEYWORD")])])])]),t._v(" "),e("p",[t._v("There are some special considerations for these attributes:")]),t._v(" "),e("ul",[e("li",[t._v("CloseStatus, CloseTime, DomainID, ExecutionTime, HistoryLength, RunID, StartTime, WorkflowID, WorkflowType are reserved by Cadence and are read-only")]),t._v(" "),e("li",[t._v("Starting from "),e("a",{attrs:{href:"https://github.com/uber/cadence/commit/6e69fa1a6e9ae5d2f683759820f09d1286ba7797",target:"_blank",rel:"noopener noreferrer"}},[t._v("v0.18.0"),e("OutboundLink")],1),t._v(", Cadence automatically maps(case insensitive) string to CloseStatus so that you don't need to use integer in the query, to make it easier to use.\n"),e("ul",[e("li",[t._v('0 = "completed"')]),t._v(" "),e("li",[t._v('1 = "failed"')]),t._v(" "),e("li",[t._v('2 = "canceled"')]),t._v(" "),e("li",[t._v('3 = "terminated"')]),t._v(" "),e("li",[t._v('4 = "continued_as_new"')]),t._v(" "),e("li",[t._v('5 = "timed_out"')])])]),t._v(" "),e("li",[t._v("StartTime, CloseTime and ExecutionTime are stored as INT, but support "),e("Term",{attrs:{term:"query",show:"queries"}}),t._v(" using both EpochTime in nanoseconds, and string in RFC3339 format (ex. 
"),e("code",[t._v('"2006-01-02T15:04:05+07:00"')]),t._v(")")],1),t._v(" "),e("li",[t._v("CloseTime, CloseStatus, HistoryLength are only present in closed "),e("Term",{attrs:{term:"workflow"}})],1),t._v(" "),e("li",[t._v("ExecutionTime is for Retry/Cron user to "),e("Term",{attrs:{term:"query"}}),t._v(" a "),e("Term",{attrs:{term:"workflow"}}),t._v(" that will run in the future")],1),t._v(" "),e("li",[t._v("To list only open "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(", add "),e("code",[t._v("CloseTime = missing")]),t._v(" to the end of the "),e("Term",{attrs:{term:"query"}}),t._v(".")],1)]),t._v(" "),e("p",[t._v("If you use retry or the cron feature to "),e("Term",{attrs:{term:"query"}}),t._v(" "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" that will start execution in a certain time range, you can add predicates on ExecutionTime. For example: "),e("code",[t._v("ExecutionTime > 2019-01-01T10:00:00-07:00")]),t._v(". Note that if predicates on ExecutionTime are included, only cron or a "),e("Term",{attrs:{term:"workflow"}}),t._v(" that needs to retry will be returned.")],1),t._v(" "),e("h3",{attrs:{id:"general-notes-about-queries"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#general-notes-about-queries"}},[t._v("#")]),t._v(" General Notes About Queries")]),t._v(" "),e("ul",[e("li",[t._v("Pagesize default is 1000, and cannot be larger than 10k")]),t._v(" "),e("li",[t._v("Range "),e("Term",{attrs:{term:"query"}}),t._v(" on Cadence timestamp (StartTime, CloseTime, ExecutionTime) cannot be larger than 9223372036854775807 (maxInt64 - 1001)")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" by time range will have 1ms resolution")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" column names are case sensitive")],1),t._v(" "),e("li",[t._v("ListWorkflow may take longer when retrieving a large number of "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" (10M+)")],1),t._v(" "),e("li",[t._v("To retrieve a large number of "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" without caring about order, use the ScanWorkflow API")],1),t._v(" "),e("li",[t._v("To efficiently count the number of "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(", use the CountWorkflow API")],1)]),t._v(" "),e("h2",{attrs:{id:"tools-support"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#tools-support"}},[t._v("#")]),t._v(" Tools Support")]),t._v(" "),e("h3",{attrs:{id:"cli"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#cli"}},[t._v("#")]),t._v(" CLI")]),t._v(" "),e("p",[t._v("Support for search attributes is available as of version 0.6.0 of the Cadence server. 
You can also use the "),e("Term",{attrs:{term:"CLI"}}),t._v(" from the latest "),e("a",{attrs:{href:"https://hub.docker.com/r/ubercadence/cli",target:"_blank",rel:"noopener noreferrer"}},[t._v("CLI Docker image"),e("OutboundLink")],1),t._v(" (supported on 0.6.4 or later).")],1),t._v(" "),e("h4",{attrs:{id:"start-workflow-with-search-attributes"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#start-workflow-with-search-attributes"}},[t._v("#")]),t._v(" Start Workflow with Search Attributes")]),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain workflow start "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--tl")]),t._v(" helloWorldGroup "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--wt")]),t._v(" main.Workflow "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--et")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("60")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--dt")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("10")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-i")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'\"vancexu\"'")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-search_attr_key")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'CustomIntField | CustomKeywordField | CustomStringField | CustomBoolField | CustomDatetimeField'")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-search_attr_value")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'5 | keyword1 | vancexu test | true | 2019-06-07T16:16:36-08:00'")]),t._v("\n")])])]),e("h4",{attrs:{id:"search-workflows-with-list-api-command"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#search-workflows-with-list-api-command"}},[t._v("#")]),t._v(" Search Workflows with List API/Command")]),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf list "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\'')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-psa")]),t._v("\n")])])]),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf list "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'CustomKeywordField in ("keyword2", "keyword1") and CustomIntField >= 5 and CloseTime between "2018-06-07T16:16:36-08:00" and "2019-06-07T16:46:34-08:00" order by CustomDatetimeField desc\'')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-psa")]),t._v("\n")])])]),e("p",[t._v("To list only open "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(", add "),e("code",[t._v("CloseTime = missing")]),t._v(" to 
the end of the "),e("Term",{attrs:{term:"query"}}),t._v(".")],1),t._v(" "),e("p",[t._v("Note that "),e("Term",{attrs:{term:"query",show:"queries"}}),t._v(" can support more than one type of filter:")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf list "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'WorkflowType = "main.Workflow" and (WorkflowID = "1645a588-4772-4dab-b276-5f9db108b3a8" or RunID = "be66519b-5f09-40cd-b2e8-20e4106244dc")\'')]),t._v("\n")])])]),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf list "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'WorkflowType = "main.Workflow" StartTime > "2019-06-07T16:46:34-08:00" and CloseTime = missing\'')]),t._v("\n")])])]),e("p",[t._v("All above command can be done with ListWorkflowExecutions API.")]),t._v(" "),e("h4",{attrs:{id:"count-workflows-with-count-api-command"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#count-workflows-with-count-api-command"}},[t._v("#")]),t._v(" Count Workflows with Count API/Command")]),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf count "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\'')]),t._v("\n")])])]),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf count "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'CloseStatus=\"failed\"'")]),t._v("\n")])])]),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf count "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'CloseStatus!=\"completed\"'")]),t._v("\n")])])]),e("p",[t._v("All above command can be done with CountWorkflowExecutions API.")]),t._v(" "),e("h3",{attrs:{id:"web-ui-support"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#web-ui-support"}},[t._v("#")]),t._v(" Web UI Support")]),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Queries"}}),t._v(" are supported in "),e("a",{attrs:{href:"https://github.com/uber/cadence-web",target:"_blank",rel:"noopener noreferrer"}},[t._v("Cadence Web"),e("OutboundLink")],1),t._v(' as of release 3.4.0. 
Use the "Basic/Advanced" button to switch to "Advanced" mode and type the '),e("Term",{attrs:{term:"query"}}),t._v(" in the search box.")],1),t._v(" "),e("h3",{attrs:{id:"tls-support-for-connecting-to-elasticsearch"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#tls-support-for-connecting-to-elasticsearch"}},[t._v("#")]),t._v(" TLS Support for connecting to Elasticsearch")]),t._v(" "),e("p",[t._v("If your elasticsearch deployment requires TLS to connect to it, you can add the following to your config template.\nThe TLS config is optional and when not provided it defaults to tls.enabled to "),e("strong",[t._v("false")])]),t._v(" "),e("div",{staticClass:"language-yaml extra-class"},[e("pre",{pre:!0,attrs:{class:"language-yaml"}},[e("code",[e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("elasticsearch")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("url")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("scheme")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"https"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("host")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"127.0.0.1:9200"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("indices")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("visibility")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" cadence"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("-")]),t._v("visibility"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("-")]),t._v("dev\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("tls")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("enabled")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean important"}},[t._v("true")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("caFile")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" /secrets/cadence/elasticsearch_cert.pem\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("enableHostVerification")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean important"}},[t._v("true")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("serverName")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" myServerName\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("certFile")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" /secrets/cadence/certfile.crt\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("keyFile")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" /secrets/cadence/keyfile.key\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("sslmode")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean 
important"}},[t._v("false")]),t._v("\n")])])]),e("h2",{attrs:{id:"running-locally"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#running-locally"}},[t._v("#")]),t._v(" Running Locally")]),t._v(" "),e("ol",[e("li",[t._v("Increase Docker memory to higher than 6GB. Navigate to Docker -> Preferences -> Advanced -> Memory")]),t._v(" "),e("li",[t._v("Get the Cadence Docker compose file. Run "),e("code",[t._v("curl -O https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose-es.yml")])]),t._v(" "),e("li",[t._v("Start Cadence Docker (which contains Apache Kafka, Apache Zookeeper, and Elasticsearch) using "),e("code",[t._v("docker-compose -f docker-compose-es.yml up")])]),t._v(" "),e("li",[t._v("From the Docker output log, make sure Elasticsearch and Cadence started correctly. If you encounter an insufficient disk space error, try "),e("code",[t._v("docker system prune -a --volumes")])]),t._v(" "),e("li",[t._v("Register a local domain and start using it. "),e("code",[t._v("cadence --do samples-domain d re")])]),t._v(" "),e("li",[t._v("Add the key to ElasticSearch And also allowlist search attributes. "),e("code",[t._v("cadence --do domain adm cl asa --search_attr_key NewKey --search_attr_type 1")])])]),t._v(" "),e("h2",{attrs:{id:"running-in-production"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#running-in-production"}},[t._v("#")]),t._v(" Running in Production")]),t._v(" "),e("p",[t._v("To enable this feature in a Cadence cluster:")]),t._v(" "),e("ul",[e("li",[t._v("Register index schema on ElasticSearch. Run two CURL commands following this "),e("a",{attrs:{href:"https://github.com/uber/cadence/blob/a05ce6b0328b89aa516ae09d5ff601e35df2cc4f/docker/start.sh#L59",target:"_blank",rel:"noopener noreferrer"}},[t._v("script"),e("OutboundLink")],1),t._v(".\n"),e("ul",[e("li",[t._v("Create a index template by using the schema , choose v6/v7 based on your ElasticSearch version")]),t._v(" "),e("li",[t._v("Create an index follow the index template, remember the name")])])]),t._v(" "),e("li",[t._v("Register topic on Kafka, and remember the name\n"),e("ul",[e("li",[t._v("Set up the right number of partitions based on your expected throughput(can be scaled up later)")])])]),t._v(" "),e("li",[e("a",{attrs:{href:"https://github.com/uber/cadence/blob/master/docs/visibility-on-elasticsearch.md#configuration",target:"_blank",rel:"noopener noreferrer"}},[t._v("Configure Cadence for ElasticSearch + Kafka like this documentation"),e("OutboundLink")],1),t._v("\nBased on the full "),e("RouterLink",{attrs:{to:"/docs/operation-guide/setup/#static-configuration"}},[t._v("static config")]),t._v(", you may add some other fields like AuthN.\nSimilarly for Kafka.")],1)]),t._v(" "),e("p",[t._v("To add new search attributes:")]),t._v(" "),e("ol",[e("li",[t._v("Add the key to ElasticSearch "),e("code",[t._v("cadence --do domain adm cl asa --search_attr_key NewKey --search_attr_type 1")])]),t._v(" "),e("li",[t._v("Update the "),e("a",{attrs:{href:"https://cadenceworkflow.io/docs/operation-guide/setup/#dynamic-configuration-overview",target:"_blank",rel:"noopener noreferrer"}},[t._v("dynamic configuration"),e("OutboundLink")],1),t._v(" to allowlist the new attribute")])]),t._v(" "),e("p",[t._v("Note: starting a "),e("Term",{attrs:{term:"workflow"}}),t._v(" with search attributes but without advanced visibility feature will succeed as normal, but will not be searchable and will not be shown in list results.")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file 
# Searching Workflows (Advanced Visibility)

## Introduction

Cadence supports creating workflows with customized key-value pairs, updating the information within the workflow code, and then listing/searching workflows with a SQL-like query. For example, you can create workflows with keys `city` and `age`, then search all workflows with `city = seattle and age > 22`.

Also note that normal workflow properties like start time and workflow type can be queried as well. For example, the following query could be specified when [listing workflows from the CLI](/docs/06-cli/#list-closed-or-open-workflow-executions) or using the list APIs ([Go](https://godoc.org/go.uber.org/cadence/client#Client), [Java](https://static.javadoc.io/com.uber.cadence/cadence-client/2.6.0/com/uber/cadence/WorkflowService.Iface.html#ListWorkflowExecutions-com.uber.cadence.ListWorkflowExecutionsRequest-)):

```sql
WorkflowType = "main.Workflow" AND CloseStatus != "completed" AND (StartTime >
   "2019-06-07T16:46:34-08:00" OR CloseTime > "2019-06-07T16:46:34-08:00")
   ORDER BY StartTime DESC
```

Elsewhere this is also called `advanced visibility`, while `basic visibility` refers to basic listing without the ability to search.
StartTime "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("DESC")]),t._v(" \n")])])]),e("p",[t._v("In other places, this is also called as "),e("code",[t._v("advanced visibility")]),t._v(". While "),e("code",[t._v("basic visibility")]),t._v(" is referred to basic listing without being able to search.")]),t._v(" "),e("h2",{attrs:{id:"memo-vs-search-attributes"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#memo-vs-search-attributes"}},[t._v("#")]),t._v(" Memo vs Search Attributes")]),t._v(" "),e("p",[t._v("Cadence offers two methods for creating "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" with key-value pairs: memo and search attributes. Memo can only be provided on "),e("Term",{attrs:{term:"workflow"}}),t._v(" start. Also, memo data are not indexed, and are therefore not searchable. Memo data are visible when listing "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" using the list APIs. Search attributes data are indexed so you can search "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" by "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" on these attributes. However, search attributes require the use of Elasticsearch.")],1),t._v(" "),e("p",[t._v("Memo and search attributes are available in the Go client in "),e("a",{attrs:{href:"https://godoc.org/go.uber.org/cadence/internal#StartWorkflowOptions",target:"_blank",rel:"noopener noreferrer"}},[t._v("StartWorkflowOptions"),e("OutboundLink")],1),t._v(".")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("type")]),t._v(" StartWorkflowOptions "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("struct")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ...")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Memo - Optional non-indexed info that will be shown in list workflow.")]),t._v("\n Memo "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("map")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// SearchAttributes - Optional indexed info that can be used in query of List/Scan/Count workflow APIs (only")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// supported when Cadence server is using Elasticsearch). 
## Search Attributes (Go Client Usage)

When using the Cadence Go client, provide key-value pairs as SearchAttributes in [StartWorkflowOptions](https://godoc.org/go.uber.org/cadence/internal#StartWorkflowOptions).

SearchAttributes is `map[string]interface{}`, where the keys need to be allowlisted so that Cadence knows the attribute key name and value type. The value provided in the map must be the same type as registered.
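To make this concrete, here is a minimal sketch of starting a workflow with both a memo and search attributes from the Go client. The workflow ID, task list, and workflow name are illustrative assumptions, not part of the official samples:

```go
package main

import (
	"context"
	"time"

	"go.uber.org/cadence/client"
)

// startWithSearchAttributes starts a hypothetical "main.Workflow" with a
// non-indexed memo plus indexed search attributes, using the pre-allowlisted
// Custom*Field keys described below.
func startWithSearchAttributes(c client.Client) error {
	opts := client.StartWorkflowOptions{
		ID:                           "search-attr-example", // assumed workflow ID
		TaskList:                     "helloWorldGroup",     // assumed task list
		ExecutionStartToCloseTimeout: time.Minute,
		Memo: map[string]interface{}{
			"note": "visible in list results, but not searchable",
		},
		SearchAttributes: map[string]interface{}{
			"CustomIntField":     5,          // registered type: Int
			"CustomKeywordField": "keyword1", // registered type: Keyword
		},
	}
	_, err := c.StartWorkflow(context.Background(), opts, "main.Workflow", "input")
	return err
}
```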
### Allow Listing Search Attributes

Start by querying the list of search attributes using the CLI:

```bash
$ cadence --domain samples-domain cl get-search-attr
+---------------------+------------+
|         KEY         | VALUE TYPE |
+---------------------+------------+
| CloseStatus         | INT        |
| CloseTime           | INT        |
| CustomBoolField     | DOUBLE     |
| CustomDatetimeField | DATETIME   |
| CustomDomain        | KEYWORD    |
| CustomDoubleField   | BOOL       |
| CustomIntField      | INT        |
| CustomKeywordField  | KEYWORD    |
| CustomStringField   | STRING     |
| DomainID            | KEYWORD    |
| ExecutionTime       | INT        |
| HistoryLength       | INT        |
| RunID               | KEYWORD    |
| StartTime           | INT        |
| WorkflowID          | KEYWORD    |
| WorkflowType        | KEYWORD    |
+---------------------+------------+
```
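The same registry can also be read programmatically. A minimal sketch, assuming your Go client version exposes the `GetSearchAttributes` API (the printing is illustrative):

```go
package main

import (
	"context"
	"fmt"

	"go.uber.org/cadence/client"
)

// printSearchAttributes mirrors `cadence cl get-search-attr`, listing every
// registered search attribute key together with its value type.
func printSearchAttributes(ctx context.Context, c client.Client) error {
	resp, err := c.GetSearchAttributes(ctx)
	if err != nil {
		return err
	}
	for key, valueType := range resp.Keys {
		fmt.Printf("%-20s %v\n", key, valueType)
	}
	return nil
}
```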
operator"}},[t._v("|")]),t._v(" INT "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" HistoryLength "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" INT "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" RunID "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" KEYWORD "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" StartTime "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" INT "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" WorkflowID "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" KEYWORD "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" WorkflowType "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v(" KEYWORD "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("|")]),t._v("\n+---------------------+------------+\n")])])]),e("p",[t._v("Use the admin "),e("Term",{attrs:{term:"CLI"}}),t._v(" to add a new search attribute:")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--domain")]),t._v(" samples-domain adm cl asa "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--search_attr_key")]),t._v(" NewKey "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--search_attr_type")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v("\n")])])]),e("p",[t._v("The numbers for the attribute types map as follows:")]),t._v(" "),e("ul",[e("li",[t._v("0 = String(Text)")]),t._v(" "),e("li",[t._v("1 = Keyword")]),t._v(" "),e("li",[t._v("2 = Int")]),t._v(" "),e("li",[t._v("3 = Double")]),t._v(" "),e("li",[t._v("4 = Bool")]),t._v(" "),e("li",[t._v("5 = DateTime")])]),t._v(" "),e("h4",{attrs:{id:"keyword-vs-string-text"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#keyword-vs-string-text"}},[t._v("#")]),t._v(" Keyword vs String(Text)")]),t._v(" "),e("p",[t._v("Note 1: "),e("strong",[t._v("String")]),t._v(" has been renamed to "),e("strong",[t._v("Text")]),t._v(" in "),e("a",{attrs:{href:"https://www.elastic.co/blog/strings-are-dead-long-live-strings",target:"_blank",rel:"noopener noreferrer"}},[t._v("ElasticSearch"),e("OutboundLink")],1),t._v(". Cadence is also "),e("a",{attrs:{href:"https://github.com/uber/cadence/issues/4604",target:"_blank",rel:"noopener noreferrer"}},[t._v("planning"),e("OutboundLink")],1),t._v(" to rename it.")]),t._v(" "),e("p",[t._v("Note 2: "),e("strong",[t._v("Keyword")]),t._v(" and "),e("strong",[t._v("String(Text)")]),t._v(" are concepts taken from Elasticsearch. Each word in a "),e("strong",[t._v("String(Text)")]),t._v(" is considered a searchable keyword. For a UUID, that can be problematic as Elasticsearch will index each portion of the UUID separately. 
### Value Types

Here are the search attribute value types and their corresponding Go types:

- Keyword = string
- Int = int64
- Double = float64
- Bool = bool
- Datetime = time.Time
- String = string

### Limit

We recommend limiting the number of Elasticsearch indexes by enforcing limits on the following:

- Number of keys: 100 per workflow
- Size of each value: 2kb
- Total size of keys and values: 40kb per workflow

Cadence reserves keys like DomainID, WorkflowID, and RunID. These can only be used in list queries. Their values are not updatable.
### Upsert Search Attributes in Workflow

[UpsertSearchAttributes](https://godoc.org/go.uber.org/cadence/workflow#UpsertSearchAttributes) is used to add or update search attributes from within the workflow code.

Go samples for search attributes can be found at [github.com/uber-common/cadence-samples](https://github.com/uber-common/cadence-samples/tree/master/cmd/samples/recipes/searchattributes).

UpsertSearchAttributes will merge attributes into the existing map in the workflow. Consider this example workflow code:

```go
func MyWorkflow(ctx workflow.Context, input string) error {

	attr1 := map[string]interface{}{
		"CustomIntField":  1,
		"CustomBoolField": true,
	}
	workflow.UpsertSearchAttributes(ctx, attr1)

	attr2 := map[string]interface{}{
		"CustomIntField":     2,
		"CustomKeywordField": "seattle",
	}
	workflow.UpsertSearchAttributes(ctx, attr2)

	return nil
}
```
punctuation"}},[t._v(")")]),t._v("\n\n attr2 "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("map")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomIntField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomKeywordField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"seattle"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("UpsertSearchAttributes")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" attr2"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("After the second call to UpsertSearchAttributes, the map will contain:")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("map")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomIntField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomBoolField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("true")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"CustomKeywordField"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"seattle"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("There is no support for removing a field. 
### ContinueAsNew and Cron

When performing a [ContinueAsNew](/docs/go-client/continue-as-new/) or using [Cron](/docs/go-client/distributed-cron/), search attributes (and memo) will be carried over to the new run by default.

## Query Capabilities

Query workflows by using a SQL-like where clause when [listing workflows from the CLI](/docs/06-cli/#list-closed-or-open-workflow-executions) or using the list APIs ([Go](https://godoc.org/go.uber.org/cadence/client#Client), [Java](https://static.javadoc.io/com.uber.cadence/cadence-client/2.6.0/com/uber/cadence/WorkflowService.Iface.html#ListWorkflowExecutions-com.uber.cadence.ListWorkflowExecutionsRequest-)).

Note that you will only see workflows from one domain when querying.

### Supported Operators

- AND, OR, ()
- =, !=, >, >=, <, <=
- IN
- BETWEEN ... AND
- ORDER BY
AND")]),t._v(" "),e("li",[t._v("ORDER BY")])]),t._v(" "),e("h3",{attrs:{id:"default-attributes"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#default-attributes"}},[t._v("#")]),t._v(" Default Attributes")]),t._v(" "),e("p",[t._v("More and more default attributes are added in newer versions.\nPlease get the by using the "),e("Term",{attrs:{term:"CLI"}}),t._v(" get-search-attr command or the GetSearchAttributes API.\nSome names and types are as follows:")],1),t._v(" "),e("table",[e("thead",[e("tr",[e("th",[t._v("KEY")]),t._v(" "),e("th",[t._v("VALUE TYPE")])])]),t._v(" "),e("tbody",[e("tr",[e("td",[t._v("CloseStatus")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("CloseTime")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("CustomBoolField")]),t._v(" "),e("td",[t._v("DOUBLE")])]),t._v(" "),e("tr",[e("td",[t._v("CustomDatetimeField")]),t._v(" "),e("td",[t._v("DATETIME")])]),t._v(" "),e("tr",[e("td",[t._v("CustomDomain")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("CustomDoubleField")]),t._v(" "),e("td",[t._v("BOOL")])]),t._v(" "),e("tr",[e("td",[t._v("CustomIntField")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("CustomKeywordField")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("CustomStringField")]),t._v(" "),e("td",[t._v("STRING")])]),t._v(" "),e("tr",[e("td",[t._v("DomainID")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("ExecutionTime")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("HistoryLength")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("RunID")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("StartTime")]),t._v(" "),e("td",[t._v("INT")])]),t._v(" "),e("tr",[e("td",[t._v("WorkflowID")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("WorkflowType")]),t._v(" "),e("td",[t._v("KEYWORD")])]),t._v(" "),e("tr",[e("td",[t._v("Tasklist")]),t._v(" "),e("td",[t._v("KEYWORD")])])])]),t._v(" "),e("p",[t._v("There are some special considerations for these attributes:")]),t._v(" "),e("ul",[e("li",[t._v("CloseStatus, CloseTime, DomainID, ExecutionTime, HistoryLength, RunID, StartTime, WorkflowID, WorkflowType are reserved by Cadence and are read-only")]),t._v(" "),e("li",[t._v("Starting from "),e("a",{attrs:{href:"https://github.com/uber/cadence/commit/6e69fa1a6e9ae5d2f683759820f09d1286ba7797",target:"_blank",rel:"noopener noreferrer"}},[t._v("v0.18.0"),e("OutboundLink")],1),t._v(", Cadence automatically maps(case insensitive) string to CloseStatus so that you don't need to use integer in the query, to make it easier to use.\n"),e("ul",[e("li",[t._v('0 = "completed"')]),t._v(" "),e("li",[t._v('1 = "failed"')]),t._v(" "),e("li",[t._v('2 = "canceled"')]),t._v(" "),e("li",[t._v('3 = "terminated"')]),t._v(" "),e("li",[t._v('4 = "continued_as_new"')]),t._v(" "),e("li",[t._v('5 = "timed_out"')])])]),t._v(" "),e("li",[t._v("StartTime, CloseTime and ExecutionTime are stored as INT, but support "),e("Term",{attrs:{term:"query",show:"queries"}}),t._v(" using both EpochTime in nanoseconds, and string in RFC3339 format (ex. 
"),e("code",[t._v('"2006-01-02T15:04:05+07:00"')]),t._v(")")],1),t._v(" "),e("li",[t._v("CloseTime, CloseStatus, HistoryLength are only present in closed "),e("Term",{attrs:{term:"workflow"}})],1),t._v(" "),e("li",[t._v("ExecutionTime is for Retry/Cron user to "),e("Term",{attrs:{term:"query"}}),t._v(" a "),e("Term",{attrs:{term:"workflow"}}),t._v(" that will run in the future")],1),t._v(" "),e("li",[t._v("To list only open "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(", add "),e("code",[t._v("CloseTime = missing")]),t._v(" to the end of the "),e("Term",{attrs:{term:"query"}}),t._v(".")],1)]),t._v(" "),e("p",[t._v("If you use retry or the cron feature to "),e("Term",{attrs:{term:"query"}}),t._v(" "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" that will start execution in a certain time range, you can add predicates on ExecutionTime. For example: "),e("code",[t._v("ExecutionTime > 2019-01-01T10:00:00-07:00")]),t._v(". Note that if predicates on ExecutionTime are included, only cron or a "),e("Term",{attrs:{term:"workflow"}}),t._v(" that needs to retry will be returned.")],1),t._v(" "),e("h3",{attrs:{id:"general-notes-about-queries"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#general-notes-about-queries"}},[t._v("#")]),t._v(" General Notes About Queries")]),t._v(" "),e("ul",[e("li",[t._v("Pagesize default is 1000, and cannot be larger than 10k")]),t._v(" "),e("li",[t._v("Range "),e("Term",{attrs:{term:"query"}}),t._v(" on Cadence timestamp (StartTime, CloseTime, ExecutionTime) cannot be larger than 9223372036854775807 (maxInt64 - 1001)")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" by time range will have 1ms resolution")],1),t._v(" "),e("li",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" column names are case sensitive")],1),t._v(" "),e("li",[t._v("ListWorkflow may take longer when retrieving a large number of "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" (10M+)")],1),t._v(" "),e("li",[t._v("To retrieve a large number of "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" without caring about order, use the ScanWorkflow API")],1),t._v(" "),e("li",[t._v("To efficiently count the number of "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(", use the CountWorkflow API")],1)]),t._v(" "),e("h2",{attrs:{id:"tools-support"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#tools-support"}},[t._v("#")]),t._v(" Tools Support")]),t._v(" "),e("h3",{attrs:{id:"cli"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#cli"}},[t._v("#")]),t._v(" CLI")]),t._v(" "),e("p",[t._v("Support for search attributes is available as of version 0.6.0 of the Cadence server. 
You can also use the "),e("Term",{attrs:{term:"CLI"}}),t._v(" from the latest "),e("a",{attrs:{href:"https://hub.docker.com/r/ubercadence/cli",target:"_blank",rel:"noopener noreferrer"}},[t._v("CLI Docker image"),e("OutboundLink")],1),t._v(" (supported on 0.6.4 or later).")],1),t._v(" "),e("h4",{attrs:{id:"start-workflow-with-search-attributes"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#start-workflow-with-search-attributes"}},[t._v("#")]),t._v(" Start Workflow with Search Attributes")]),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain workflow start "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--tl")]),t._v(" helloWorldGroup "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--wt")]),t._v(" main.Workflow "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--et")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("60")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--dt")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("10")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-i")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'\"vancexu\"'")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-search_attr_key")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'CustomIntField | CustomKeywordField | CustomStringField | CustomBoolField | CustomDatetimeField'")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-search_attr_value")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'5 | keyword1 | vancexu test | true | 2019-06-07T16:16:36-08:00'")]),t._v("\n")])])]),e("h4",{attrs:{id:"search-workflows-with-list-api-command"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#search-workflows-with-list-api-command"}},[t._v("#")]),t._v(" Search Workflows with List API/Command")]),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf list "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\'')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-psa")]),t._v("\n")])])]),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf list "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'CustomKeywordField in ("keyword2", "keyword1") and CustomIntField >= 5 and CloseTime between "2018-06-07T16:16:36-08:00" and "2019-06-07T16:46:34-08:00" order by CustomDatetimeField desc\'')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-psa")]),t._v("\n")])])]),e("p",[t._v("To list only open "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(", add "),e("code",[t._v("CloseTime = missing")]),t._v(" to 
the end of the "),e("Term",{attrs:{term:"query"}}),t._v(".")],1),t._v(" "),e("p",[t._v("Note that "),e("Term",{attrs:{term:"query",show:"queries"}}),t._v(" can combine more than one type of filter:")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf list "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'WorkflowType = "main.Workflow" and (WorkflowID = "1645a588-4772-4dab-b276-5f9db108b3a8" or RunID = "be66519b-5f09-40cd-b2e8-20e4106244dc")\'')]),t._v("\n")])])]),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf list "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'WorkflowType = "main.Workflow" and StartTime > "2019-06-07T16:46:34-08:00" and CloseTime = missing\'')]),t._v("\n")])])]),e("p",[t._v("All of the above commands can also be performed with the ListWorkflowExecutions API.")]),t._v(" "),e("h4",{attrs:{id:"count-workflows-with-count-api-command"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#count-workflows-with-count-api-command"}},[t._v("#")]),t._v(" Count Workflows with Count API/Command")]),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf count "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('\'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\'')]),t._v("\n")])])]),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf count "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'CloseStatus=\"failed\"'")]),t._v("\n")])])]),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" samples-domain wf count "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-q")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v("'CloseStatus!=\"completed\"'")]),t._v("\n")])])]),e("p",[t._v("All of the above commands can also be performed with the CountWorkflowExecutions API.")]),t._v(" "),e("h3",{attrs:{id:"web-ui-support"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#web-ui-support"}},[t._v("#")]),t._v(" Web UI Support")]),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Queries"}}),t._v(" are supported in "),e("a",{attrs:{href:"https://github.com/uber/cadence-web",target:"_blank",rel:"noopener noreferrer"}},[t._v("Cadence Web"),e("OutboundLink")],1),t._v(' as of release 3.4.0. 
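The same query strings accepted by the CLI's -q flag can be used in the Web UI; for example, the where-clause portion of a sketch like the following can be pasted into the Advanced search box:

```bash
# Sketch: any query string usable with -q also works in the Web UI Advanced search box.
cadence --do samples-domain wf list -q 'CloseStatus = "failed" and StartTime > "2019-06-07T16:46:34-08:00"'
```
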
Use the "Basic/Advanced" button to switch to "Advanced" mode and type the '),e("Term",{attrs:{term:"query"}}),t._v(" in the search box.")],1),t._v(" "),e("h3",{attrs:{id:"tls-support-for-connecting-to-elasticsearch"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#tls-support-for-connecting-to-elasticsearch"}},[t._v("#")]),t._v(" TLS Support for connecting to Elasticsearch")]),t._v(" "),e("p",[t._v("If your elasticsearch deployment requires TLS to connect to it, you can add the following to your config template.\nThe TLS config is optional and when not provided it defaults to tls.enabled to "),e("strong",[t._v("false")])]),t._v(" "),e("div",{staticClass:"language-yaml extra-class"},[e("pre",{pre:!0,attrs:{class:"language-yaml"}},[e("code",[e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("elasticsearch")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("url")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("scheme")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"https"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("host")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"127.0.0.1:9200"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("indices")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("visibility")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" cadence"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("-")]),t._v("visibility"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("-")]),t._v("dev\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("tls")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("enabled")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean important"}},[t._v("true")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("caFile")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" /secrets/cadence/elasticsearch_cert.pem\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("enableHostVerification")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean important"}},[t._v("true")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("serverName")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" myServerName\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("certFile")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" /secrets/cadence/certfile.crt\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("keyFile")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" /secrets/cadence/keyfile.key\n "),e("span",{pre:!0,attrs:{class:"token key atrule"}},[t._v("sslmode")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean 
important"}},[t._v("false")]),t._v("\n")])])]),e("h2",{attrs:{id:"running-locally"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#running-locally"}},[t._v("#")]),t._v(" Running Locally")]),t._v(" "),e("ol",[e("li",[t._v("Increase Docker memory to higher than 6GB. Navigate to Docker -> Preferences -> Advanced -> Memory")]),t._v(" "),e("li",[t._v("Get the Cadence Docker compose file. Run "),e("code",[t._v("curl -O https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose-es.yml")])]),t._v(" "),e("li",[t._v("Start Cadence Docker (which contains Apache Kafka, Apache Zookeeper, and Elasticsearch) using "),e("code",[t._v("docker-compose -f docker-compose-es.yml up")])]),t._v(" "),e("li",[t._v("From the Docker output log, make sure Elasticsearch and Cadence started correctly. If you encounter an insufficient disk space error, try "),e("code",[t._v("docker system prune -a --volumes")])]),t._v(" "),e("li",[t._v("Register a local domain and start using it. "),e("code",[t._v("cadence --do samples-domain d re")])]),t._v(" "),e("li",[t._v("Add the key to ElasticSearch And also allowlist search attributes. "),e("code",[t._v("cadence --do domain adm cl asa --search_attr_key NewKey --search_attr_type 1")])])]),t._v(" "),e("h2",{attrs:{id:"running-in-production"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#running-in-production"}},[t._v("#")]),t._v(" Running in Production")]),t._v(" "),e("p",[t._v("To enable this feature in a Cadence cluster:")]),t._v(" "),e("ul",[e("li",[t._v("Register index schema on ElasticSearch. Run two CURL commands following this "),e("a",{attrs:{href:"https://github.com/uber/cadence/blob/a05ce6b0328b89aa516ae09d5ff601e35df2cc4f/docker/start.sh#L59",target:"_blank",rel:"noopener noreferrer"}},[t._v("script"),e("OutboundLink")],1),t._v(".\n"),e("ul",[e("li",[t._v("Create a index template by using the schema , choose v6/v7 based on your ElasticSearch version")]),t._v(" "),e("li",[t._v("Create an index follow the index template, remember the name")])])]),t._v(" "),e("li",[t._v("Register topic on Kafka, and remember the name\n"),e("ul",[e("li",[t._v("Set up the right number of partitions based on your expected throughput(can be scaled up later)")])])]),t._v(" "),e("li",[e("a",{attrs:{href:"https://github.com/uber/cadence/blob/master/docs/visibility-on-elasticsearch.md#configuration",target:"_blank",rel:"noopener noreferrer"}},[t._v("Configure Cadence for ElasticSearch + Kafka like this documentation"),e("OutboundLink")],1),t._v("\nBased on the full "),e("RouterLink",{attrs:{to:"/docs/operation-guide/setup/#static-configuration"}},[t._v("static config")]),t._v(", you may add some other fields like AuthN.\nSimilarly for Kafka.")],1)]),t._v(" "),e("p",[t._v("To add new search attributes:")]),t._v(" "),e("ol",[e("li",[t._v("Add the key to ElasticSearch "),e("code",[t._v("cadence --do domain adm cl asa --search_attr_key NewKey --search_attr_type 1")])]),t._v(" "),e("li",[t._v("Update the "),e("a",{attrs:{href:"https://cadenceworkflow.io/docs/operation-guide/setup/#dynamic-configuration-overview",target:"_blank",rel:"noopener noreferrer"}},[t._v("dynamic configuration"),e("OutboundLink")],1),t._v(" to allowlist the new attribute")])]),t._v(" "),e("p",[t._v("Note: starting a "),e("Term",{attrs:{term:"workflow"}}),t._v(" with search attributes but without advanced visibility feature will succeed as normal, but will not be searchable and will not be shown in list results.")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file diff --git 
a/assets/js/54.10818dcd.js b/assets/js/54.2cf53462.js similarity index 95% rename from assets/js/54.10818dcd.js rename to assets/js/54.2cf53462.js index d7ffbdcb4..977b5e3c7 100644 --- a/assets/js/54.10818dcd.js +++ b/assets/js/54.2cf53462.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[54],{360:function(t,e,o){"use strict";o.r(e);var s=o(0),r=Object(s.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"concepts"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#concepts"}},[t._v("#")]),t._v(" Concepts")]),t._v(" "),e("p",[t._v("Cadence is a new developer friendly way to develop distributed applications.")]),t._v(" "),e("p",[t._v("It borrows the core terminology from the workflow-automation space. So its concepts include "),e("RouterLink",{attrs:{to:"/docs/03-concepts/01-workflows/"}},[t._v("workflows")]),t._v(" and "),e("RouterLink",{attrs:{to:"/docs/03-concepts/02-activities/"}},[t._v("activities")]),t._v(". "),e("Term",{attrs:{term:"workflow",show:"Workflows"}}),t._v(" can react to "),e("RouterLink",{attrs:{to:"/docs/03-concepts/03-events/"}},[t._v("events")]),t._v(" and return internal state through "),e("RouterLink",{attrs:{to:"/docs/03-concepts/04-queries/"}},[t._v("queries")]),t._v(".")],1),t._v(" "),e("p",[t._v("The "),e("RouterLink",{attrs:{to:"/docs/03-concepts/05-topology/"}},[t._v("deployment topology")]),t._v(" explains how all these concepts are mapped to deployable software components.")],1),t._v(" "),e("p",[t._v("The "),e("RouterLink",{attrs:{to:"/docs/03-concepts/10-http-api/"}},[t._v("HTTP API reference")]),t._v(" describes how to use HTTP API to interact with Cadence server.")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[54],{361:function(t,e,o){"use strict";o.r(e);var s=o(0),r=Object(s.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"concepts"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#concepts"}},[t._v("#")]),t._v(" Concepts")]),t._v(" "),e("p",[t._v("Cadence is a new developer friendly way to develop distributed applications.")]),t._v(" "),e("p",[t._v("It borrows the core terminology from the workflow-automation space. So its concepts include "),e("RouterLink",{attrs:{to:"/docs/03-concepts/01-workflows/"}},[t._v("workflows")]),t._v(" and "),e("RouterLink",{attrs:{to:"/docs/03-concepts/02-activities/"}},[t._v("activities")]),t._v(". 
"),e("Term",{attrs:{term:"workflow",show:"Workflows"}}),t._v(" can react to "),e("RouterLink",{attrs:{to:"/docs/03-concepts/03-events/"}},[t._v("events")]),t._v(" and return internal state through "),e("RouterLink",{attrs:{to:"/docs/03-concepts/04-queries/"}},[t._v("queries")]),t._v(".")],1),t._v(" "),e("p",[t._v("The "),e("RouterLink",{attrs:{to:"/docs/03-concepts/05-topology/"}},[t._v("deployment topology")]),t._v(" explains how all these concepts are mapped to deployable software components.")],1),t._v(" "),e("p",[t._v("The "),e("RouterLink",{attrs:{to:"/docs/03-concepts/10-http-api/"}},[t._v("HTTP API reference")]),t._v(" describes how to use HTTP API to interact with Cadence server.")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/55.511b85cf.js b/assets/js/55.c3531289.js similarity index 98% rename from assets/js/55.511b85cf.js rename to assets/js/55.c3531289.js index acbaa5685..a58c75675 100644 --- a/assets/js/55.511b85cf.js +++ b/assets/js/55.c3531289.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[55],{361:function(e,a,t){"use strict";t.r(a);var r=t(0),s=Object(r.a)({},(function(){var e=this,a=e._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[a("h1",{attrs:{id:"client-sdk-overview"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#client-sdk-overview"}},[e._v("#")]),e._v(" Client SDK Overview")]),e._v(" "),a("ul",[a("li",[e._v("Samples: "),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber/cadence-java-samples"),a("OutboundLink")],1)]),e._v(" "),a("li",[e._v("JavaDoc documentation: "),a("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://www.javadoc.io/doc/com.uber.cadence/cadence-client"),a("OutboundLink")],1)])]),e._v(" "),a("h2",{attrs:{id:"javadoc-packages"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#javadoc-packages"}},[e._v("#")]),e._v(" "),a("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client/latest/index.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("JavaDoc Packages"),a("OutboundLink")],1)]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-activity"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-activity"}},[e._v("#")]),e._v(" com.uber.cadence.activity")]),e._v(" "),a("p",[e._v("APIs to implement activity: accessing activity info, or sending heartbeat.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-client"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-client"}},[e._v("#")]),e._v(" com.uber.cadence.client")]),e._v(" "),a("p",[e._v("APIs for external application code to interact with Cadence workflows: start workflows, send signals or query workflows.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-workflow"}},[e._v("#")]),e._v(" com.uber.cadence.workflow")]),e._v(" "),a("p",[e._v("APIs to implement workflows.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-worker"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-worker"}},[e._v("#")]),e._v(" com.uber.cadence.worker")]),e._v(" "),a("p",[e._v("APIs to configure and start workers.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-testing"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-testing"}},[e._v("#")]),e._v(" 
com.uber.cadence.testing")]),e._v(" "),a("p",[e._v("APIs to write unit tests for workflows.")]),e._v(" "),a("h2",{attrs:{id:"samples"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#samples"}},[e._v("#")]),e._v(" "),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/tree/master/src/main/java/com/uber/cadence/samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("Samples"),a("OutboundLink")],1)]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-samples-hello"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-samples-hello"}},[e._v("#")]),e._v(" com.uber.cadence.samples.hello")]),e._v(" "),a("p",[e._v("Samples of how to use the basic feature: activity, local activity, ChildWorkflow, Query, etc.\nThis is the most important package you need to start with.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-samples-bookingsaga"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-samples-bookingsaga"}},[e._v("#")]),e._v(" com.uber.cadence.samples.bookingsaga")]),e._v(" "),a("p",[e._v("An end-to-end example to write workflow using SAGA APIs.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-samples-fileprocessing"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-samples-fileprocessing"}},[e._v("#")]),e._v(" com.uber.cadence.samples.fileprocessing")]),e._v(" "),a("p",[e._v("An end-to-end example to write workflows to download a file, zips it, and uploads it to a destination.")]),e._v(" "),a("p",[e._v("An important requirement for such a workflow is that while a first activity can run\non any host, the second and third must run on the same host as the first one. This is achieved\nthrough use of a host specific task list. The first activity returns the name of the host\nspecific task list and all other activities are dispatched using the stub that is configured with\nit. 
This assumes that FileProcessingWorker has a worker running on the same task list.")])])}),[],!1,null,null,null);a.default=s.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[55],{360:function(e,a,t){"use strict";t.r(a);var r=t(0),s=Object(r.a)({},(function(){var e=this,a=e._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[a("h1",{attrs:{id:"client-sdk-overview"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#client-sdk-overview"}},[e._v("#")]),e._v(" Client SDK Overview")]),e._v(" "),a("ul",[a("li",[e._v("Samples: "),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber/cadence-java-samples"),a("OutboundLink")],1)]),e._v(" "),a("li",[e._v("JavaDoc documentation: "),a("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://www.javadoc.io/doc/com.uber.cadence/cadence-client"),a("OutboundLink")],1)])]),e._v(" "),a("h2",{attrs:{id:"javadoc-packages"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#javadoc-packages"}},[e._v("#")]),e._v(" "),a("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client/latest/index.html",target:"_blank",rel:"noopener noreferrer"}},[e._v("JavaDoc Packages"),a("OutboundLink")],1)]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-activity"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-activity"}},[e._v("#")]),e._v(" com.uber.cadence.activity")]),e._v(" "),a("p",[e._v("APIs to implement activity: accessing activity info, or sending heartbeat.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-client"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-client"}},[e._v("#")]),e._v(" com.uber.cadence.client")]),e._v(" "),a("p",[e._v("APIs for external application code to interact with Cadence workflows: start workflows, send signals or query workflows.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-workflow"}},[e._v("#")]),e._v(" com.uber.cadence.workflow")]),e._v(" "),a("p",[e._v("APIs to implement workflows.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-worker"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-worker"}},[e._v("#")]),e._v(" com.uber.cadence.worker")]),e._v(" "),a("p",[e._v("APIs to configure and start workers.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-testing"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-testing"}},[e._v("#")]),e._v(" com.uber.cadence.testing")]),e._v(" "),a("p",[e._v("APIs to write unit tests for workflows.")]),e._v(" "),a("h2",{attrs:{id:"samples"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#samples"}},[e._v("#")]),e._v(" "),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/tree/master/src/main/java/com/uber/cadence/samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("Samples"),a("OutboundLink")],1)]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-samples-hello"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-samples-hello"}},[e._v("#")]),e._v(" com.uber.cadence.samples.hello")]),e._v(" "),a("p",[e._v("Samples of how to use the basic feature: activity, local activity, ChildWorkflow, Query, etc.\nThis is the most important package you need to start with.")]),e._v(" 
"),a("h3",{attrs:{id:"com-uber-cadence-samples-bookingsaga"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-samples-bookingsaga"}},[e._v("#")]),e._v(" com.uber.cadence.samples.bookingsaga")]),e._v(" "),a("p",[e._v("An end-to-end example to write workflow using SAGA APIs.")]),e._v(" "),a("h3",{attrs:{id:"com-uber-cadence-samples-fileprocessing"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#com-uber-cadence-samples-fileprocessing"}},[e._v("#")]),e._v(" com.uber.cadence.samples.fileprocessing")]),e._v(" "),a("p",[e._v("An end-to-end example to write workflows to download a file, zips it, and uploads it to a destination.")]),e._v(" "),a("p",[e._v("An important requirement for such a workflow is that while a first activity can run\non any host, the second and third must run on the same host as the first one. This is achieved\nthrough use of a host specific task list. The first activity returns the name of the host\nspecific task list and all other activities are dispatched using the stub that is configured with\nit. This assumes that FileProcessingWorker has a worker running on the same task list.")])])}),[],!1,null,null,null);a.default=s.exports}}]); \ No newline at end of file diff --git a/assets/js/61.480667b8.js b/assets/js/61.6f931f00.js similarity index 99% rename from assets/js/61.480667b8.js rename to assets/js/61.6f931f00.js index a237f87ab..9dd882fdb 100644 --- a/assets/js/61.480667b8.js +++ b/assets/js/61.6f931f00.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[61],{367:function(t,s,a){"use strict";a.r(s);var n=a(0),e=Object(n.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[s("h1",{attrs:{id:"versioning"}},[s("a",{staticClass:"header-anchor",attrs:{href:"#versioning"}},[t._v("#")]),t._v(" Versioning")]),t._v(" "),s("p",[t._v("As outlined in the "),s("em",[t._v("Workflow Implementation Constraints")]),t._v(" section, "),s("Term",{attrs:{term:"workflow"}}),t._v(" code has to be deterministic by taking the same\ncode path when replaying history "),s("Term",{attrs:{term:"event",show:"events"}}),t._v(". Any "),s("Term",{attrs:{term:"workflow"}}),t._v(" code change that affects the order in which "),s("Term",{attrs:{term:"decision",show:"decisions"}}),t._v(" are generated breaks\nthis assumption. The solution that allows updating code of already running "),s("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" is to keep both the old and new code.\nWhen replaying, use the code version that the "),s("Term",{attrs:{term:"event",show:"events"}}),t._v(" were generated with and when executing a new code path, always take the\nnew code.")],1),t._v(" "),s("p",[t._v("Use the "),s("code",[t._v("Workflow.getVersion")]),t._v(" function to return a version of the code that should be executed and then use the returned\nvalue to pick a correct branch. 
Let's look at an example.")]),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("processFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Arguments")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" localName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("try")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n localName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("download")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getSourceBucketName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getSourceFilename")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("processFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("upload")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getTargetBucketName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" 
args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getTargetFilename")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("finally")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// File was downloaded.")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("deleteLocalFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// File was processed.")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("deleteLocalFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("Now we decide to calculate the processed file checksum and pass it to upload.\nThe correct way to implement this change is:")]),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("processFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Arguments")]),t._v(" 
args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" localName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("try")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n localName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("download")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getSourceBucketName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getSourceFilename")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("processFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" version "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getVersion")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"checksumAdded"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DEFAULT_VERSION")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token 
keyword"}},[t._v("if")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("version "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("==")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DEFAULT_VERSION")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("upload")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getTargetBucketName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getTargetFilename")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("else")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("long")]),t._v(" checksum "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("calculateChecksum")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("uploadWithChecksum")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getTargetBucketName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getTargetFilename")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" checksum"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("finally")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// File was downloaded.")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("deleteLocalFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// File was processed.")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("deleteLocalFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("Later, when all "),s("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" that use the old version are completed, the old branch can be removed.")],1),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("processFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Arguments")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" localName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token 
keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("try")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n localName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("download")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getSourceBucketName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getSourceFilename")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("processFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// getVersion call is left here to ensure that any attempt to replay history")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// for a different version fails. 
It can be removed later when there is no possibility")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// of this happening.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getVersion")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"checksumAdded"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("long")]),t._v(" checksum "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("calculateChecksum")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("uploadWithChecksum")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getTargetBucketName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getTargetFilename")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" checksum"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("finally")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// File was downloaded.")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("deleteLocalFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// File was processed.")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("deleteLocalFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("The ID that is passed to the "),s("code",[t._v("getVersion")]),t._v(" call identifies the change. Each change is expected to have its own ID. But if\na change spawns multiple places in the "),s("Term",{attrs:{term:"workflow"}}),t._v(" code and the new code should be either executed in all of them or\nin none of them, then they have to share the ID.")],1)])}),[],!1,null,null,null);s.default=e.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[61],{369:function(t,s,a){"use strict";a.r(s);var n=a(0),e=Object(n.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[s("h1",{attrs:{id:"versioning"}},[s("a",{staticClass:"header-anchor",attrs:{href:"#versioning"}},[t._v("#")]),t._v(" Versioning")]),t._v(" "),s("p",[t._v("As outlined in the "),s("em",[t._v("Workflow Implementation Constraints")]),t._v(" section, "),s("Term",{attrs:{term:"workflow"}}),t._v(" code has to be deterministic by taking the same\ncode path when replaying history "),s("Term",{attrs:{term:"event",show:"events"}}),t._v(". Any "),s("Term",{attrs:{term:"workflow"}}),t._v(" code change that affects the order in which "),s("Term",{attrs:{term:"decision",show:"decisions"}}),t._v(" are generated breaks\nthis assumption. The solution that allows updating code of already running "),s("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" is to keep both the old and new code.\nWhen replaying, use the code version that the "),s("Term",{attrs:{term:"event",show:"events"}}),t._v(" were generated with and when executing a new code path, always take the\nnew code.")],1),t._v(" "),s("p",[t._v("Use the "),s("code",[t._v("Workflow.getVersion")]),t._v(" function to return a version of the code that should be executed and then use the returned\nvalue to pick a correct branch. 
Let's look at an example.

```java
public void processFile(Arguments args) {
    String localName = null;
    String processedName = null;
    try {
        localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());
        processedName = activities.processFile(localName);
        activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);
    } finally {
        if (localName != null) { // File was downloaded.
            activities.deleteLocalFile(localName);
        }
        if (processedName != null) { // File was processed.
            activities.deleteLocalFile(processedName);
        }
    }
}
```

Now we decide to calculate the processed file checksum and pass it to upload. The correct way to implement this change is:

```java
public void processFile(Arguments args) {
    String localName = null;
    String processedName = null;
    try {
        localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());
        processedName = activities.processFile(localName);
        int version = Workflow.getVersion("checksumAdded", Workflow.DEFAULT_VERSION, 1);
        if (version == Workflow.DEFAULT_VERSION) {
            activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);
        } else {
            long checksum = activities.calculateChecksum(processedName);
            activities.uploadWithChecksum(
                args.getTargetBucketName(), args.getTargetFilename(), processedName, checksum);
        }
    } finally {
        if (localName != null) { // File was downloaded.
            activities.deleteLocalFile(localName);
        }
        if (processedName != null) { // File was processed.
            activities.deleteLocalFile(processedName);
        }
    }
}
```

`getVersion` takes the change ID plus the minimum and maximum versions the current code supports; it returns `Workflow.DEFAULT_VERSION` when replaying history recorded before the change and the maximum supported version for new executions.

Later, when all workflows that use the old version are completed, the old branch can be removed.

```java
public void processFile(Arguments args) {
    String localName = null;
    String processedName = null;
    try {
        localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());
        processedName = activities.processFile(localName);
        // getVersion call is left here to ensure that any attempt to replay history
        // for a different version fails. It can be removed later when there is no
        // possibility of this happening.
        Workflow.getVersion("checksumAdded", 1, 1);
        long checksum = activities.calculateChecksum(processedName);
        activities.uploadWithChecksum(
            args.getTargetBucketName(), args.getTargetFilename(), processedName, checksum);
    } finally {
        if (localName != null) { // File was downloaded.
            activities.deleteLocalFile(localName);
        }
        if (processedName != null) { // File was processed.
            activities.deleteLocalFile(processedName);
        }
    }
}
```
function"}},[t._v("deleteLocalFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("localName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// File was processed.")]),t._v("\n activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("deleteLocalFile")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("processedName"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("The ID that is passed to the "),s("code",[t._v("getVersion")]),t._v(" call identifies the change. Each change is expected to have its own ID. But if\na change spawns multiple places in the "),s("Term",{attrs:{term:"workflow"}}),t._v(" code and the new code should be either executed in all of them or\nin none of them, then they have to share the ID.")],1)])}),[],!1,null,null,null);s.default=e.exports}}]); \ No newline at end of file diff --git a/assets/js/62.6eea74df.js b/assets/js/62.d377d742.js similarity index 99% rename from assets/js/62.6eea74df.js rename to assets/js/62.d377d742.js index ecb481769..2c8f8e1b9 100644 --- a/assets/js/62.6eea74df.js +++ b/assets/js/62.d377d742.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[62],{368:function(e,t,s){"use strict";s.r(t);var r=s(0),n=Object(r.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"distributed-cron"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#distributed-cron"}},[e._v("#")]),e._v(" Distributed CRON")]),e._v(" "),t("p",[e._v("It is relatively straightforward to turn any Cadence "),t("Term",{attrs:{term:"workflow"}}),e._v(" into a Cron "),t("Term",{attrs:{term:"workflow"}}),e._v(". 
# Distributed CRON

It is relatively straightforward to turn any Cadence workflow into a cron workflow. All you need to do is supply a cron schedule when starting the workflow using the CronSchedule parameter of [StartWorkflowOptions](https://static.javadoc.io/com.uber.cadence/cadence-client/2.5.1/com/uber/cadence/client/WorkflowOptions.html).

You can also start a workflow with an optional cron schedule through the Cadence CLI by passing the `--cron` argument.
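As a minimal sketch of supplying the schedule from Java (the `HelloWorkflow` interface, its `run` method, and the `workflowClient` variable are illustrative):

```java
WorkflowOptions options = new WorkflowOptions.Builder()
        .setTaskList(TASK_LIST)
        .setExecutionStartToCloseTimeout(Duration.ofDays(365))
        .setCronSchedule("15 8 * * *") // daily at 8:15am UTC
        .build();
HelloWorkflow workflow = workflowClient.newWorkflowStub(HelloWorkflow.class, options);
WorkflowClient.start(workflow::run); // returns immediately; runs fire on the schedule
```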
For workflows with CronSchedule:

- CronSchedule is based on UTC time. For example, the cron schedule "15 8 * * *" runs daily at 8:15am UTC, and "*/2 * * * 5-6" schedules a workflow every two minutes on Fridays and Saturdays.
- If a workflow fails and a RetryPolicy is also supplied in the StartWorkflowOptions, the workflow will retry based on that RetryPolicy. While the workflow is retrying, the server will not schedule the next cron run.
- The Cadence server only schedules the next cron run after the current run is completed. If the next schedule is due while a workflow is running (or retrying), then it will skip that schedule.
- Cron workflows will not stop until they are terminated or cancelled.

Cadence supports the standard cron spec:

```java
// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run
// as a cron based on the schedule. The scheduling will be based on UTC time. The schedule for the next run only happens
// after the current run is completed/failed/timeout. If a RetryPolicy is also supplied, and the workflow failed
// or timed out, the workflow will be retried based on the retry policy. While the workflow is retrying, it won't
// schedule its next run. If the next schedule is due while the workflow is running (or retrying), then it will skip that
// schedule. Cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).
// The cron spec is as follows:
// ┌───────────── minute (0 - 59)
// │ ┌───────────── hour (0 - 23)
// │ │ ┌───────────── day of the month (1 - 31)
// │ │ │ ┌───────────── month (1 - 12)
// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
// │ │ │ │ │
// │ │ │ │ │
// * * * * *
CronSchedule string
```

Cadence also supports more [advanced cron expressions](https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format).

The [crontab guru site](https://crontab.guru/) is useful for testing your cron expressions.

## Convert an existing cron workflow

Before CronSchedule was available, the usual approach to implementing cron workflows was to use a delay timer as the last step and then return `ContinueAsNew`. One problem with that implementation is that if the workflow fails or times out, the cron would stop.

To convert those workflows to use Cadence CronSchedule, all you need to do is remove the delay timer and return without calling `ContinueAsNew`, then start the workflow with the desired CronSchedule, as in the sketch below.
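A minimal before-and-after sketch of the conversion, with the hypothetical `doWork` standing in for the business logic:

```java
// Before: the workflow schedules itself with a delay timer and ContinueAsNew.
public void legacyCronWorkflow() {
    doWork();
    Workflow.sleep(Duration.ofHours(24)); // delay timer as the last step
    Workflow.continueAsNew();             // start the next run manually
}

// After: just do the work and return; the server schedules the next run
// according to the CronSchedule supplied at start.
public void cronScheduledWorkflow() {
    doWork();
}
```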
## Retrieve last successful result

Sometimes it is useful to obtain the progress of previous successful runs. This is supported by two client library APIs: `HasLastCompletionResult` and `GetLastCompletionResult`. Below is an example of how to use this in Java:

```java
public String cronWorkflow() {
    String lastProcessedFileName = Workflow.getLastCompletionResult(String.class);

    // Process work starting from the lastProcessedFileName.
    // Business logic implementation goes here.
    // Update lastProcessedFileName to the new value.

    return lastProcessedFileName;
}
```

Note that this works even if one of the cron schedule runs failed. The next schedule will still get the last successful result if the workflow ever completed successfully at least once. For example, for a daily cron workflow, if the first day's run succeeds and the second day's fails, then the third day's run will still get the first day's result using these APIs.
The "),s("Term",{attrs:{term:"worker"}}),t._v(" polls the "),s("em",[t._v("Cadence service")]),t._v(" for "),s("Term",{attrs:{term:"task",show:"tasks"}}),t._v(", performs those "),s("Term",{attrs:{term:"task",show:"tasks"}}),t._v(", and communicates "),s("Term",{attrs:{term:"task"}}),t._v(" execution results back to the "),s("em",[t._v("Cadence service")]),t._v(". "),s("Term",{attrs:{term:"worker",show:"Worker"}}),t._v(" services are developed, deployed, and operated by Cadence customers.")],1),t._v(" "),s("p",[t._v("You can run a Cadence "),s("Term",{attrs:{term:"worker"}}),t._v(" in a new or an existing service. Use the framework APIs to start the Cadence "),s("Term",{attrs:{term:"worker"}}),t._v(" and link in all "),s("Term",{attrs:{term:"activity"}}),t._v(" and "),s("Term",{attrs:{term:"workflow"}}),t._v(" implementations that you require the service to execute.")],1),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),t._v(" factory "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("workflowClient"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactoryOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxWorkflowThreadCount")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setStickyCacheSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDisableStickyExecution")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("false")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" 
factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxConcurrentActivityExecutionSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxConcurrentWorkflowExecutionSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n \n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Workflows are stateful. So you need a type to create instances.")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Activities are stateless and thread safe. 
So a shared instance is used.")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerActivitiesImplementations")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Start listening to the workflow and activity task lists.")]),t._v("\n factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n")])])]),s("p",[t._v("The code is slightly different if you are using client version prior to 3.0.0:")]),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),t._v(" factory "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("FactoryOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxWorkflowThreadCount")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setCacheMaximumSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDisableStickyExecution")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("false")]),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxConcurrentActivityExecutionSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxConcurrentWorkflowExecutionSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Workflows are stateful. So you need a type to create instances.")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Activities are stateless and thread safe. 
So a shared instance is used.")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerActivitiesImplementations")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Start listening to the workflow and activity task lists.")]),t._v("\n factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n")])])]),s("p",[t._v("The "),s("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/worker/WorkerFactoryOptions.html",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkerFactoryOptions"),s("OutboundLink")],1),t._v(" includes those that need to be shared across workers on the hosts like thread pool, sticky cache.")]),t._v(" "),s("p",[t._v("In "),s("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/worker/WorkerOptions.Builder.html",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkerOptions"),s("OutboundLink")],1),t._v(" you can customize things like pollerOptions, activities per second.")])])}),[],!1,null,null,null);s.default=e.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[63],{374:function(t,s,a){"use strict";a.r(s);var n=a(0),e=Object(n.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[s("h1",{attrs:{id:"worker-service"}},[s("a",{staticClass:"header-anchor",attrs:{href:"#worker-service"}},[t._v("#")]),t._v(" Worker service")]),t._v(" "),s("p",[t._v("A "),s("Term",{attrs:{term:"worker"}}),t._v(" or "),s("em",[s("Term",{attrs:{term:"worker"}}),t._v(" service")],1),t._v(" is a service that hosts the "),s("Term",{attrs:{term:"workflow"}}),t._v(" and "),s("Term",{attrs:{term:"activity"}}),t._v(" implementations. The "),s("Term",{attrs:{term:"worker"}}),t._v(" polls the "),s("em",[t._v("Cadence service")]),t._v(" for "),s("Term",{attrs:{term:"task",show:"tasks"}}),t._v(", performs those "),s("Term",{attrs:{term:"task",show:"tasks"}}),t._v(", and communicates "),s("Term",{attrs:{term:"task"}}),t._v(" execution results back to the "),s("em",[t._v("Cadence service")]),t._v(". "),s("Term",{attrs:{term:"worker",show:"Worker"}}),t._v(" services are developed, deployed, and operated by Cadence customers.")],1),t._v(" "),s("p",[t._v("You can run a Cadence "),s("Term",{attrs:{term:"worker"}}),t._v(" in a new or an existing service. 
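The worker example below takes a `workflowClient` as a given. A minimal sketch of constructing one, assuming cadence-client 3.x and a Cadence service reachable on the default local port (the host and port values are illustrative):

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowClientOptions;
import com.uber.cadence.serviceclient.ClientOptions;
import com.uber.cadence.serviceclient.IWorkflowService;
import com.uber.cadence.serviceclient.WorkflowServiceTChannel;

// Sketch only: the connection details below are assumptions, not part of
// the example that follows.
IWorkflowService service = new WorkflowServiceTChannel(
    ClientOptions.newBuilder()
        .setHost("127.0.0.1")
        .setPort(7933)
        .build());
WorkflowClient workflowClient = WorkflowClient.newInstance(
    service,
    WorkflowClientOptions.newBuilder().setDomain("test-domain").build());
```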
Use the framework APIs to start the Cadence "),s("Term",{attrs:{term:"worker"}}),t._v(" and link in all "),s("Term",{attrs:{term:"activity"}}),t._v(" and "),s("Term",{attrs:{term:"workflow"}}),t._v(" implementations that you require the service to execute.")],1),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),t._v(" factory "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("workflowClient"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactoryOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxWorkflowThreadCount")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setStickyCacheSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDisableStickyExecution")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("false")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxConcurrentActivityExecutionSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxConcurrentWorkflowExecutionSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n \n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Workflows are stateful. So you need a type to create instances.")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Activities are stateless and thread safe. 
So a shared instance is used.")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerActivitiesImplementations")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Start listening to the workflow and activity task lists.")]),t._v("\n factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n")])])]),s("p",[t._v("The code is slightly different if you are using client version prior to 3.0.0:")]),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),t._v(" factory "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("FactoryOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxWorkflowThreadCount")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("1000")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setCacheMaximumSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDisableStickyExecution")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("false")]),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxConcurrentActivityExecutionSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setMaxConcurrentWorkflowExecutionSize")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Workflows are stateful. So you need a type to create instances.")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Activities are stateless and thread safe. 
So a shared instance is used.")]),t._v("\n    worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerActivitiesImplementations")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n    "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Start listening to the workflow and activity task lists.")]),t._v("\n    factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n")])])]),s("p",[t._v("The "),s("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/worker/WorkerFactoryOptions.html",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkerFactoryOptions"),s("OutboundLink")],1),t._v(" includes the options that are shared across all workers on the host, such as the thread pool and the sticky cache.")]),t._v(" "),s("p",[t._v("In "),s("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/worker/WorkerOptions.Builder.html",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkerOptions"),s("OutboundLink")],1),t._v(" you can customize per-worker settings, such as the poller options and the maximum activities per second.")])])}),[],!1,null,null,null);s.default=e.exports}}]); \ No newline at end of file diff --git a/assets/js/64.9669571c.js b/assets/js/64.49b5f379.js similarity index 99% rename from assets/js/64.9669571c.js rename to assets/js/64.49b5f379.js index e2af38f72..2eee2d2f2 100644 --- a/assets/js/64.9669571c.js +++ b/assets/js/64.49b5f379.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[64],{370:function(t,a,e){"use strict";e.r(a);var s=e(0),r=Object(s.a)({},(function(){var t=this,a=t._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[a("h1",{attrs:{id:"signals"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signals"}},[t._v("#")]),t._v(" Signals")]),t._v(" "),a("p",[a("Term",{attrs:{term:"signal",show:"Signals"}}),t._v(" provide a mechanism to send data directly to a running "),a("Term",{attrs:{term:"workflow"}}),t._v(". Previously, you had\ntwo options for passing data to the "),a("Term",{attrs:{term:"workflow"}}),t._v(" implementation:")],1),t._v(" "),a("ul",[a("li",[t._v("Via start parameters")]),t._v(" "),a("li",[t._v("As return values from "),a("Term",{attrs:{term:"activity",show:"activities"}})],1)]),t._v(" "),a("p",[t._v("With start parameters, we could only pass in values before "),a("Term",{attrs:{term:"workflow_execution"}}),t._v(" began.")],1),t._v(" "),a("p",[t._v("Return values from "),a("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" allowed us to pass information to a running "),a("Term",{attrs:{term:"workflow"}}),t._v(", but this\napproach comes with its own complications. One major drawback is reliance on polling. 
This means\nthat the data needs to be stored in a third-party location until it's ready to be picked up by\nthe "),a("Term",{attrs:{term:"activity"}}),t._v(". Further, the lifecycle of this "),a("Term",{attrs:{term:"activity"}}),t._v(" requires management, and the "),a("Term",{attrs:{term:"activity"}}),t._v("\nrequires manual restart if it fails before acquiring the data.")],1),t._v(" "),a("p",[a("Term",{attrs:{term:"signal",show:"Signals"}}),t._v(", on the other hand, provide a fully asynchronous and durable mechanism for providing data to\na running "),a("Term",{attrs:{term:"workflow"}}),t._v(". When a "),a("Term",{attrs:{term:"signal"}}),t._v(" is received for a running "),a("Term",{attrs:{term:"workflow"}}),t._v(", Cadence persists the "),a("Term",{attrs:{term:"event"}}),t._v("\nand the payload in the "),a("Term",{attrs:{term:"workflow"}}),t._v(" history. The "),a("Term",{attrs:{term:"workflow"}}),t._v(" can then process the "),a("Term",{attrs:{term:"signal"}}),t._v(" at any time\nafterwards without the risk of losing the information. The "),a("Term",{attrs:{term:"workflow"}}),t._v(" also has the option to stop\nexecution by blocking on a "),a("Term",{attrs:{term:"signal"}}),t._v(" channel.")],1),t._v(" "),a("h2",{attrs:{id:"implement-signal-handler-in-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#implement-signal-handler-in-workflow"}},[t._v("#")]),t._v(" Implement Signal Handler in Workflow")]),t._v(" "),a("p",[t._v("See the below example from "),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/hello/HelloSignal.java",target:"_blank",rel:"noopener noreferrer"}},[t._v("sample"),a("OutboundLink")],1),t._v(".")]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@SignalMethod")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateGreeting")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" 
"),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" count "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("while")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Bye"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("equals")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n logger"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("++")]),t._v("count "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('": "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" oldGreeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" greeting"),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("await")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("->")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Objects")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("equals")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" oldGreeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n logger"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("++")]),t._v("count "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('": "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateGreeting")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("this")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("greeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("The "),a("Term",{attrs:{term:"workflow"}}),t._v(" interface now has a new method annotated with @SignalMethod. 
It is a callback method that is invoked\nevery time a new "),a("Term",{attrs:{term:"signal"}}),t._v(' of "HelloWorld::updateGreeting" is delivered to a '),a("Term",{attrs:{term:"workflow"}}),t._v(". The "),a("Term",{attrs:{term:"workflow"}}),t._v(" interface can have only\none @WorkflowMethod, which is the "),a("em",[t._v("main")]),t._v(" function of the "),a("Term",{attrs:{term:"workflow"}}),t._v(", and as many "),a("Term",{attrs:{term:"signal"}}),t._v(" methods as needed.")],1),t._v(" "),a("p",[t._v("The updated "),a("Term",{attrs:{term:"workflow"}}),t._v(" implementation demonstrates a few important Cadence concepts. The first is that a "),a("Term",{attrs:{term:"workflow"}}),t._v(" is stateful and can\nhave fields of any complex type. Another is that the "),a("code",[t._v("Workflow.await")]),t._v(" function blocks until the function it receives as a parameter evaluates to true. The condition is evaluated only on "),a("Term",{attrs:{term:"workflow"}}),t._v(" state changes, so it is not a busy wait in the traditional sense.")],1),t._v(" "),a("h2",{attrs:{id:"signal-from-command-line"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signal-from-command-line"}},[t._v("#")]),t._v(" Signal From Command Line")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("cadence: "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloSignal"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--tasklist")]),t._v(" HelloWorldTaskList "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_type")]),t._v(" HelloWorld::sayHello "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--execution_timeout")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("3600")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"World'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nStarted Workflow Id: HelloSignal, run Id: 6fa204cb-f478-469a-9432-78060b83b6cd\n')])])]),a("p",[t._v("Program output:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":53:56.120 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(": Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Let's send a "),a("Term",{attrs:{term:"signal"}}),t._v(" using "),a("Term",{attrs:{term:"CLI",show:"CLI"}})],1),t._v(" "),a("div",{staticClass:"language-bash 
extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("cadence: "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloSignal"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Hi'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\n')])])]),a("p",[t._v("Program output:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":53:56.120 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(": Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":54:57.901 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),t._v(": Hi World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Try sending the same "),a("Term",{attrs:{term:"signal"}}),t._v(" with the same input again. Note that the output doesn't change. This happens because the await condition\ndoesn't unblock when it sees the same value. 
But a new greeting unblocks it:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("cadence: "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloSignal"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Welcome'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\n')])])]),a("p",[t._v("Program output:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":53:56.120 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(": Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":54:57.901 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),t._v(": Hi World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":56:24.400 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("3")]),t._v(": Welcome World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Now shut down the "),a("Term",{attrs:{term:"worker"}}),t._v(" and send the same "),a("Term",{attrs:{term:"signal"}}),t._v(" again:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("cadence: "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),a("span",{pre:!0,attrs:{class:"token parameter 
variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloSignal"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Welcome'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\n')])])]),a("p",[t._v("Note that sending "),a("Term",{attrs:{term:"signal",show:"signals"}}),t._v(" as well as starting "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" does not need a "),a("Term",{attrs:{term:"worker"}}),t._v(" running. The requests are queued inside the Cadence service.")],1),t._v(" "),a("p",[t._v("Now bring the "),a("Term",{attrs:{term:"worker"}}),t._v(" back. Note that it doesn't log anything besides the standard startup messages.\nThis occurs because it ignores the queued "),a("Term",{attrs:{term:"signal"}}),t._v(" that contains the same input as the current value of greeting.\nNote that the restart of the "),a("Term",{attrs:{term:"worker"}}),t._v(" didn't affect the "),a("Term",{attrs:{term:"workflow_execution"}}),t._v(". It is still blocked on the same line of code as before the failure.\nThis is the most important feature of Cadence. The "),a("Term",{attrs:{term:"workflow"}}),t._v(" code doesn't need to deal with "),a("Term",{attrs:{term:"worker"}}),t._v(" failures at all. Its state is fully recovered to its current state that includes all the local variables and threads.")],1),t._v(" "),a("p",[t._v("Let's look at the line where the "),a("Term",{attrs:{term:"workflow"}}),t._v(" is blocked:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow stack "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello2"')]),t._v("\nQuery result:\n"),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"workflow-root: (BLOCKED on await)\ncom.uber.cadence.internal.sync.SyncDecisionContext.await(SyncDecisionContext.java:546)\ncom.uber.cadence.internal.sync.WorkflowInternal.await(WorkflowInternal.java:243)\ncom.uber.cadence.workflow.Workflow.await(Workflow.java:611)\ncom.uber.cadence.samples.hello.GettingStarted'),a("span",{pre:!0,attrs:{class:"token variable"}},[t._v("$HelloWorldImpl")]),t._v('.sayHello(GettingStarted.java:32)\nsun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nsun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)"')]),t._v("\n")])])]),a("p",[t._v("Yes, indeed the "),a("Term",{attrs:{term:"workflow"}}),t._v(" is blocked on await. 
This feature works for any open "),a("Term",{attrs:{term:"workflow"}}),t._v(", greatly simplifying troubleshooting in production.\nLet's complete the "),a("Term",{attrs:{term:"workflow"}}),t._v(" by sending a "),a("Term",{attrs:{term:"signal"}}),t._v(' with a "Bye" greeting:')],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":58:22.962 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("4")]),t._v(": Bye World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Note that the value of the count variable was not lost during the restart.")]),t._v(" "),a("p",[t._v("Also note that while a single "),a("Term",{attrs:{term:"worker"}}),t._v(" instance is used for this\nwalkthrough, any real production deployment has multiple "),a("Term",{attrs:{term:"worker"}}),t._v(" instances running. So any "),a("Term",{attrs:{term:"worker"}}),t._v(" failure or restart does not delay any\n"),a("Term",{attrs:{term:"workflow_execution"}}),t._v(" because it is just migrated to any other available "),a("Term",{attrs:{term:"worker"}}),t._v(".")],1),t._v(" "),a("h2",{attrs:{id:"signalwithstart-from-command-line"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signalwithstart-from-command-line"}},[t._v("#")]),t._v(" SignalWithStart From Command Line")]),t._v(" "),a("p",[t._v("You may not know if a "),a("Term",{attrs:{term:"workflow"}}),t._v(" is running and can accept a "),a("Term",{attrs:{term:"signal"}}),t._v(".\nThe signalWithStart feature allows you to send a "),a("Term",{attrs:{term:"signal"}}),t._v(" to the current "),a("Term",{attrs:{term:"workflow"}}),t._v(" instance if one exists or to create a new\nrun and then send the "),a("Term",{attrs:{term:"signal"}}),t._v(". 
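Programmatically, the same operation is exposed through the WorkflowClient. A sketch, assuming a cadence-client version that provides newSignalWithStartRequest and BatchRequest (verify against your client's Javadoc), plus the HelloWorld interface from above; the task list, timeout, and workflow ID values are illustrative:

```java
import java.time.Duration;
import com.uber.cadence.client.BatchRequest;
import com.uber.cadence.client.WorkflowOptions;

// Sketch: signal-with-start from Java. The BatchRequest bundles one start
// invocation and one signal invocation into a single atomic call.
HelloWorld workflow = workflowClient.newWorkflowStub(
    HelloWorld.class,
    new WorkflowOptions.Builder()
        .setWorkflowId("HelloSignal")
        .setTaskList("HelloWorldTaskList")
        .setExecutionStartToCloseTimeout(Duration.ofHours(1))
        .build());
BatchRequest request = workflowClient.newSignalWithStartRequest();
request.add(workflow::updateGreeting, "Hi");  // the signal to deliver
request.add(workflow::sayHello, "World");     // the start, used only if no run is open
workflowClient.signalWithStart(request);
```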
"),a("code",[t._v("SignalWithStartWorkflow")]),t._v(" therefore doesn't take a "),a("Term",{attrs:{term:"run_ID"}}),t._v(" as a\nparameter.")],1),t._v(" "),a("p",[t._v("Learn more from the "),a("code",[t._v("--help")]),t._v(" manual:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signalwithstart "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-h")]),t._v("\nNAME:\n cadence workflow signalwithstart - signal the current "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("open")]),t._v(" workflow "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" exists, or attempt to start a new run based on IDResuePolicy and signals it\n\nUSAGE:\n cadence workflow signalwithstart "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("command options"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("arguments"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("..")]),t._v("."),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("..")]),t._v(".\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("..")]),t._v(".\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("..")]),t._v(".\n")])])]),a("h2",{attrs:{id:"signal-from-user-application-code"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signal-from-user-application-code"}},[t._v("#")]),t._v(" Signal from user/application code")]),t._v(" "),a("p",[t._v("You may want to signal workflows without running the command line.")]),t._v(" "),a("p",[t._v("The\n"),a("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client/latest/com/uber/cadence/client/WorkflowClient.html",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkflowClient"),a("OutboundLink")],1),t._v(" API allows you to send signal (or SignalWithStartWorkflow) from outside of the workflow\nto send a "),a("Term",{attrs:{term:"signal"}}),t._v(" to the current "),a("Term",{attrs:{term:"workflow"}}),t._v(".")],1),t._v(" "),a("p",[t._v("Note that when using "),a("code",[t._v("newWorkflowStub")]),t._v(" to signal a workflow, you MUST NOT passing WorkflowOptions.")]),t._v(" "),a("p",[t._v("The "),a("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/client/WorkflowClient.html#newWorkflowStub-java.lang.Class-com.uber.cadence.client.WorkflowOptions-",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkflowStub"),a("OutboundLink")],1),t._v(" with WorkflowOptions is only for starting workflows.")]),t._v(" "),a("p",[t._v("The "),a("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/client/WorkflowClient.html#newWorkflowStub-java.lang.Class-java.lang.String-",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkflowStub"),a("OutboundLink")],1),t._v(" without WorkflowOptions is for signal or 
"),a("a",{attrs:{href:"/docs/java-client/queries"}},[t._v("query")])])])}),[],!1,null,null,null);a.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[64],{368:function(t,a,e){"use strict";e.r(a);var s=e(0),r=Object(s.a)({},(function(){var t=this,a=t._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[a("h1",{attrs:{id:"signals"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signals"}},[t._v("#")]),t._v(" Signals")]),t._v(" "),a("p",[a("Term",{attrs:{term:"signal",show:"Signals"}}),t._v(" provide a mechanism to send data directly to a running "),a("Term",{attrs:{term:"workflow"}}),t._v(". Previously, you had\ntwo options for passing data to the "),a("Term",{attrs:{term:"workflow"}}),t._v(" implementation:")],1),t._v(" "),a("ul",[a("li",[t._v("Via start parameters")]),t._v(" "),a("li",[t._v("As return values from "),a("Term",{attrs:{term:"activity",show:"activities"}})],1)]),t._v(" "),a("p",[t._v("With start parameters, we could only pass in values before "),a("Term",{attrs:{term:"workflow_execution"}}),t._v(" began.")],1),t._v(" "),a("p",[t._v("Return values from "),a("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" allowed us to pass information to a running "),a("Term",{attrs:{term:"workflow"}}),t._v(", but this\napproach comes with its own complications. One major drawback is reliance on polling. This means\nthat the data needs to be stored in a third-party location until it's ready to be picked up by\nthe "),a("Term",{attrs:{term:"activity"}}),t._v(". Further, the lifecycle of this "),a("Term",{attrs:{term:"activity"}}),t._v(" requires management, and the "),a("Term",{attrs:{term:"activity"}}),t._v("\nrequires manual restart if it fails before acquiring the data.")],1),t._v(" "),a("p",[a("Term",{attrs:{term:"signal",show:"Signals"}}),t._v(", on the other hand, provide a fully asynchronous and durable mechanism for providing data to\na running "),a("Term",{attrs:{term:"workflow"}}),t._v(". When a "),a("Term",{attrs:{term:"signal"}}),t._v(" is received for a running "),a("Term",{attrs:{term:"workflow"}}),t._v(", Cadence persists the "),a("Term",{attrs:{term:"event"}}),t._v("\nand the payload in the "),a("Term",{attrs:{term:"workflow"}}),t._v(" history. The "),a("Term",{attrs:{term:"workflow"}}),t._v(" can then process the "),a("Term",{attrs:{term:"signal"}}),t._v(" at any time\nafterwards without the risk of losing the information. 
The "),a("Term",{attrs:{term:"workflow"}}),t._v(" also has the option to stop\nexecution by blocking on a "),a("Term",{attrs:{term:"signal"}}),t._v(" channel.")],1),t._v(" "),a("h2",{attrs:{id:"implement-signal-handler-in-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#implement-signal-handler-in-workflow"}},[t._v("#")]),t._v(" Implement Signal Handler in Workflow")]),t._v(" "),a("p",[t._v("See the below example from "),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/hello/HelloSignal.java",target:"_blank",rel:"noopener noreferrer"}},[t._v("sample"),a("OutboundLink")],1),t._v(".")]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@SignalMethod")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateGreeting")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" count "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("while")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Bye"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("equals")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n logger"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("++")]),t._v("count "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('": "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" oldGreeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("await")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("->")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Objects")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("equals")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" oldGreeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n logger"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("++")]),t._v("count "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('": "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateGreeting")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("this")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("greeting "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" greeting"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("The "),a("Term",{attrs:{term:"workflow"}}),t._v(" interface now has a new method annotated with @SignalMethod. It is a callback method that is invoked\nevery time a new "),a("Term",{attrs:{term:"signal"}}),t._v(' of "HelloWorld'),a("Term",{attrs:{term:""}}),t._v('updateGreeting" is delivered to a '),a("Term",{attrs:{term:"workflow"}}),t._v(". The "),a("Term",{attrs:{term:"workflow"}}),t._v(" interface can have only\none @WorkflowMethod which is a "),a("em",[t._v("main")]),t._v(" function of the "),a("Term",{attrs:{term:"workflow"}}),t._v(" and as many "),a("Term",{attrs:{term:"signal"}}),t._v(" methods as needed.")],1),t._v(" "),a("p",[t._v("The updated "),a("Term",{attrs:{term:"workflow"}}),t._v(" implementation demonstrates a few important Cadence concepts. The first is that "),a("Term",{attrs:{term:"workflow"}}),t._v(" is stateful and can\nhave fields of any complex type. Another is that the "),a("code",[t._v("Workflow.await")]),t._v(" function that blocks until the function it receives as a parameter evaluates to true. 
a("h2",{attrs:{id:"signal-from-command-line"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signal-from-command-line"}},[t._v("#")]),t._v(" Signal From Command Line")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("cadence: "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloSignal"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--tasklist")]),t._v(" HelloWorldTaskList "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_type")]),t._v(" HelloWorld::sayHello "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--execution_timeout")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("3600")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"World'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nStarted Workflow Id: HelloSignal, run Id: 6fa204cb-f478-469a-9432-78060b83b6cd\n')])])]),a("p",[t._v("Program output:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":53:56.120 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO  c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(": Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Let's send a "),a("Term",{attrs:{term:"signal"}}),t._v(" using the "),a("Term",{attrs:{term:"CLI"}}),t._v(":")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("cadence: "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloSignal"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" 
"),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Hi'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\n')])])]),a("p",[t._v("Program output:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":53:56.120 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(": Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":54:57.901 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),t._v(": Hi World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Try sending the same "),a("Term",{attrs:{term:"signal"}}),t._v(" with the same input again. Note that the output doesn't change. This happens because the await condition\ndoesn't unblock when it sees the same value. But a new greeting unblocks it:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("cadence: "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloSignal"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Welcome'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\n')])])]),a("p",[t._v("Program output:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":53:56.120 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(": Hello World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":54:57.901 "),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),t._v(": Hi World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":56:24.400 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("3")]),t._v(": Welcome World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Now shut down the "),a("Term",{attrs:{term:"worker"}}),t._v(" and send the same "),a("Term",{attrs:{term:"signal"}}),t._v(" again:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[t._v("cadence: "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloSignal"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Welcome'),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\n')])])]),a("p",[t._v("Note that sending "),a("Term",{attrs:{term:"signal",show:"signals"}}),t._v(" as well as starting "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" does not need a "),a("Term",{attrs:{term:"worker"}}),t._v(" running. The requests are queued inside the Cadence service.")],1),t._v(" "),a("p",[t._v("Now bring the "),a("Term",{attrs:{term:"worker"}}),t._v(" back. Note that it doesn't log anything besides the standard startup messages.\nThis occurs because it ignores the queued "),a("Term",{attrs:{term:"signal"}}),t._v(" that contains the same input as the current value of greeting.\nNote that the restart of the "),a("Term",{attrs:{term:"worker"}}),t._v(" didn't affect the "),a("Term",{attrs:{term:"workflow_execution"}}),t._v(". It is still blocked on the same line of code as before the failure.\nThis is the most important feature of Cadence. The "),a("Term",{attrs:{term:"workflow"}}),t._v(" code doesn't need to deal with "),a("Term",{attrs:{term:"worker"}}),t._v(" failures at all. 
Its state, including all local variables and threads, is fully recovered.")],1),t._v(" "),a("p",[t._v("Let's look at the line where the "),a("Term",{attrs:{term:"workflow"}}),t._v(" is blocked:")],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow stack "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloSignal"')]),t._v("\nQuery result:\n"),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"workflow-root: (BLOCKED on await)\ncom.uber.cadence.internal.sync.SyncDecisionContext.await(SyncDecisionContext.java:546)\ncom.uber.cadence.internal.sync.WorkflowInternal.await(WorkflowInternal.java:243)\ncom.uber.cadence.workflow.Workflow.await(Workflow.java:611)\ncom.uber.cadence.samples.hello.GettingStarted'),a("span",{pre:!0,attrs:{class:"token variable"}},[t._v("$HelloWorldImpl")]),t._v('.sayHello(GettingStarted.java:32)\nsun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nsun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)"')]),t._v("\n")])])]),a("p",[t._v("Yes, indeed, the "),a("Term",{attrs:{term:"workflow"}}),t._v(" is blocked on await. This feature works for any open "),a("Term",{attrs:{term:"workflow"}}),t._v(", greatly simplifying troubleshooting in production.\nLet's complete the "),a("Term",{attrs:{term:"workflow"}}),t._v(" by sending a "),a("Term",{attrs:{term:"signal"}}),t._v(' with a "Bye" greeting:')],1),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token number"}},[t._v("16")]),t._v(":58:22.962 "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO  c.u.c.samples.hello.GettingStarted - "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("4")]),t._v(": Bye World"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),a("p",[t._v("Note that the value of the count variable was not lost during the restart.")]),t._v(" "),a("p",[t._v("Also note that while a single "),a("Term",{attrs:{term:"worker"}}),t._v(" instance is used for this\nwalkthrough, any real production deployment has multiple "),a("Term",{attrs:{term:"worker"}}),t._v(" instances running. 
So any "),a("Term",{attrs:{term:"worker"}}),t._v(" failure or restart does not delay any\n"),a("Term",{attrs:{term:"workflow_execution"}}),t._v(" because it is just migrated to any other available "),a("Term",{attrs:{term:"worker"}}),t._v(".")],1),t._v(" "),a("h2",{attrs:{id:"signalwithstart-from-command-line"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signalwithstart-from-command-line"}},[t._v("#")]),t._v(" SignalWithStart From Command Line")]),t._v(" "),a("p",[t._v("You may not know if a "),a("Term",{attrs:{term:"workflow"}}),t._v(" is running and can accept a "),a("Term",{attrs:{term:"signal"}}),t._v(".\nThe signalWithStart feature allows you to send a "),a("Term",{attrs:{term:"signal"}}),t._v(" to the current "),a("Term",{attrs:{term:"workflow"}}),t._v(" instance if one exists or to create a new\nrun and then send the "),a("Term",{attrs:{term:"signal"}}),t._v(". "),a("code",[t._v("SignalWithStartWorkflow")]),t._v(" therefore doesn't take a "),a("Term",{attrs:{term:"run_ID"}}),t._v(" as a\nparameter.")],1),t._v(" "),a("p",[t._v("Learn more from the "),a("code",[t._v("--help")]),t._v(" manual:")]),t._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signalwithstart "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("-h")]),t._v("\nNAME:\n cadence workflow signalwithstart - signal the current "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("open")]),t._v(" workflow "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" exists, or attempt to start a new run based on IDResuePolicy and signals it\n\nUSAGE:\n cadence workflow signalwithstart "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("command options"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("arguments"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("..")]),t._v("."),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("..")]),t._v(".\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("..")]),t._v(".\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("..")]),t._v(".\n")])])]),a("h2",{attrs:{id:"signal-from-user-application-code"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signal-from-user-application-code"}},[t._v("#")]),t._v(" Signal from user/application code")]),t._v(" "),a("p",[t._v("You may want to signal workflows without running the command line.")]),t._v(" "),a("p",[t._v("The\n"),a("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client/latest/com/uber/cadence/client/WorkflowClient.html",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkflowClient"),a("OutboundLink")],1),t._v(" API allows you to send signal (or SignalWithStartWorkflow) from outside of the workflow\nto send a "),a("Term",{attrs:{term:"signal"}}),t._v(" to the current "),a("Term",{attrs:{term:"workflow"}}),t._v(".")],1),t._v(" 
"),a("p",[t._v("Note that when using "),a("code",[t._v("newWorkflowStub")]),t._v(" to signal a workflow, you MUST NOT passing WorkflowOptions.")]),t._v(" "),a("p",[t._v("The "),a("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/client/WorkflowClient.html#newWorkflowStub-java.lang.Class-com.uber.cadence.client.WorkflowOptions-",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkflowStub"),a("OutboundLink")],1),t._v(" with WorkflowOptions is only for starting workflows.")]),t._v(" "),a("p",[t._v("The "),a("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/client/WorkflowClient.html#newWorkflowStub-java.lang.Class-java.lang.String-",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkflowStub"),a("OutboundLink")],1),t._v(" without WorkflowOptions is for signal or "),a("a",{attrs:{href:"/docs/java-client/queries"}},[t._v("query")])])])}),[],!1,null,null,null);a.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/65.93c19f44.js b/assets/js/65.7bdfdbd8.js similarity index 99% rename from assets/js/65.93c19f44.js rename to assets/js/65.7bdfdbd8.js index 4a97815a4..5e90995b7 100644 --- a/assets/js/65.93c19f44.js +++ b/assets/js/65.7bdfdbd8.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[65],{372:function(t,e,a){"use strict";a.r(e);var r=a(0),s=Object(r.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"queries"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#queries"}},[t._v("#")]),t._v(" Queries")]),t._v(" "),e("p",[t._v("Query is to expose this internal state to the external world Cadence provides a synchronous "),e("Term",{attrs:{term:"query"}}),t._v(" feature. From the "),e("Term",{attrs:{term:"workflow"}}),t._v(" implementer point of view the "),e("Term",{attrs:{term:"query"}}),t._v(" is exposed as a synchronous callback that is invoked by external entities. Multiple such callbacks can be provided per "),e("Term",{attrs:{term:"workflow"}}),t._v(" type exposing different information to different external systems.")],1),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" callbacks must be read-only not mutating the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state in any way. The other limitation is that the "),e("Term",{attrs:{term:"query"}}),t._v(" callback cannot contain any blocking code. Both above limitations rule out ability to invoke "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" from the "),e("Term",{attrs:{term:"query"}}),t._v(" handlers.")],1),t._v(" "),e("h2",{attrs:{id:"built-in-query-stack-trace"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#built-in-query-stack-trace"}},[t._v("#")]),t._v(" Built-in Query: Stack Trace")]),t._v(" "),e("p",[t._v("If a "),e("Term",{attrs:{term:"workflow_execution"}}),t._v(" has been stuck at a state for longer than an expected period of time, you\nmight want to "),e("Term",{attrs:{term:"query"}}),t._v(" the current call stack. You can use the Cadence "),e("Term",{attrs:{term:"CLI"}}),t._v(" to perform this "),e("Term",{attrs:{term:"query"}}),t._v(". 
For\nexample:")],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace")])]),t._v(" "),e("p",[t._v("This command uses "),e("code",[t._v("__stack_trace")]),t._v(", which is a built-in "),e("Term",{attrs:{term:"query"}}),t._v(" type supported by the Cadence client\nlibrary. You can add custom "),e("Term",{attrs:{term:"query"}}),t._v(" types to handle "),e("Term",{attrs:{term:"query",show:"queries"}}),t._v(" such as "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" the current state of a\n"),e("Term",{attrs:{term:"workflow"}}),t._v(", or "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" how many "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" the "),e("Term",{attrs:{term:"workflow"}}),t._v(" has completed.")],1),t._v(" "),e("h2",{attrs:{id:"customized-query"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#customized-query"}},[t._v("#")]),t._v(" Customized Query")]),t._v(" "),e("p",[t._v("Cadence provides a "),e("Term",{attrs:{term:"query"}}),t._v(" feature that supports synchronously returning any information from a "),e("Term",{attrs:{term:"workflow"}}),t._v(" to an external caller.")],1),t._v(" "),e("p",[t._v("Interface "),e("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client/latest/com/uber/cadence/workflow/QueryMethod.html",target:"_blank",rel:"noopener noreferrer"}},[e("strong",[t._v("QueryMethod")]),e("OutboundLink")],1),t._v(" indicates that the method is a query method. Query method can be used to query a workflow state by external process at any time during its execution. This annotation applies only to workflow interface methods.")]),t._v(" "),e("p",[t._v("See the "),e("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/hello/HelloQuery.java",target:"_blank",rel:"noopener noreferrer"}},[e("Term",{attrs:{term:"workflow"}}),e("OutboundLink")],1),t._v(" example code :")]),t._v(" "),e("div",{staticClass:"language-java extra-class"},[e("pre",{pre:!0,attrs:{class:"language-java"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@SignalMethod")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateGreeting")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n 
"),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@QueryMethod")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("getCount")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" count "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("0")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("while")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Bye"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("equals")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n logger"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("++")]),t._v("count "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('": 
"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" oldGreeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("await")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("->")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Objects")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("equals")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" oldGreeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n logger"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("++")]),t._v("count "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('": "')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateGreeting")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token 
class-name"}},[t._v("String")]),t._v(" greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("this")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("greeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("getCount")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" count"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("The new "),e("code",[t._v("getCount")]),t._v(" method annotated with "),e("code",[t._v("@QueryMethod")]),t._v(" was added to the "),e("Term",{attrs:{term:"workflow"}}),t._v(" interface definition. It is allowed\nto have multiple "),e("Term",{attrs:{term:"query"}}),t._v(" methods per "),e("Term",{attrs:{term:"workflow"}}),t._v(" interface.")],1),t._v(" "),e("p",[t._v("The main restriction on the implementation of the "),e("Term",{attrs:{term:"query"}}),t._v(" method is that it is not allowed to modify "),e("Term",{attrs:{term:"workflow"}}),t._v(" state in any form.\nIt also is not allowed to block its thread in any way. 
It usually just returns a value derived from the fields of the "),e("Term",{attrs:{term:"workflow"}}),t._v(" object.")],1),t._v(" "),e("h2",{attrs:{id:"run-query-from-command-line"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#run-query-from-command-line"}},[t._v("#")]),t._v(" Run Query from Command Line")]),t._v(" "),e("p",[t._v("Let's run the updated "),e("Term",{attrs:{term:"worker"}}),t._v(" and send a couple "),e("Term",{attrs:{term:"signal",show:"signals"}}),t._v(" to it:")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence: "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow start "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--tasklist")]),t._v(" HelloWorldTaskList "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_type")]),t._v(" HelloWorld::sayHello "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--execution_timeout")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("3600")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"World'),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nStarted Workflow Id: HelloQuery, run Id: 1925f668-45b5-4405-8cba-74f7c68c3135\ncadence: '),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Hi'),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\ncadence: '),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal 
"),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Welcome'),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\n')])])]),e("p",[t._v("The "),e("Term",{attrs:{term:"worker"}}),t._v(" output:")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[e("span",{pre:!0,attrs:{class:"token number"}},[t._v("17")]),t._v(":35:50.485 "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(": Hello World"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("17")]),t._v(":36:10.483 "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),t._v(": Hi World"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("17")]),t._v(":36:16.204 "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("3")]),t._v(": Welcome World"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),e("p",[t._v("Now let's "),e("Term",{attrs:{term:"query"}}),t._v(" the "),e("Term",{attrs:{term:"workflow"}}),t._v(" using the "),e("Term",{attrs:{term:"CLI",show:""}})],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence: "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow query "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--query_type")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::getCount"')]),t._v("\n:query:Query: result as JSON:\n"),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("3")]),t._v("\n")])])]),e("p",[t._v("One limitation of the "),e("Term",{attrs:{term:"query"}}),t._v(" is that it requires a "),e("Term",{attrs:{term:"worker"}}),t._v(" process 
running because it is executing callback code.\nAn interesting feature of the "),e("Term",{attrs:{term:"query"}}),t._v(" is that it works for completed "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" as well. Let's complete the "),e("Term",{attrs:{term:"workflow"}}),t._v(' by sending "Bye" and '),e("Term",{attrs:{term:"query"}}),t._v(" it.")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence: "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Bye'),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\ncadence: '),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow query "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--query_type")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::getCount"')]),t._v("\n:query:Query: result as JSON:\n"),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("4")]),t._v("\n")])])]),e("p",[t._v("The "),e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" method can accept parameters. 
This might be useful if only part of the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state should be returned.")],1),t._v(" "),e("h2",{attrs:{id:"run-query-from-external-application-code"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#run-query-from-external-application-code"}},[t._v("#")]),t._v(" Run Query from external application code")]),t._v(" "),e("p",[t._v("The "),e("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/client/WorkflowClient.html#newWorkflowStub-java.lang.Class-java.lang.String-",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkflowStub"),e("OutboundLink")],1),t._v(" without WorkflowOptions is for signal or "),e("a",{attrs:{href:"/docs/java-client/queries"}},[t._v("query")])]),t._v(" "),e("h2",{attrs:{id:"consistent-query"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#consistent-query"}},[t._v("#")]),t._v(" Consistent Query")]),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" has two consistency levels, eventual and strong. Consider if you were to "),e("Term",{attrs:{term:"signal"}}),t._v(" a "),e("Term",{attrs:{term:"workflow"}}),t._v(" and then\nimmediately "),e("Term",{attrs:{term:"query"}}),t._v(" the "),e("Term",{attrs:{term:"workflow",show:""}})],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow signal -w my_workflow_id -r my_run_id -n signal_name -if ./input.json")])]),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state")])]),t._v(" "),e("p",[t._v("In this example if "),e("Term",{attrs:{term:"signal"}}),t._v(" were to change "),e("Term",{attrs:{term:"workflow"}}),t._v(" state, "),e("Term",{attrs:{term:"query"}}),t._v(" may or may not see that state update reflected\nin the "),e("Term",{attrs:{term:"query"}}),t._v(" result. This is what it means for "),e("Term",{attrs:{term:"query"}}),t._v(" to be eventually consistent.")],1),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" has another consistency level called strong consistency. A strongly consistent "),e("Term",{attrs:{term:"query"}}),t._v(" is guaranteed\nto be based on "),e("Term",{attrs:{term:"workflow"}}),t._v(" state which includes all "),e("Term",{attrs:{term:"event",show:"events"}}),t._v(" that came before the "),e("Term",{attrs:{term:"query"}}),t._v(" was issued. An "),e("Term",{attrs:{term:"event"}}),t._v("\nis considered to have come before a "),e("Term",{attrs:{term:"query"}}),t._v(" if the call creating the external "),e("Term",{attrs:{term:"event"}}),t._v(" returned success before\nthe "),e("Term",{attrs:{term:"query"}}),t._v(" was issued. 
External "),e("Term",{attrs:{term:"event",show:"events"}}),t._v(" which are created while the "),e("Term",{attrs:{term:"query"}}),t._v(" is outstanding may or may not\nbe reflected in the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state the "),e("Term",{attrs:{term:"query"}}),t._v(" result is based on.")],1),t._v(" "),e("p",[t._v("In order to run consistent "),e("Term",{attrs:{term:"query"}}),t._v(" through the "),e("Term",{attrs:{term:"CLI"}}),t._v(" do the following:")],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state --qcl strong")])]),t._v(" "),e("p",[t._v("In order to run a "),e("Term",{attrs:{term:"query"}}),t._v(" using application code, you need to use "),e("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client/latest/com/uber/cadence/WorkflowService.Iface.html#SignalWorkflowExecution-com.uber.cadence.SignalWorkflowExecutionRequest-",target:"_blank",rel:"noopener noreferrer"}},[t._v("service client"),e("OutboundLink")],1),t._v(".")],1),t._v(" "),e("p",[t._v("When using strongly consistent "),e("Term",{attrs:{term:"query"}}),t._v(" you should expect higher latency than eventually consistent "),e("Term",{attrs:{term:"query"}}),t._v(".")],1)])}),[],!1,null,null,null);e.default=s.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[65],{371:function(t,e,a){"use strict";a.r(e);var r=a(0),s=Object(r.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"queries"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#queries"}},[t._v("#")]),t._v(" Queries")]),t._v(" "),e("p",[t._v("Query is to expose this internal state to the external world Cadence provides a synchronous "),e("Term",{attrs:{term:"query"}}),t._v(" feature. From the "),e("Term",{attrs:{term:"workflow"}}),t._v(" implementer point of view the "),e("Term",{attrs:{term:"query"}}),t._v(" is exposed as a synchronous callback that is invoked by external entities. Multiple such callbacks can be provided per "),e("Term",{attrs:{term:"workflow"}}),t._v(" type exposing different information to different external systems.")],1),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" callbacks must be read-only not mutating the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state in any way. The other limitation is that the "),e("Term",{attrs:{term:"query"}}),t._v(" callback cannot contain any blocking code. Both above limitations rule out ability to invoke "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" from the "),e("Term",{attrs:{term:"query"}}),t._v(" handlers.")],1),t._v(" "),e("h2",{attrs:{id:"built-in-query-stack-trace"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#built-in-query-stack-trace"}},[t._v("#")]),t._v(" Built-in Query: Stack Trace")]),t._v(" "),e("p",[t._v("If a "),e("Term",{attrs:{term:"workflow_execution"}}),t._v(" has been stuck at a state for longer than an expected period of time, you\nmight want to "),e("Term",{attrs:{term:"query"}}),t._v(" the current call stack. You can use the Cadence "),e("Term",{attrs:{term:"CLI"}}),t._v(" to perform this "),e("Term",{attrs:{term:"query"}}),t._v(". 
For\nexample:")],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace")])]),t._v(" "),e("p",[t._v("This command uses "),e("code",[t._v("__stack_trace")]),t._v(", which is a built-in "),e("Term",{attrs:{term:"query"}}),t._v(" type supported by the Cadence client\nlibrary. You can add custom "),e("Term",{attrs:{term:"query"}}),t._v(" types to handle "),e("Term",{attrs:{term:"query",show:"queries"}}),t._v(" such as "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" the current state of a\n"),e("Term",{attrs:{term:"workflow"}}),t._v(", or "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" how many "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" the "),e("Term",{attrs:{term:"workflow"}}),t._v(" has completed.")],1),t._v(" "),e("h2",{attrs:{id:"customized-query"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#customized-query"}},[t._v("#")]),t._v(" Customized Query")]),t._v(" "),e("p",[t._v("Cadence provides a "),e("Term",{attrs:{term:"query"}}),t._v(" feature that supports synchronously returning any information from a "),e("Term",{attrs:{term:"workflow"}}),t._v(" to an external caller.")],1),t._v(" "),e("p",[t._v("Interface "),e("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client/latest/com/uber/cadence/workflow/QueryMethod.html",target:"_blank",rel:"noopener noreferrer"}},[e("strong",[t._v("QueryMethod")]),e("OutboundLink")],1),t._v(" indicates that the method is a query method. A query method can be used to query the workflow state from an external process at any time during the workflow execution. This annotation applies only to workflow interface methods.")]),t._v(" "),e("p",[t._v("See the "),e("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/hello/HelloQuery.java",target:"_blank",rel:"noopener noreferrer"}},[e("Term",{attrs:{term:"workflow"}}),e("OutboundLink")],1),t._v(" example code:")]),t._v(" "),e("div",{staticClass:"language-java extra-class"},[e("pre",{pre:!0,attrs:{class:"language-java"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n    "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n    "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n    "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@SignalMethod")]),t._v("\n    "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateGreeting")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n    
"),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@QueryMethod")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("getCount")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorldImpl")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloWorld")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" count "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("0")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("sayHello")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("while")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Bye"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("equals")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n logger"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("++")]),t._v("count "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('": 
"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" oldGreeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("await")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("->")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),e("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Objects")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("equals")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" oldGreeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n logger"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("info")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("++")]),t._v("count "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('": "')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateGreeting")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token 
class-name"}},[t._v("String")]),t._v(" greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("this")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("greeting "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" greeting"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("getCount")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" count"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("The new "),e("code",[t._v("getCount")]),t._v(" method annotated with "),e("code",[t._v("@QueryMethod")]),t._v(" was added to the "),e("Term",{attrs:{term:"workflow"}}),t._v(" interface definition. It is allowed\nto have multiple "),e("Term",{attrs:{term:"query"}}),t._v(" methods per "),e("Term",{attrs:{term:"workflow"}}),t._v(" interface.")],1),t._v(" "),e("p",[t._v("The main restriction on the implementation of the "),e("Term",{attrs:{term:"query"}}),t._v(" method is that it is not allowed to modify "),e("Term",{attrs:{term:"workflow"}}),t._v(" state in any form.\nIt also is not allowed to block its thread in any way. 
It usually just returns a value derived from the fields of the "),e("Term",{attrs:{term:"workflow"}}),t._v(" object.")],1),t._v(" "),e("h2",{attrs:{id:"run-query-from-command-line"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#run-query-from-command-line"}},[t._v("#")]),t._v(" Run Query from Command Line")]),t._v(" "),e("p",[t._v("Let's run the updated "),e("Term",{attrs:{term:"worker"}}),t._v(" and send a couple "),e("Term",{attrs:{term:"signal",show:"signals"}}),t._v(" to it:")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence: "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow start "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--tasklist")]),t._v(" HelloWorldTaskList "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_type")]),t._v(" HelloWorld::sayHello "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--execution_timeout")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("3600")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"World'),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nStarted Workflow Id: HelloQuery, run Id: 1925f668-45b5-4405-8cba-74f7c68c3135\ncadence: '),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Hi'),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\ncadence: '),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal 
"),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Welcome'),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\n')])])]),e("p",[t._v("The "),e("Term",{attrs:{term:"worker"}}),t._v(" output:")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[e("span",{pre:!0,attrs:{class:"token number"}},[t._v("17")]),t._v(":35:50.485 "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("1")]),t._v(": Hello World"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("17")]),t._v(":36:10.483 "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("2")]),t._v(": Hi World"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("17")]),t._v(":36:16.204 "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),t._v("workflow-root"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" INFO c.u.c.samples.hello.GettingStarted - "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("3")]),t._v(": Welcome World"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!")]),t._v("\n")])])]),e("p",[t._v("Now let's "),e("Term",{attrs:{term:"query"}}),t._v(" the "),e("Term",{attrs:{term:"workflow"}}),t._v(" using the "),e("Term",{attrs:{term:"CLI",show:""}})],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence: "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow query "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--query_type")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::getCount"')]),t._v("\n:query:Query: result as JSON:\n"),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("3")]),t._v("\n")])])]),e("p",[t._v("One limitation of the "),e("Term",{attrs:{term:"query"}}),t._v(" is that it requires a "),e("Term",{attrs:{term:"worker"}}),t._v(" process 
running because it is executing callback code.\nAn interesting feature of the "),e("Term",{attrs:{term:"query"}}),t._v(" is that it works for completed "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" as well. Let's complete the "),e("Term",{attrs:{term:"workflow"}}),t._v(' by sending "Bye" and then '),e("Term",{attrs:{term:"query"}}),t._v(" it.")],1),t._v(" "),e("div",{staticClass:"language-bash extra-class"},[e("pre",{pre:!0,attrs:{class:"language-bash"}},[e("code",[t._v("cadence: "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow signal "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--name")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::updateGreeting"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--input")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"Bye'),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("\\")]),t._v('"\nSignal workflow succeeded.\ncadence: '),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("docker")]),t._v(" run "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--network")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("host "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--rm")]),t._v(" ubercadence/cli:master "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--do")]),t._v(" test-domain workflow query "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--workflow_id")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloQuery"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token parameter variable"}},[t._v("--query_type")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloWorld::getCount"')]),t._v("\nQuery result as JSON:\n"),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("4")]),t._v("\n")])])]),e("p",[t._v("The "),e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" method can accept parameters. 
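For example, a hypothetical extension of the interface and implementation above (not part of the original sample) that lets the caller pass an argument:

```java
// In the workflow interface (hypothetical addition):
// the caller supplies a threshold and only receives the count when it exceeds it.
@QueryMethod
int getCountIfAbove(int threshold);

// In the workflow implementation: still read-only and non-blocking, as required.
@Override
public int getCountIfAbove(int threshold) {
    return count > threshold ? count : -1;
}
```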
This might be useful if only part of the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state should be returned.")],1),t._v(" "),e("h2",{attrs:{id:"run-query-from-external-application-code"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#run-query-from-external-application-code"}},[t._v("#")]),t._v(" Run Query from external application code")]),t._v(" "),e("p",[t._v("The "),e("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/client/WorkflowClient.html#newWorkflowStub-java.lang.Class-java.lang.String-",target:"_blank",rel:"noopener noreferrer"}},[t._v("WorkflowStub"),e("OutboundLink")],1),t._v(" created without WorkflowOptions can be used to signal or "),e("a",{attrs:{href:"/docs/java-client/queries"}},[t._v("query")]),t._v(" an existing workflow.")]),t._v(" "),e("h2",{attrs:{id:"consistent-query"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#consistent-query"}},[t._v("#")]),t._v(" Consistent Query")]),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" has two consistency levels: eventual and strong. Consider if you were to "),e("Term",{attrs:{term:"signal"}}),t._v(" a "),e("Term",{attrs:{term:"workflow"}}),t._v(" and then\nimmediately "),e("Term",{attrs:{term:"query"}}),t._v(" the "),e("Term",{attrs:{term:"workflow"}}),t._v(":")],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow signal -w my_workflow_id -r my_run_id -n signal_name -if ./input.json")])]),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state")])]),t._v(" "),e("p",[t._v("In this example, if the "),e("Term",{attrs:{term:"signal"}}),t._v(" were to change the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state, the "),e("Term",{attrs:{term:"query"}}),t._v(" may or may not see that state update reflected\nin the "),e("Term",{attrs:{term:"query"}}),t._v(" result. This is what it means for "),e("Term",{attrs:{term:"query"}}),t._v(" to be eventually consistent.")],1),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" has another consistency level called strong consistency. A strongly consistent "),e("Term",{attrs:{term:"query"}}),t._v(" is guaranteed\nto be based on "),e("Term",{attrs:{term:"workflow"}}),t._v(" state which includes all "),e("Term",{attrs:{term:"event",show:"events"}}),t._v(" that came before the "),e("Term",{attrs:{term:"query"}}),t._v(" was issued. An "),e("Term",{attrs:{term:"event"}}),t._v("\nis considered to have come before a "),e("Term",{attrs:{term:"query"}}),t._v(" if the call creating the external "),e("Term",{attrs:{term:"event"}}),t._v(" returned success before\nthe "),e("Term",{attrs:{term:"query"}}),t._v(" was issued. 
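As an illustration of issuing a strongly consistent query from application code, here is a hedged sketch. The request and enum names (QueryWorkflowRequest, WorkflowQuery, QueryConsistencyLevel) are assumed from the cadence-java-client Thrift IDL and should be verified against the javadoc; connection setup for the service client is elided because it varies by client version:

```java
import com.uber.cadence.QueryConsistencyLevel;
import com.uber.cadence.QueryWorkflowRequest;
import com.uber.cadence.QueryWorkflowResponse;
import com.uber.cadence.WorkflowExecution;
import com.uber.cadence.WorkflowQuery;
import com.uber.cadence.serviceclient.IWorkflowService;

public final class StrongQueryExample {
    // Hedged sketch: issue a strongly consistent query through the raw service client.
    static String queryCountStrongly(IWorkflowService service) throws Exception {
        QueryWorkflowRequest request = new QueryWorkflowRequest()
                .setDomain("test-domain")
                .setExecution(new WorkflowExecution().setWorkflowId("HelloQuery"))
                .setQuery(new WorkflowQuery().setQueryType("HelloWorld::getCount"))
                .setQueryConsistencyLevel(QueryConsistencyLevel.STRONG); // strong consistency
        QueryWorkflowResponse response = service.QueryWorkflow(request);
        return new String(response.getQueryResult()); // JSON-encoded result, e.g. "3"
    }
}
```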
External "),e("Term",{attrs:{term:"event",show:"events"}}),t._v(" which are created while the "),e("Term",{attrs:{term:"query"}}),t._v(" is outstanding may or may not\nbe reflected in the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state the "),e("Term",{attrs:{term:"query"}}),t._v(" result is based on.")],1),t._v(" "),e("p",[t._v("In order to run consistent "),e("Term",{attrs:{term:"query"}}),t._v(" through the "),e("Term",{attrs:{term:"CLI"}}),t._v(" do the following:")],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state --qcl strong")])]),t._v(" "),e("p",[t._v("In order to run a "),e("Term",{attrs:{term:"query"}}),t._v(" using application code, you need to use "),e("a",{attrs:{href:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client/latest/com/uber/cadence/WorkflowService.Iface.html#SignalWorkflowExecution-com.uber.cadence.SignalWorkflowExecutionRequest-",target:"_blank",rel:"noopener noreferrer"}},[t._v("service client"),e("OutboundLink")],1),t._v(".")],1),t._v(" "),e("p",[t._v("When using strongly consistent "),e("Term",{attrs:{term:"query"}}),t._v(" you should expect higher latency than eventually consistent "),e("Term",{attrs:{term:"query"}}),t._v(".")],1)])}),[],!1,null,null,null);e.default=s.exports}}]); \ No newline at end of file diff --git a/assets/js/66.a0a4a07a.js b/assets/js/66.1cd33ca3.js similarity index 99% rename from assets/js/66.a0a4a07a.js rename to assets/js/66.1cd33ca3.js index 29d1476ed..f7f92c1be 100644 --- a/assets/js/66.a0a4a07a.js +++ b/assets/js/66.1cd33ca3.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[66],{371:function(e,t,o){"use strict";o.r(t);var i=o(0),a=Object(i.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"activity-and-workflow-retries"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-and-workflow-retries"}},[e._v("#")]),e._v(" Activity and workflow retries")]),e._v(" "),t("p",[t("Term",{attrs:{term:"activity",show:"Activities"}}),e._v(" and "),t("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" can fail due to various intermediate conditions. In those cases, we want\nto retry the failed "),t("Term",{attrs:{term:"activity"}}),e._v(" or child "),t("Term",{attrs:{term:"workflow"}}),e._v(" or even the parent "),t("Term",{attrs:{term:"workflow"}}),e._v(". This can be achieved\nby supplying an optional "),t("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/common/RetryOptions.Builder.html#setInitialInterval-java.time.Duration-",target:"_blank",rel:"noopener noreferrer"}},[e._v("retry options"),t("OutboundLink")],1),e._v(".")],1),e._v(" "),t("blockquote",[t("p",[e._v("Note that sometimes it's also referred as RetryPolicy")])]),e._v(" "),t("h2",{attrs:{id:"retryoptions"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#retryoptions"}},[e._v("#")]),e._v(" RetryOptions")]),e._v(" "),t("p",[e._v("A RetryOptions includes the following.")]),e._v(" "),t("h3",{attrs:{id:"initialinterval"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#initialinterval"}},[e._v("#")]),e._v(" InitialInterval")]),e._v(" "),t("p",[e._v("Backoff interval for the first retry. 
If coefficient is 1.0 then it is used for all retries.\nRequired, no default value.")]),e._v(" "),t("h3",{attrs:{id:"backoffcoefficient"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#backoffcoefficient"}},[e._v("#")]),e._v(" BackoffCoefficient")]),e._v(" "),t("p",[e._v("Coefficient used to calculate the next retry backoff interval.\nThe next retry interval is previous interval multiplied by this coefficient.\nMust be 1 or larger. Default is 2.0.")]),e._v(" "),t("h3",{attrs:{id:"maximuminterval"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#maximuminterval"}},[e._v("#")]),e._v(" MaximumInterval")]),e._v(" "),t("p",[e._v("Maximum backoff interval between retries. Exponential backoff leads to interval increase.\nThis value is the cap of the interval. Default is 100x of initial interval.")]),e._v(" "),t("h3",{attrs:{id:"expirationinterval"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#expirationinterval"}},[e._v("#")]),e._v(" ExpirationInterval")]),e._v(" "),t("p",[e._v("Maximum time to retry. Either ExpirationInterval or MaximumAttempts is required.\nWhen exceeded the retries stop even if maximum retries is not reached yet.\nFirst (non-retry) attempt is unaffected by this field and is guaranteed to run\nfor the entirety of the workflow timeout duration (ExecutionStartToCloseTimeoutSeconds).")]),e._v(" "),t("h3",{attrs:{id:"maximumattempts"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#maximumattempts"}},[e._v("#")]),e._v(" MaximumAttempts")]),e._v(" "),t("p",[e._v("Maximum number of attempts. When exceeded the retries stop even if not expired yet.\nIf not set or set to 0, it means unlimited, and relies on ExpirationInterval to stop.\nEither MaximumAttempts or ExpirationInterval is required.")]),e._v(" "),t("h3",{attrs:{id:"nonretriableerrorreasons-via-setdonotretry"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#nonretriableerrorreasons-via-setdonotretry"}},[e._v("#")]),e._v(" NonRetriableErrorReasons(via setDoNotRetry)")]),e._v(" "),t("p",[e._v("Non-Retriable errors. This is optional. Cadence server will stop retry if error reason matches this list.\nWhen matching an exact match is used. So adding RuntimeException.class to this list is going to include only RuntimeException itself, not all of its subclasses. The reason for such behaviour is to be able to support server side retries without knowledge of Java exception hierarchy. When considering an exception type a cause of ActivityFailureException and ChildWorkflowFailureException is looked at.\nError and CancellationException are never retried and are not even passed to this filter.")]),e._v(" "),t("h2",{attrs:{id:"activity-timeout-usage"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-timeout-usage"}},[e._v("#")]),e._v(" Activity Timeout Usage")]),e._v(" "),t("p",[e._v("It's probably too complicated to learn how to set those timeouts by reading the above. There is an easy way to deal with it.")]),e._v(" "),t("p",[t("strong",[e._v("LocalActivity without retry")]),e._v(": Use ScheduleToClose for overall timeout")]),e._v(" "),t("p",[t("strong",[e._v("Regular Activity without retry")]),e._v(":")]),e._v(" "),t("ol",[t("li",[e._v("Use ScheduleToClose for overall timeout")]),e._v(" "),t("li",[e._v("Leave ScheduleToStart and StartToClose empty")]),e._v(" "),t("li",[e._v("If ScheduleToClose is too large(like 10 mins), then set Heartbeat timeout to a smaller value like 10s. 
Call the heartbeat API inside the activity regularly.")])]),e._v(" "),t("p",[t("strong",[e._v("LocalActivity with retry")]),e._v(":")]),e._v(" "),t("ol",[t("li",[e._v("Use ScheduleToClose as timeout of each attempt.")]),e._v(" "),t("li",[e._v("Use retryOptions.InitialInterval, retryOptions.BackoffCoefficient, retryOptions.MaximumInterval to control backoff.")]),e._v(" "),t("li",[e._v("Use retryOptions.ExpirationInterval as the overall timeout of all attempts.")]),e._v(" "),t("li",[e._v("Leave retryOptions.MaximumAttempts empty.")])]),e._v(" "),t("p",[t("strong",[e._v("Regular Activity with retry")]),e._v(":")]),e._v(" "),t("ol",[t("li",[e._v("Use ScheduleToClose as timeout of each attempt")]),e._v(" "),t("li",[e._v("Leave ScheduleToStart and StartToClose empty")]),e._v(" "),t("li",[e._v("If ScheduleToClose is too large (like 10 mins), then set Heartbeat timeout to a smaller value like 10s. Call the heartbeat API inside the activity regularly.")]),e._v(" "),t("li",[e._v("Use retryOptions.InitialInterval, retryOptions.BackoffCoefficient, retryOptions.MaximumInterval to control backoff.")]),e._v(" "),t("li",[e._v("Use retryOptions.ExpirationInterval as the overall timeout of all attempts.")]),e._v(" "),t("li",[e._v("Leave retryOptions.MaximumAttempts empty.")])]),e._v(" "),t("h2",{attrs:{id:"activity-timeout-internals"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-timeout-internals"}},[e._v("#")]),e._v(" Activity Timeout Internals")]),e._v(" "),t("h3",{attrs:{id:"basics-without-retry"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#basics-without-retry"}},[e._v("#")]),e._v(" Basics without Retry")]),e._v(" "),t("p",[e._v("Things are easier to understand in the world without retry, because that is where Cadence started.")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("ScheduleToClose timeout is the overall end-to-end timeout from a workflow's perspective.")])]),e._v(" "),t("li",[t("p",[e._v("ScheduleToStart timeout is the time allowed for an activity worker to pick up and start an activity. If this timeout is exceeded, the activity will return a ScheduleToStart timeout error/exception to the workflow")])]),e._v(" "),t("li",[t("p",[e._v("StartToClose timeout is the time that an activity is allowed to run. Exceeding this will return\na StartToClose timeout error to the workflow.")])]),e._v(" "),t("li",[t("p",[t("strong",[e._v("Requirement and defaults:")])]),e._v(" "),t("ul",[t("li",[e._v("Either ScheduleToClose is provided or both of ScheduleToStart and StartToClose are provided.")]),e._v(" "),t("li",[e._v("If only ScheduleToClose is provided, then ScheduleToStart and StartToClose default to it.")]),e._v(" "),t("li",[e._v("If only ScheduleToStart and StartToClose are provided, then "),t("code",[e._v("ScheduleToClose = ScheduleToStart + StartToClose")]),e._v(".")]),e._v(" "),t("li",[e._v("All of them are capped by workflowTimeout (e.g. if workflowTimeout is 1 hour, setting ScheduleToClose to 2 hours will still get 1 hour: "),t("code",[e._v("ScheduleToClose=Min(ScheduleToClose, workflowTimeout)")]),e._v(")")])])])]),e._v(" "),t("p",[t("strong",[e._v("So why do they exist?")])]),e._v(" "),t("p",[e._v("You may notice that ScheduleToClose is only useful when\n"),t("code",[e._v("ScheduleToClose < ScheduleToStart + StartToClose")]),e._v(". 
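A worked example with illustrative numbers:

```
ScheduleToStart = 60s and StartToClose = 60s
  → without ScheduleToClose, the end-to-end time is already capped at 60s + 60s = 120s
ScheduleToClose = 30s   → binding: the activity must finish within 30s end to end
ScheduleToClose = 180s  → meaningless: 180s >= 120s, so the 120s combination already applies
```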
Because if "),t("code",[e._v("ScheduleToClose >= ScheduleToStart+StartToClose")]),e._v(" the ScheduleToClose timeout is already enforced by the combination of the other two, and it become meaningless.")]),e._v(" "),t("p",[e._v("So the main use case of ScheduleToClose being less than the sum of two is that people want to limit the overall timeout of the activity but give more timeout for scheduleToStart or startToClose. "),t("strong",[e._v("This is extremely rare use case")]),e._v(".")]),e._v(" "),t("p",[e._v("Also the main use case that people want to distinguish ScheduleToStart and StartToClose is that the workflow may need to do some special handling for ScheduleToStart timeout error. "),t("strong",[e._v("This is also very rare use case")]),e._v(".")]),e._v(" "),t("p",[e._v("Therefore, you can understand why in TL;DR that I recommend only using "),t("strong",[e._v("ScheduleToClose")]),e._v(" but leave the other two empty. Because only in some rare cases you may need it. If you can't think of the use case, then you do not need it.")]),e._v(" "),t("p",[e._v("LocalActivity doesn't have ScheduleToStart/StartToClose because it's started directly inside workflow worker without server scheduling involved.")]),e._v(" "),t("h3",{attrs:{id:"heartbeat-timeout"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#heartbeat-timeout"}},[e._v("#")]),e._v(" Heartbeat timeout")]),e._v(" "),t("p",[e._v("Heartbeat is very important for long running activity, to prevent it from getting stuck. Not only bugs can cause activity getting stuck, regular deployment/host restart/failure could also cause it. Because without heartbeat, Cadence server couldn't know whether or not the activity is still being worked on. See more details about here https://stackoverflow.com/questions/65118584/solutions-to-stuck-timers-activities-in-cadence-swf-stepfunctions/65118585#65118585")]),e._v(" "),t("h3",{attrs:{id:"retryoptions-and-activity-with-retry"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#retryoptions-and-activity-with-retry"}},[e._v("#")]),e._v(" RetryOptions and Activity with Retry")]),e._v(" "),t("p",[e._v("First of all, here RetryOptions is for "),t("code",[e._v("server side")]),e._v(" backoff retry -- meaning that the retry is managed automatically by Cadence without interacting with workflows. Because retry is managed by Cadence, the activity has to be specially handled in Cadence history that the started event can not written until the activity is closed. Here is some reference: https://stackoverflow.com/questions/65113363/why-an-activity-task-is-scheduled-but-not-started/65113365#65113365")]),e._v(" "),t("p",[e._v("In fact, workflow can do "),t("code",[e._v("client side")]),e._v(" retry on their own. This means workflow will be managing the retry logic. You can write your own retry function, or there is some helper function in SDK, like "),t("code",[e._v("Workflow.retry")]),e._v(" in Cadence-java-client. Client side retry will show all start events immediately, but there will be many events in the history when retrying for a single activity. It's not recommended because of performance issue.")]),e._v(" "),t("p",[e._v("So what do the options mean:")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("ExpirationInterval:")]),e._v(" "),t("ul",[t("li",[e._v("It replaces the ScheduleToClose timeout to become the actual overall timeout of the activity for all attempts.")]),e._v(" "),t("li",[e._v("It's also capped to workflow timeout like other three timeout options. 
"),t("code",[e._v("ScheduleToClose = Min(ScheduleToClose, workflowTimeout)")])]),e._v(" "),t("li",[e._v("The timeout of each attempt is StartToClose, but StartToClose defaults to ScheduleToClose like explanation above.")]),e._v(" "),t("li",[e._v("ScheduleToClose will be extended to ExpirationInterval:\n"),t("code",[e._v("ScheduleToClose = Max(ScheduleToClose, ExpirationInterval)")]),e._v(", and this happens before ScheduleToClose is copied to ScheduleToClose and StartToClose.")])])]),e._v(" "),t("li",[t("p",[e._v("InitialInterval: the interval of first retry")])]),e._v(" "),t("li",[t("p",[e._v("BackoffCoefficient: self explained")])]),e._v(" "),t("li",[t("p",[e._v("MaximumInterval: maximum of the interval during retry")])]),e._v(" "),t("li",[t("p",[e._v("MaximumAttempts: the maximum attempts. If existing with ExpirationInterval, then retry stops when either one of them is exceeded.")])]),e._v(" "),t("li",[t("p",[t("strong",[e._v("Requirements and defaults")]),e._v(":")])]),e._v(" "),t("li",[t("p",[e._v("Either MaximumAttempts or ExpirationInterval is required. ExpirationInterval is set to workflowTimeout if not provided.")])])]),e._v(" "),t("p",[e._v("Since ExpirationInterval is always there, and in fact it's more useful. And I think it's quite confusing to use MaximumAttempts, so I would recommend just use ExpirationInterval. Unless you really need it.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[66],{370:function(e,t,o){"use strict";o.r(t);var i=o(0),a=Object(i.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"activity-and-workflow-retries"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-and-workflow-retries"}},[e._v("#")]),e._v(" Activity and workflow retries")]),e._v(" "),t("p",[t("Term",{attrs:{term:"activity",show:"Activities"}}),e._v(" and "),t("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" can fail due to various intermediate conditions. In those cases, we want\nto retry the failed "),t("Term",{attrs:{term:"activity"}}),e._v(" or child "),t("Term",{attrs:{term:"workflow"}}),e._v(" or even the parent "),t("Term",{attrs:{term:"workflow"}}),e._v(". This can be achieved\nby supplying an optional "),t("a",{attrs:{href:"https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/common/RetryOptions.Builder.html#setInitialInterval-java.time.Duration-",target:"_blank",rel:"noopener noreferrer"}},[e._v("retry options"),t("OutboundLink")],1),e._v(".")],1),e._v(" "),t("blockquote",[t("p",[e._v("Note that sometimes it's also referred as RetryPolicy")])]),e._v(" "),t("h2",{attrs:{id:"retryoptions"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#retryoptions"}},[e._v("#")]),e._v(" RetryOptions")]),e._v(" "),t("p",[e._v("A RetryOptions includes the following.")]),e._v(" "),t("h3",{attrs:{id:"initialinterval"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#initialinterval"}},[e._v("#")]),e._v(" InitialInterval")]),e._v(" "),t("p",[e._v("Backoff interval for the first retry. 
If coefficient is 1.0 then it is used for all retries.\nRequired, no default value.")]),e._v(" "),t("h3",{attrs:{id:"backoffcoefficient"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#backoffcoefficient"}},[e._v("#")]),e._v(" BackoffCoefficient")]),e._v(" "),t("p",[e._v("Coefficient used to calculate the next retry backoff interval.\nThe next retry interval is previous interval multiplied by this coefficient.\nMust be 1 or larger. Default is 2.0.")]),e._v(" "),t("h3",{attrs:{id:"maximuminterval"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#maximuminterval"}},[e._v("#")]),e._v(" MaximumInterval")]),e._v(" "),t("p",[e._v("Maximum backoff interval between retries. Exponential backoff leads to interval increase.\nThis value is the cap of the interval. Default is 100x of initial interval.")]),e._v(" "),t("h3",{attrs:{id:"expirationinterval"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#expirationinterval"}},[e._v("#")]),e._v(" ExpirationInterval")]),e._v(" "),t("p",[e._v("Maximum time to retry. Either ExpirationInterval or MaximumAttempts is required.\nWhen exceeded the retries stop even if maximum retries is not reached yet.\nFirst (non-retry) attempt is unaffected by this field and is guaranteed to run\nfor the entirety of the workflow timeout duration (ExecutionStartToCloseTimeoutSeconds).")]),e._v(" "),t("h3",{attrs:{id:"maximumattempts"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#maximumattempts"}},[e._v("#")]),e._v(" MaximumAttempts")]),e._v(" "),t("p",[e._v("Maximum number of attempts. When exceeded the retries stop even if not expired yet.\nIf not set or set to 0, it means unlimited, and relies on ExpirationInterval to stop.\nEither MaximumAttempts or ExpirationInterval is required.")]),e._v(" "),t("h3",{attrs:{id:"nonretriableerrorreasons-via-setdonotretry"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#nonretriableerrorreasons-via-setdonotretry"}},[e._v("#")]),e._v(" NonRetriableErrorReasons(via setDoNotRetry)")]),e._v(" "),t("p",[e._v("Non-Retriable errors. This is optional. Cadence server will stop retry if error reason matches this list.\nWhen matching an exact match is used. So adding RuntimeException.class to this list is going to include only RuntimeException itself, not all of its subclasses. The reason for such behaviour is to be able to support server side retries without knowledge of Java exception hierarchy. When considering an exception type a cause of ActivityFailureException and ChildWorkflowFailureException is looked at.\nError and CancellationException are never retried and are not even passed to this filter.")]),e._v(" "),t("h2",{attrs:{id:"activity-timeout-usage"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-timeout-usage"}},[e._v("#")]),e._v(" Activity Timeout Usage")]),e._v(" "),t("p",[e._v("It's probably too complicated to learn how to set those timeouts by reading the above. There is an easy way to deal with it.")]),e._v(" "),t("p",[t("strong",[e._v("LocalActivity without retry")]),e._v(": Use ScheduleToClose for overall timeout")]),e._v(" "),t("p",[t("strong",[e._v("Regular Activity without retry")]),e._v(":")]),e._v(" "),t("ol",[t("li",[e._v("Use ScheduleToClose for overall timeout")]),e._v(" "),t("li",[e._v("Leave ScheduleToStart and StartToClose empty")]),e._v(" "),t("li",[e._v("If ScheduleToClose is too large(like 10 mins), then set Heartbeat timeout to a smaller value like 10s. 
Call the heartbeat API inside the activity regularly.")])]),e._v(" "),t("p",[t("strong",[e._v("LocalActivity with retry")]),e._v(":")]),e._v(" "),t("ol",[t("li",[e._v("Use ScheduleToClose as timeout of each attempt.")]),e._v(" "),t("li",[e._v("Use retryOptions.InitialInterval, retryOptions.BackoffCoefficient, retryOptions.MaximumInterval to control backoff.")]),e._v(" "),t("li",[e._v("Use retryOptions.ExpirationInterval as the overall timeout of all attempts.")]),e._v(" "),t("li",[e._v("Leave retryOptions.MaximumAttempts empty.")])]),e._v(" "),t("p",[t("strong",[e._v("Regular Activity with retry")]),e._v(":")]),e._v(" "),t("ol",[t("li",[e._v("Use ScheduleToClose as timeout of each attempt")]),e._v(" "),t("li",[e._v("Leave ScheduleToStart and StartToClose empty")]),e._v(" "),t("li",[e._v("If ScheduleToClose is too large (like 10 mins), then set Heartbeat timeout to a smaller value like 10s. Call the heartbeat API inside the activity regularly.")]),e._v(" "),t("li",[e._v("Use retryOptions.InitialInterval, retryOptions.BackoffCoefficient, retryOptions.MaximumInterval to control backoff.")]),e._v(" "),t("li",[e._v("Use retryOptions.ExpirationInterval as the overall timeout of all attempts.")]),e._v(" "),t("li",[e._v("Leave retryOptions.MaximumAttempts empty.")])]),e._v(" "),t("h2",{attrs:{id:"activity-timeout-internals"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-timeout-internals"}},[e._v("#")]),e._v(" Activity Timeout Internals")]),e._v(" "),t("h3",{attrs:{id:"basics-without-retry"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#basics-without-retry"}},[e._v("#")]),e._v(" Basics without Retry")]),e._v(" "),t("p",[e._v("Things are easier to understand in the world without retry, because that is where Cadence started.")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("ScheduleToClose timeout is the overall end-to-end timeout from a workflow's perspective.")])]),e._v(" "),t("li",[t("p",[e._v("ScheduleToStart timeout is the time allowed for an activity worker to pick up and start an activity. If this timeout is exceeded, the activity will return a ScheduleToStart timeout error/exception to the workflow")])]),e._v(" "),t("li",[t("p",[e._v("StartToClose timeout is the time that an activity is allowed to run. Exceeding this will return\na StartToClose timeout error to the workflow.")])]),e._v(" "),t("li",[t("p",[t("strong",[e._v("Requirement and defaults:")])]),e._v(" "),t("ul",[t("li",[e._v("Either ScheduleToClose is provided or both of ScheduleToStart and StartToClose are provided.")]),e._v(" "),t("li",[e._v("If only ScheduleToClose is provided, then ScheduleToStart and StartToClose default to it.")]),e._v(" "),t("li",[e._v("If only ScheduleToStart and StartToClose are provided, then "),t("code",[e._v("ScheduleToClose = ScheduleToStart + StartToClose")]),e._v(".")]),e._v(" "),t("li",[e._v("All of them are capped by workflowTimeout (e.g. if workflowTimeout is 1 hour, setting ScheduleToClose to 2 hours will still get 1 hour: "),t("code",[e._v("ScheduleToClose=Min(ScheduleToClose, workflowTimeout)")]),e._v(")")])])])]),e._v(" "),t("p",[t("strong",[e._v("So why do they exist?")])]),e._v(" "),t("p",[e._v("You may notice that ScheduleToClose is only useful when\n"),t("code",[e._v("ScheduleToClose < ScheduleToStart + StartToClose")]),e._v(". 
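A worked example with illustrative numbers:

```
ScheduleToStart = 60s and StartToClose = 60s
  → without ScheduleToClose, the end-to-end time is already capped at 60s + 60s = 120s
ScheduleToClose = 30s   → binding: the activity must finish within 30s end to end
ScheduleToClose = 180s  → meaningless: 180s >= 120s, so the 120s combination already applies
```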
Because if "),t("code",[e._v("ScheduleToClose >= ScheduleToStart+StartToClose")]),e._v(" the ScheduleToClose timeout is already enforced by the combination of the other two, and it become meaningless.")]),e._v(" "),t("p",[e._v("So the main use case of ScheduleToClose being less than the sum of two is that people want to limit the overall timeout of the activity but give more timeout for scheduleToStart or startToClose. "),t("strong",[e._v("This is extremely rare use case")]),e._v(".")]),e._v(" "),t("p",[e._v("Also the main use case that people want to distinguish ScheduleToStart and StartToClose is that the workflow may need to do some special handling for ScheduleToStart timeout error. "),t("strong",[e._v("This is also very rare use case")]),e._v(".")]),e._v(" "),t("p",[e._v("Therefore, you can understand why in TL;DR that I recommend only using "),t("strong",[e._v("ScheduleToClose")]),e._v(" but leave the other two empty. Because only in some rare cases you may need it. If you can't think of the use case, then you do not need it.")]),e._v(" "),t("p",[e._v("LocalActivity doesn't have ScheduleToStart/StartToClose because it's started directly inside workflow worker without server scheduling involved.")]),e._v(" "),t("h3",{attrs:{id:"heartbeat-timeout"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#heartbeat-timeout"}},[e._v("#")]),e._v(" Heartbeat timeout")]),e._v(" "),t("p",[e._v("Heartbeat is very important for long running activity, to prevent it from getting stuck. Not only bugs can cause activity getting stuck, regular deployment/host restart/failure could also cause it. Because without heartbeat, Cadence server couldn't know whether or not the activity is still being worked on. See more details about here https://stackoverflow.com/questions/65118584/solutions-to-stuck-timers-activities-in-cadence-swf-stepfunctions/65118585#65118585")]),e._v(" "),t("h3",{attrs:{id:"retryoptions-and-activity-with-retry"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#retryoptions-and-activity-with-retry"}},[e._v("#")]),e._v(" RetryOptions and Activity with Retry")]),e._v(" "),t("p",[e._v("First of all, here RetryOptions is for "),t("code",[e._v("server side")]),e._v(" backoff retry -- meaning that the retry is managed automatically by Cadence without interacting with workflows. Because retry is managed by Cadence, the activity has to be specially handled in Cadence history that the started event can not written until the activity is closed. Here is some reference: https://stackoverflow.com/questions/65113363/why-an-activity-task-is-scheduled-but-not-started/65113365#65113365")]),e._v(" "),t("p",[e._v("In fact, workflow can do "),t("code",[e._v("client side")]),e._v(" retry on their own. This means workflow will be managing the retry logic. You can write your own retry function, or there is some helper function in SDK, like "),t("code",[e._v("Workflow.retry")]),e._v(" in Cadence-java-client. Client side retry will show all start events immediately, but there will be many events in the history when retrying for a single activity. It's not recommended because of performance issue.")]),e._v(" "),t("p",[e._v("So what do the options mean:")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("ExpirationInterval:")]),e._v(" "),t("ul",[t("li",[e._v("It replaces the ScheduleToClose timeout to become the actual overall timeout of the activity for all attempts.")]),e._v(" "),t("li",[e._v("It's also capped to workflow timeout like other three timeout options. 
"),t("code",[e._v("ScheduleToClose = Min(ScheduleToClose, workflowTimeout)")])]),e._v(" "),t("li",[e._v("The timeout of each attempt is StartToClose, but StartToClose defaults to ScheduleToClose like explanation above.")]),e._v(" "),t("li",[e._v("ScheduleToClose will be extended to ExpirationInterval:\n"),t("code",[e._v("ScheduleToClose = Max(ScheduleToClose, ExpirationInterval)")]),e._v(", and this happens before ScheduleToClose is copied to ScheduleToClose and StartToClose.")])])]),e._v(" "),t("li",[t("p",[e._v("InitialInterval: the interval of first retry")])]),e._v(" "),t("li",[t("p",[e._v("BackoffCoefficient: self explained")])]),e._v(" "),t("li",[t("p",[e._v("MaximumInterval: maximum of the interval during retry")])]),e._v(" "),t("li",[t("p",[e._v("MaximumAttempts: the maximum attempts. If existing with ExpirationInterval, then retry stops when either one of them is exceeded.")])]),e._v(" "),t("li",[t("p",[t("strong",[e._v("Requirements and defaults")]),e._v(":")])]),e._v(" "),t("li",[t("p",[e._v("Either MaximumAttempts or ExpirationInterval is required. ExpirationInterval is set to workflowTimeout if not provided.")])])]),e._v(" "),t("p",[e._v("Since ExpirationInterval is always there, and in fact it's more useful. And I think it's quite confusing to use MaximumAttempts, so I would recommend just use ExpirationInterval. Unless you really need it.")])])}),[],!1,null,null,null);t.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/67.e82812de.js b/assets/js/67.43fb7758.js similarity index 99% rename from assets/js/67.e82812de.js rename to assets/js/67.43fb7758.js index 1dcb70d6a..893ed04eb 100644 --- a/assets/js/67.e82812de.js +++ b/assets/js/67.43fb7758.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[67],{373:function(t,s,a){"use strict";a.r(s);var n=a(0),e=Object(n.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[s("h1",{attrs:{id:"child-workflows"}},[s("a",{staticClass:"header-anchor",attrs:{href:"#child-workflows"}},[t._v("#")]),t._v(" Child workflows")]),t._v(" "),s("p",[t._v("Besides "),s("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", a "),s("Term",{attrs:{term:"workflow"}}),t._v(" can also orchestrate other "),s("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(".")],1),t._v(" "),s("p",[s("code",[t._v("workflow.ExecuteChildWorkflow")]),t._v(" enables the scheduling of other "),s("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" from within a "),s("Term",{attrs:{term:"workflow",show:"workflow"}}),t._v("'s\nimplementation. 
The parent "),s("Term",{attrs:{term:"workflow"}}),t._v(" has the ability to monitor and impact the lifecycle of the child\n"),s("Term",{attrs:{term:"workflow"}}),t._v(", similar to the way it does for an "),s("Term",{attrs:{term:"activity"}}),t._v(" that it invoked.")],1),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Workflows are stateful. So a new stub must be created for each new child.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// This is a non blocking call that returns immediately.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v('// Use child.composeGreeting("Hello", name) to call synchronously.')]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Promise")]),s("span",{pre:!0,attrs:{class:"token generics"}},[s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v(" greeting "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child"),s("span",{pre:!0,attrs:{class:"token 
operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Do something else here.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// blocks waiting for the child to complete.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// This example shows how parent workflow return right after starting a child workflow,")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// and let the child run itself.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("demoAsyncChildRun")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// non blocking call that initiated child workflow")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" 
name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// instead of using greeting.get() to block till child complete,")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// sometimes we just want to return parent immediately and keep child running")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Promise")]),s("span",{pre:!0,attrs:{class:"token generics"}},[s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowExecution")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v(" childPromise "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getWorkflowExecution")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n childPromise"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// block until child started,")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// otherwise child may not start because parent complete first.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"let child run, parent just return"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[s("code",[t._v("Workflow.newChildWorkflowStub")]),t._v(" returns a client-side stub that implements a child "),s("Term",{attrs:{term:"workflow"}}),t._v(" interface.\nIt takes a child "),s("Term",{attrs:{term:"workflow"}}),t._v(" type and optional child "),s("Term",{attrs:{term:"workflow"}}),t._v(" options as arguments. "),s("Term",{attrs:{term:"workflow",show:"Workflow"}}),t._v(" options may be needed to override\nthe timeouts and "),s("Term",{attrs:{term:"task_list"}}),t._v(" if they differ from the ones defined in the "),s("code",[t._v("@WorkflowMethod")]),t._v(" annotation or parent "),s("Term",{attrs:{term:"workflow"}}),t._v(".")],1),t._v(" "),s("p",[t._v("The first call to the child "),s("Term",{attrs:{term:"workflow"}}),t._v(" stub must always be to a method annotated with "),s("code",[t._v("@WorkflowMethod")]),t._v(". Similar to "),s("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", a call\ncan be made synchronous or asynchronous by using "),s("code",[t._v("Async#function")]),t._v(" or "),s("code",[t._v("Async#procedure")]),t._v(". The synchronous call blocks until a child "),s("Term",{attrs:{term:"workflow"}}),t._v(" completes. 
The first call to the child workflow stub must always be to a method annotated with `@WorkflowMethod`. Similar to activities, a call can be made synchronous or asynchronous by using `Async#function` or `Async#procedure`. A synchronous call blocks until the child workflow completes. An asynchronous call returns a `Promise` that can be used to wait for the completion. After an async call returns the stub, it can be used to send signals to the child by calling methods annotated with `@SignalMethod`. Querying a child workflow by calling methods annotated with `@QueryMethod` from within workflow code is not supported. However, queries can be done from activities using the provided `WorkflowClient` stub.
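For example, an activity can be handed the child's workflow ID and query it through a client stub. A minimal sketch, assuming the child interface declares a hypothetical `@QueryMethod` named `getStatus` and that a `WorkflowClient` is injected into a hypothetical activity implementation:

```java
public class StatusCheckActivitiesImpl implements StatusCheckActivities {

    private final WorkflowClient workflowClient;

    public StatusCheckActivitiesImpl(WorkflowClient workflowClient) {
        this.workflowClient = workflowClient;
    }

    @Override
    public String checkChildStatus(String childWorkflowId) {
        // Bind a client-side stub to the running child execution by its workflow ID.
        GreetingChild child = workflowClient.newWorkflowStub(GreetingChild.class, childWorkflowId);
        // Invoking a @QueryMethod through a WorkflowClient stub issues a query.
        return child.getStatus(); // hypothetical @QueryMethod on GreetingChild
    }
}
```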
"),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child1"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Both children will run concurrently.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child2 "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Promise")]),s("span",{pre:!0,attrs:{class:"token generics"}},[s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v(" greeting2 "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child2"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Bye"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Do something else here.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"First: "')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting1"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('", second: "')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting2"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("To send a "),s("Term",{attrs:{term:"signal"}}),t._v(" to a child, call a method annotated with "),s("code",[t._v("@SignalMethod")]),t._v(":")],1),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@SignalMethod")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" 
"),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Promise")]),s("span",{pre:!0,attrs:{class:"token generics"}},[s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v(" greeting "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n child"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Cadence"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("Calling methods annotated with "),s("code",[t._v("@QueryMethod")]),t._v(" is not allowed from within 
"),s("Term",{attrs:{term:"workflow"}}),t._v(" code.")],1)])}),[],!1,null,null,null);s.default=e.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[67],{372:function(t,s,a){"use strict";a.r(s);var n=a(0),e=Object(n.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[s("h1",{attrs:{id:"child-workflows"}},[s("a",{staticClass:"header-anchor",attrs:{href:"#child-workflows"}},[t._v("#")]),t._v(" Child workflows")]),t._v(" "),s("p",[t._v("Besides "),s("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", a "),s("Term",{attrs:{term:"workflow"}}),t._v(" can also orchestrate other "),s("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(".")],1),t._v(" "),s("p",[s("code",[t._v("workflow.ExecuteChildWorkflow")]),t._v(" enables the scheduling of other "),s("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" from within a "),s("Term",{attrs:{term:"workflow",show:"workflow"}}),t._v("'s\nimplementation. The parent "),s("Term",{attrs:{term:"workflow"}}),t._v(" has the ability to monitor and impact the lifecycle of the child\n"),s("Term",{attrs:{term:"workflow"}}),t._v(", similar to the way it does for an "),s("Term",{attrs:{term:"activity"}}),t._v(" that it invoked.")],1),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Workflows are stateful. 
So a new stub must be created for each new child.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// This is a non blocking call that returns immediately.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v('// Use child.composeGreeting("Hello", name) to call synchronously.')]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Promise")]),s("span",{pre:!0,attrs:{class:"token generics"}},[s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v(" greeting "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Do something else here.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// blocks waiting for the child to complete.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// This example shows how parent workflow return right after starting a child workflow,")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// and let the child run itself.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("demoAsyncChildRun")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// non blocking call that initiated child workflow")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// instead of using greeting.get() to block till child complete,")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// sometimes we just want to return parent immediately and keep child running")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Promise")]),s("span",{pre:!0,attrs:{class:"token generics"}},[s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowExecution")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v(" childPromise "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getWorkflowExecution")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n childPromise"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" 
"),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// block until child started,")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// otherwise child may not start because parent complete first.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"let child run, parent just return"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[s("code",[t._v("Workflow.newChildWorkflowStub")]),t._v(" returns a client-side stub that implements a child "),s("Term",{attrs:{term:"workflow"}}),t._v(" interface.\nIt takes a child "),s("Term",{attrs:{term:"workflow"}}),t._v(" type and optional child "),s("Term",{attrs:{term:"workflow"}}),t._v(" options as arguments. "),s("Term",{attrs:{term:"workflow",show:"Workflow"}}),t._v(" options may be needed to override\nthe timeouts and "),s("Term",{attrs:{term:"task_list"}}),t._v(" if they differ from the ones defined in the "),s("code",[t._v("@WorkflowMethod")]),t._v(" annotation or parent "),s("Term",{attrs:{term:"workflow"}}),t._v(".")],1),t._v(" "),s("p",[t._v("The first call to the child "),s("Term",{attrs:{term:"workflow"}}),t._v(" stub must always be to a method annotated with "),s("code",[t._v("@WorkflowMethod")]),t._v(". Similar to "),s("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", a call\ncan be made synchronous or asynchronous by using "),s("code",[t._v("Async#function")]),t._v(" or "),s("code",[t._v("Async#procedure")]),t._v(". The synchronous call blocks until a child "),s("Term",{attrs:{term:"workflow"}}),t._v(" completes. The asynchronous call\nreturns a "),s("code",[t._v("Promise")]),t._v(" that can be used to wait for the completion. After an async call returns the stub, it can be used to send "),s("Term",{attrs:{term:"signal",show:"signals"}}),t._v(" to the child\nby calling methods annotated with "),s("code",[t._v("@SignalMethod")]),t._v(". "),s("Term",{attrs:{term:"query",show:"Querying"}}),t._v(" a child "),s("Term",{attrs:{term:"workflow"}}),t._v(" by calling methods annotated with "),s("code",[t._v("@QueryMethod")]),t._v("\nfrom within "),s("Term",{attrs:{term:"workflow"}}),t._v(" code is not supported. 
However, "),s("Term",{attrs:{term:"query",show:"queries"}}),t._v(" can be done from "),s("Term",{attrs:{term:"activity",show:"activities"}}),t._v("\nusing the provided "),s("code",[t._v("WorkflowClient")]),t._v(" stub.")],1),t._v(" "),s("p",[t._v("Running two children in parallel:")]),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Workflows are stateful, so a new stub must be created for each new child.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child1 "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Promise")]),s("span",{pre:!0,attrs:{class:"token generics"}},[s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v(" greeting1 "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child1"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Both children will run concurrently.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child2 "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Promise")]),s("span",{pre:!0,attrs:{class:"token generics"}},[s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v(" greeting2 "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child2"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Bye"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Do something else here.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"First: "')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting1"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('", second: "')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" greeting2"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("To send a "),s("Term",{attrs:{term:"signal"}}),t._v(" to a child, call a method annotated with "),s("code",[t._v("@SignalMethod")]),t._v(":")],1),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@SignalMethod")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child 
"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Promise")]),s("span",{pre:!0,attrs:{class:"token generics"}},[s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("<")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(">")])]),t._v(" greeting "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Async")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("function")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("child"),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("::")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n child"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("updateName")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Cadence"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("Calling methods annotated with "),s("code",[t._v("@QueryMethod")]),t._v(" is not allowed from within "),s("Term",{attrs:{term:"workflow"}}),t._v(" code.")],1)])}),[],!1,null,null,null);s.default=e.exports}}]); \ No newline at end of file diff --git a/assets/js/68.d29689ec.js b/assets/js/68.83175e2b.js similarity index 99% rename from assets/js/68.d29689ec.js rename to assets/js/68.83175e2b.js index ef0abf487..77623e7be 100644 --- a/assets/js/68.d29689ec.js +++ b/assets/js/68.83175e2b.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[68],{374:function(t,s,a){"use strict";a.r(s);var n=a(0),e=Object(n.a)({},(function(){var 
# Exception Handling

By default, exceptions thrown by an activity are received by the workflow wrapped in a `com.uber.cadence.workflow.ActivityFailureException`.

Exceptions thrown by a child workflow are received by the parent workflow wrapped in a `com.uber.cadence.workflow.ChildWorkflowFailureException`.

Exceptions thrown by a workflow are received by the workflow client wrapped in a `com.uber.cadence.client.WorkflowFailureException`.

In this [example](https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/hello/HelloException.java), a workflow client executes a workflow, which executes a child workflow, which executes an activity that throws an `IOException`. The resulting exception stack trace is:

```
 com.uber.cadence.client.WorkflowFailureException: WorkflowType="GreetingWorkflow::getGreeting", WorkflowID="38b9ce7a-e370-4cd8-a9f3-35e7295f7b3d", RunID="37ceb58c-9271-4fca-b5aa-ba06c5495214
    at com.uber.cadence.internal.dispatcher.UntypedWorkflowStubImpl.getResult(UntypedWorkflowStubImpl.java:139)
    at com.uber.cadence.internal.dispatcher.UntypedWorkflowStubImpl.getResult(UntypedWorkflowStubImpl.java:111)
    at com.uber.cadence.internal.dispatcher.WorkflowExternalInvocationHandler.startWorkflow(WorkflowExternalInvocationHandler.java:187)
    at com.uber.cadence.internal.dispatcher.WorkflowExternalInvocationHandler.invoke(WorkflowExternalInvocationHandler.java:113)
    at com.sun.proxy.$Proxy2.getGreeting(Unknown Source)
    at com.uber.cadence.samples.hello.HelloException.main(HelloException.java:117)
 Caused by: com.uber.cadence.workflow.ChildWorkflowFailureException: WorkflowType="GreetingChild::composeGreeting", ID="37ceb58c-9271-4fca-b5aa-ba06c5495214:1", RunID="47859b47-da4c-4225-876a-462421c98c72, EventID=10
    at java.lang.Thread.getStackTrace(Thread.java:1559)
    at com.uber.cadence.internal.dispatcher.ChildWorkflowInvocationHandler.executeChildWorkflow(ChildWorkflowInvocationHandler.java:114)
    at com.uber.cadence.internal.dispatcher.ChildWorkflowInvocationHandler.invoke(ChildWorkflowInvocationHandler.java:71)
    at com.sun.proxy.$Proxy5.composeGreeting(Unknown Source:0)
    at com.uber.cadence.samples.hello.HelloException$GreetingWorkflowImpl.getGreeting(HelloException.java:70)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method:0)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.uber.cadence.internal.worker.POJOWorkflowImplementationFactory$POJOWorkflowImplementation.execute(POJOWorkflowImplementationFactory.java:160)
 Caused by: com.uber.cadence.workflow.ActivityFailureException: ActivityType="GreetingActivities::composeGreeting" ActivityID="1", EventID=7
    at java.lang.Thread.getStackTrace(Thread.java:1559)
    at com.uber.cadence.internal.dispatcher.ActivityInvocationHandler.invoke(ActivityInvocationHandler.java:75)
    at com.sun.proxy.$Proxy6.composeGreeting(Unknown Source:0)
    at com.uber.cadence.samples.hello.HelloException$GreetingChildImpl.composeGreeting(HelloException.java:85)
    ... 5 more
 Caused by: java.io.IOException: Hello World!
    at com.uber.cadence.samples.hello.HelloException$GreetingActivitiesImpl.composeGreeting(HelloException.java:93)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method:0)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.uber.cadence.internal.worker.POJOActivityImplementationFactory$POJOActivityImplementation.execute(POJOActivityImplementationFactory.java:162)
```

Note that `IOException` is a checked exception. The standard Java approach of adding `throws IOException` to the method signatures of the activity, child workflow, and workflow interfaces does not help, because at every level the exception is never received directly, only in wrapped form. Propagating it without wrapping would make it impossible to attach additional context such as the activity, child workflow, and parent workflow types and IDs. The Cadence library's solution is the wrapper method `Workflow.wrap(Exception)`, which wraps a checked exception in a special runtime exception. It is special because the framework strips it when chaining exceptions across logical process boundaries. In this example, the `IOException` ends up attached directly as the cause of the `ActivityFailureException`, even though it was wrapped when rethrown.
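On the client side, the original error can therefore be recovered by walking the cause chain of the `WorkflowFailureException`. A minimal sketch (a hypothetical client snippet, not part of the sample):

```java
try {
    workflow.getGreeting("World");
} catch (WorkflowFailureException e) {
    // The chain is WorkflowFailureException -> ChildWorkflowFailureException
    // -> ActivityFailureException -> IOException.
    Throwable cause = e;
    while (cause.getCause() != null) {
        cause = cause.getCause();
    }
    System.out.println("Root cause: " + cause); // java.io.IOException: Hello World!
}
```

The full example: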
punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivities")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("/** Parent implementation that calls GreetingChild#composeGreeting.**/")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" child"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("/** Child workflow implementation.**/")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChildImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("final")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivities")]),t._v(" activities "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newActivityStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivities")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ActivityOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setScheduleToCloseTimeout")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Duration")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("ofSeconds")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("10")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivities")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("try")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("throw")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("IOException")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("catch")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("IOException")]),t._v(" e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Wrapping the exception as checked exceptions in activity and workflow interface methods")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// are prohibited.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// It will be unwrapped and attached as a cause to the ActivityFailureException.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("throw")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("wrap")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("main")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n 
"),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Get a new client")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// NOTE: to set a different options, you can do like this:")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ClientOptions.newBuilder().setRpcTimeout(5 * 1000).build();")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),t._v(" workflowClient "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowServiceTChannel")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ClientOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("defaultInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClientOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDomain")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Get worker to poll the task list.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),t._v(" factory "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("workflowClient"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChildImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerActivitiesImplementations")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowOptions")]),t._v(" workflowOptions "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setTaskList")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExecutionStartToCloseTimeout")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Duration")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("ofSeconds")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("30")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" workflow "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n workflowClient"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" workflowOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("try")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"World"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("throw")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("IllegalStateException")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"unreachable"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("catch")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowException")]),t._v(" e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwable")]),t._v(" cause "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwables")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getRootCause")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v('// prints "Hello World!"')]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("out"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("println")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("cause"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getMessage")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("out"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("println")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"\\nStack Trace:\\n"')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwables")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getStackTraceAsString")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("exit")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("0")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("The code is slightly different if you are using client version prior to 3.0.0:")]),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("main")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" 
"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),t._v(" factory "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChildImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerActivitiesImplementations")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),t._v(" workflowClient "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token 
class-name"}},[t._v("WorkflowClient")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowOptions")]),t._v(" workflowOptions "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setTaskList")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExecutionStartToCloseTimeout")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Duration")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("ofSeconds")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("30")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" workflow "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n workflowClient"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" workflowOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("try")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"World"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("throw")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("IllegalStateException")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"unreachable"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("catch")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowException")]),t._v(" e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwable")]),t._v(" cause "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwables")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getRootCause")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v('// prints "Hello World!"')]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("out"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("println")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("cause"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getMessage")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("out"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("println")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"\\nStack Trace:\\n"')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwables")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getStackTraceAsString")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("exit")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("0")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])])])}),[],!1,null,null,null);s.default=e.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[68],{373:function(t,s,a){"use strict";a.r(s);var n=a(0),e=Object(n.a)({},(function(){var t=this,s=t._self._c;return s("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[s("h1",{attrs:{id:"exception-handling"}},[s("a",{staticClass:"header-anchor",attrs:{href:"#exception-handling"}},[t._v("#")]),t._v(" Exception Handling")]),t._v(" "),s("p",[t._v("By default, Exceptions thrown by an activity are received by the workflow wrapped into an "),s("code",[t._v("com.uber.cadence.workflow.ActivityFailureException")]),t._v(",")]),t._v(" "),s("p",[t._v("Exceptions thrown by a child workflow are received by a parent workflow wrapped into a "),s("code",[t._v("com.uber.cadence.workflow.ChildWorkflowFailureException")])]),t._v(" "),s("p",[t._v("Exceptions thrown by a workflow are received by a workflow client wrapped into "),s("code",[t._v("com.uber.cadence.client.WorkflowFailureException")]),t._v(".")]),t._v(" "),s("p",[t._v("In this "),s("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/hello/HelloException.java",target:"_blank",rel:"noopener noreferrer"}},[t._v("example"),s("OutboundLink")],1),t._v(" a Workflow Client executes a workflow which executes a child workflow which\nexecutes an activity which throws an IOException. 
The resulting exception stack trace is:")]),t._v(" "),s("div",{staticClass:"language- extra-class"},[s("pre",{pre:!0,attrs:{class:"language-text"}},[s("code",[t._v(' com.uber.cadence.client.WorkflowFailureException: WorkflowType="GreetingWorkflow::getGreeting", WorkflowID="38b9ce7a-e370-4cd8-a9f3-35e7295f7b3d", RunID="37ceb58c-9271-4fca-b5aa-ba06c5495214\n at com.uber.cadence.internal.dispatcher.UntypedWorkflowStubImpl.getResult(UntypedWorkflowStubImpl.java:139)\n at com.uber.cadence.internal.dispatcher.UntypedWorkflowStubImpl.getResult(UntypedWorkflowStubImpl.java:111)\n at com.uber.cadence.internal.dispatcher.WorkflowExternalInvocationHandler.startWorkflow(WorkflowExternalInvocationHandler.java:187)\n at com.uber.cadence.internal.dispatcher.WorkflowExternalInvocationHandler.invoke(WorkflowExternalInvocationHandler.java:113)\n at com.sun.proxy.$Proxy2.getGreeting(Unknown Source)\n at com.uber.cadence.samples.hello.HelloException.main(HelloException.java:117)\n Caused by: com.uber.cadence.workflow.ChildWorkflowFailureException: WorkflowType="GreetingChild::composeGreeting", ID="37ceb58c-9271-4fca-b5aa-ba06c5495214:1", RunID="47859b47-da4c-4225-876a-462421c98c72, EventID=10\n at java.lang.Thread.getStackTrace(Thread.java:1559)\n at com.uber.cadence.internal.dispatcher.ChildWorkflowInvocationHandler.executeChildWorkflow(ChildWorkflowInvocationHandler.java:114)\n at com.uber.cadence.internal.dispatcher.ChildWorkflowInvocationHandler.invoke(ChildWorkflowInvocationHandler.java:71)\n at com.sun.proxy.$Proxy5.composeGreeting(Unknown Source:0)\n at com.uber.cadence.samples.hello.HelloException$GreetingWorkflowImpl.getGreeting(HelloException.java:70)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method:0)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:498)\n at com.uber.cadence.internal.worker.POJOWorkflowImplementationFactory$POJOWorkflowImplementation.execute(POJOWorkflowImplementationFactory.java:160)\n Caused by: com.uber.cadence.workflow.ActivityFailureException: ActivityType="GreetingActivities::composeGreeting" ActivityID="1", EventID=7\n at java.lang.Thread.getStackTrace(Thread.java:1559)\n at com.uber.cadence.internal.dispatcher.ActivityInvocationHandler.invoke(ActivityInvocationHandler.java:75)\n at com.sun.proxy.$Proxy6.composeGreeting(Unknown Source:0)\n at com.uber.cadence.samples.hello.HelloException$GreetingChildImpl.composeGreeting(HelloException.java:85)\n ... 5 more\n Caused by: java.io.IOException: Hello World!\n at com.uber.cadence.samples.hello.HelloException$GreetingActivitiesImpl.composeGreeting(HelloException.java:93)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method:0)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:498)\n at com.uber.cadence.internal.worker.POJOActivityImplementationFactory$POJOActivityImplementation.execute(POJOActivityImplementationFactory.java:162)\n')])])]),s("p",[t._v("Note that IOException is a checked exception. The standard Java approach of adding\nthrows IOException to the method signatures of the activity, child workflow, and workflow interfaces does not\nhelp, because the exception is never received directly at any level, only in wrapped form. Propagating it without\nwrapping would also discard context information such as the activity, child workflow, and\nparent workflow types and IDs. The Cadence library solution is to provide the special wrapper\nmethod "),s("code",[t._v("Workflow.wrap(Exception)")]),t._v(", which wraps a checked exception in a special runtime\nexception. It is special because the framework strips it when chaining exceptions across logical\nprocess boundaries. In this example the IOException is attached directly to the ActivityFailureException\nas its cause, in addition to being wrapped when rethrown.")]),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloException")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n  "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("final")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloException"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n  "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n    "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n    "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n  "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n  "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n    "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@WorkflowMethod")]),t._v("\n    "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n  "),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivities")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("/** Parent implementation that calls GreetingChild#composeGreeting.**/")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" child "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newChildWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" child"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token 
string"}},[t._v('"Hello"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("/** Child workflow implementation.**/")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChildImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChild")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("private")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("final")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivities")]),t._v(" activities "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newActivityStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivities")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ActivityOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setScheduleToCloseTimeout")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Duration")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("ofSeconds")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("10")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n 
"),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" activities"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("implements")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivities")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token annotation punctuation"}},[t._v("@Override")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("composeGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" greeting"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),t._v(" name"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("try")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("throw")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("IOException")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("greeting "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('" "')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" name "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"!"')]),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("catch")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("IOException")]),t._v(" e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Wrapping the exception as checked exceptions in activity and workflow interface methods")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// are prohibited.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// It will be unwrapped and attached as a cause to the ActivityFailureException.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("throw")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("wrap")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("main")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Get a new client")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// NOTE: to set a different options, you can do like this:")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ClientOptions.newBuilder().setRpcTimeout(5 * 1000).build();")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),t._v(" workflowClient "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowServiceTChannel")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token 
class-name"}},[t._v("ClientOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("defaultInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClientOptions")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDomain")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Get worker to poll the task list.")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),t._v(" factory "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerFactory")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("workflowClient"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorker")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChildImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token 
keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerActivitiesImplementations")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowOptions")]),t._v(" workflowOptions "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setTaskList")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExecutionStartToCloseTimeout")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Duration")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("ofSeconds")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("30")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" workflow "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n workflowClient"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token 
class-name"}},[t._v("GreetingWorkflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" workflowOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("try")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"World"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("throw")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("IllegalStateException")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"unreachable"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("catch")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowException")]),t._v(" e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwable")]),t._v(" cause "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwables")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getRootCause")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v('// prints "Hello World!"')]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("out"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("println")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("cause"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getMessage")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),t._v("out"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("println")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"\\nStack Trace:\\n"')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwables")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getStackTraceAsString")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("exit")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("0")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),s("p",[t._v("The code is slightly different if you are using client version prior to 3.0.0:")]),t._v(" "),s("div",{staticClass:"language-java extra-class"},[s("pre",{pre:!0,attrs:{class:"language-java"}},[s("code",[s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("static")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("main")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("String")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v(" args"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),t._v(" factory "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Factory")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Worker")]),t._v(" worker "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("newWorker")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflowImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingChildImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n worker"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerActivitiesImplementations")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingActivitiesImpl")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n factory"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),t._v(" workflowClient "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowOptions")]),t._v(" workflowOptions "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Builder")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("setTaskList")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExecutionStartToCloseTimeout")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Duration")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("ofSeconds")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("30")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),t._v(" workflow "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n workflowClient"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("newWorkflowStub")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("GreetingWorkflow")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" workflowOptions"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("try")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n workflow"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getGreeting")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"World"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("throw")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("IllegalStateException")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"unreachable"')]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("catch")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowException")]),t._v(" e"),s("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwable")]),t._v(" cause "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwables")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getRootCause")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token comment"}},[t._v('// prints "Hello World!"')]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("out"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("println")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("cause"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getMessage")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("out"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("println")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token string"}},[t._v('"\\nStack Trace:\\n"')]),t._v(" "),s("span",{pre:!0,attrs:{class:"token operator"}},[t._v("+")]),t._v(" "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwables")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("getStackTraceAsString")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("e"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),s("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("System")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),s("span",{pre:!0,attrs:{class:"token function"}},[t._v("exit")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),s("span",{pre:!0,attrs:{class:"token number"}},[t._v("0")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),s("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])])])}),[],!1,null,null,null);s.default=e.exports}}]); \ No newline at end of file diff --git a/assets/js/69.19e3a3fc.js b/assets/js/69.026f8c64.js similarity index 98% rename from assets/js/69.19e3a3fc.js rename to assets/js/69.026f8c64.js index 8cbb7ab29..4f9bca9d6 100644 --- a/assets/js/69.19e3a3fc.js +++ b/assets/js/69.026f8c64.js @@ -1 +1 @@ 
diff --git a/assets/js/69.19e3a3fc.js b/assets/js/69.026f8c64.js
similarity index 98%
rename from assets/js/69.19e3a3fc.js
rename to assets/js/69.026f8c64.js
index 8cbb7ab29..4f9bca9d6 100644
--- a/assets/js/69.19e3a3fc.js
+++ b/assets/js/69.026f8c64.js
@@ -1 +1 @@
(Both sides of this renamed bundle are identical apart from the webpack module id, 376 on the old side and 375 on the new; the bundle renders the "Continue as new" page:)

# Continue as new

Workflows that need to rerun periodically could naively be implemented as a big **for** loop with a sleep, where the entire logic of the workflow is inside the body of the **for** loop. The problem with this approach is that the history for that workflow will keep growing until it reaches the maximum size enforced by the service.

[**ContinueAsNew**](https://www.javadoc.io/static/com.uber.cadence/cadence-client/2.7.9-alpha/com/uber/cadence/workflow/Workflow.html#continueAsNew-java.lang.Object...-) is the low-level construct that enables implementing such workflows without the risk of failures down the road. The operation atomically completes the current execution and starts a new execution of the workflow with the same **workflow ID**. The new execution will not carry over any history from the old execution.

```java
@Override
public void greet(String name) {
    activities.greet("Hello " + name + "!");
    Workflow.continueAsNew(name);
}
```
punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// random will always be 0 in replay, thus this code is non-deterministic")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" random"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("<")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token number"}},[t._v("50")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("else")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),n("p",[t._v("On replay the provided function is not executed, the random will always be 0, and the workflow\ncould takes a different path breaking the determinism.")]),t._v(" "),n("p",[t._v("Here is the correct way to use sideEffect:")]),t._v(" "),n("p",[t._v("Good example:")]),t._v(" "),n("div",{staticClass:"language-java extra-class"},[n("pre",{pre:!0,attrs:{class:"language-java"}},[n("code",[t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" random "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("sideEffect")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Integer")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("->")]),t._v(" random"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("nextInt")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" random "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("<")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token number"}},[t._v("50")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("else")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),n("p",[t._v("If function throws any exception it is not delivered to the workflow code. It is wrapped in\nan Error causing failure of the current decision.")]),t._v(" "),n("h2",{attrs:{id:"mutable-side-effect"}},[n("a",{staticClass:"header-anchor",attrs:{href:"#mutable-side-effect"}},[t._v("#")]),t._v(" Mutable Side Effect")]),t._v(" "),n("p",[t._v("MutableSideEffect is similar to sideEffect, in allowing\ncalls of non-deterministic functions from workflow code.\nThe difference is that every sideEffect call in non-replay mode results in a new\nmarker event recorded into the history. However, mutableSideEffect only records a new\nmarker if a value has changed. During the replay, mutableSideEffect will not execute\nthe function again, but it will return the exact same value as it was returning during the\nnon-replay run.")]),t._v(" "),n("p",[t._v("One good use case of mutableSideEffect is to access a dynamically changing config\nwithout breaking determinism. Even if called very frequently the config value is recorded only\nwhen it changes not causing any performance degradation due to a large history size.")]),t._v(" "),n("p",[t._v("!!Caution: do not use mutableSideEffect function to modify any workflow sate. Only use\nthe mutableSideEffect's return value.")]),t._v(" "),n("p",[t._v("If function throws any exception it is not delivered to the workflow code. 
It is wrapped in\nan Error causing failure of the current decision.")])])}),[],!1,null,null,null);n.default=s.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[70],{378:function(t,n,e){"use strict";e.r(n);var a=e(0),s=Object(a.a)({},(function(){var t=this,n=t._self._c;return n("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[n("h1",{attrs:{id:"side-effect"}},[n("a",{staticClass:"header-anchor",attrs:{href:"#side-effect"}},[t._v("#")]),t._v(" Side Effect")]),t._v(" "),n("p",[t._v("Side Effect allow workflow executes the provided function once, records its result into the workflow history.\nThe recorded result on history will be returned without executing the provided function during replay. This\nguarantees the deterministic requirement for workflow as the exact same result will be returned\nin replay. Common use case is to run some short non-deterministic code in workflow, like\ngetting random number. The only way to fail SideEffect is to panic which causes decision task\nfailure. The decision task after timeout is rescheduled and re-executed giving SideEffect\nanother chance to succeed.")]),t._v(" "),n("p",[t._v("!!Caution: do not use sideEffect function to modify any workflow state. Only use the\nSideEffect's return value. For example this code is BROKEN:")]),t._v(" "),n("p",[t._v("Bad example:")]),t._v(" "),n("div",{staticClass:"language-java extra-class"},[n("pre",{pre:!0,attrs:{class:"language-java"}},[n("code",[t._v(" "),n("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("AtomicInteger")]),t._v(" random "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("AtomicInteger")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("sideEffect")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("->")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n random"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("set")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("random"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("nextInt")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("null")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// random will always be 0 in replay, thus this code is non-deterministic")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" random"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("get")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("<")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token number"}},[t._v("50")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("else")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),n("p",[t._v("On replay the provided function is not executed, the random will always be 0, and the workflow\ncould takes a different path breaking the determinism.")]),t._v(" "),n("p",[t._v("Here is the correct way to use sideEffect:")]),t._v(" "),n("p",[t._v("Good example:")]),t._v(" "),n("div",{staticClass:"language-java extra-class"},[n("pre",{pre:!0,attrs:{class:"language-java"}},[n("code",[t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("int")]),t._v(" random "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Workflow")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("sideEffect")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Integer")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("->")]),t._v(" random"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("nextInt")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n 
"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" random "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("<")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token number"}},[t._v("50")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("else")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),n("p",[t._v("If function throws any exception it is not delivered to the workflow code. It is wrapped in\nan Error causing failure of the current decision.")]),t._v(" "),n("h2",{attrs:{id:"mutable-side-effect"}},[n("a",{staticClass:"header-anchor",attrs:{href:"#mutable-side-effect"}},[t._v("#")]),t._v(" Mutable Side Effect")]),t._v(" "),n("p",[t._v("MutableSideEffect is similar to sideEffect, in allowing\ncalls of non-deterministic functions from workflow code.\nThe difference is that every sideEffect call in non-replay mode results in a new\nmarker event recorded into the history. However, mutableSideEffect only records a new\nmarker if a value has changed. During the replay, mutableSideEffect will not execute\nthe function again, but it will return the exact same value as it was returning during the\nnon-replay run.")]),t._v(" "),n("p",[t._v("One good use case of mutableSideEffect is to access a dynamically changing config\nwithout breaking determinism. Even if called very frequently the config value is recorded only\nwhen it changes not causing any performance degradation due to a large history size.")]),t._v(" "),n("p",[t._v("!!Caution: do not use mutableSideEffect function to modify any workflow sate. Only use\nthe mutableSideEffect's return value.")]),t._v(" "),n("p",[t._v("If function throws any exception it is not delivered to the workflow code. 
diff --git a/assets/js/72.a4f2be6d.js b/assets/js/72.cc63b603.js
similarity index 99%
rename from assets/js/72.a4f2be6d.js
rename to assets/js/72.cc63b603.js
index 36f4ea54e..60a0fa4d5 100644
--- a/assets/js/72.a4f2be6d.js
+++ b/assets/js/72.cc63b603.js
@@ -1 +1 @@
(The old side of this renamed bundle, module id 378, renders the "Workflow Replay and Shadowing" page; the hunk is truncated below:)

# Workflow Replay and Shadowing

In the Versioning section, we mentioned that incompatible changes to workflow definition code could cause non-deterministic issues when processing workflow tasks if versioning is not done correctly. However, it may be hard to tell whether a particular change is incompatible and whether versioning logic is needed. To help you identify incompatible changes and catch them before production traffic is impacted, we implemented Workflow Replayer and Workflow Shadower.

## Workflow Replayer

Workflow Replayer is a testing component for replaying existing workflow histories against a workflow definition. The replaying logic is the same as the one used for processing workflow tasks, so if there are any incompatible changes in the workflow definition, the replay test will fail.

### Write a Replay Test

#### Step 1: Prepare workflow histories

Replayer can read workflow history from a local json file or fetch it directly from the Cadence server. If you would like to use the first method, you can use the following CLI command; otherwise you can skip to the next step.

```
cadence --do <domain> workflow show --wid <workflow_id> --rid <run_id> --of <output_file_name>
```

The dumped workflow history will be stored, in json format, in the file at the path you specified.

#### Step 2: Call the replay method

Once you have the workflow history, or have a connection to the Cadence server for fetching histories, call one of the four replay methods to start the replay test.

```java
// if workflow history has been loaded into memory
WorkflowReplayer.replayWorkflowExecution(history, MyWorkflowImpl.class);

// if workflow history is stored in a json file
WorkflowReplayer.replayWorkflowExecutionFromResource("workflowHistory.json", MyWorkflowImpl.class);

// if workflow history is read from a File
WorkflowReplayer.replayWorkflowExecution(historyFileObject, MyWorkflowImpl.class);
```

#### Step 3: Catch returned exception

If an exception is returned from the replay method, it means there is an incompatible change in the workflow definition, and the error message will contain more information about where the non-deterministic error happens.

### Sample Replay Test

This sample is also available in our samples repo [here](https://github.com/uber/cadence-java-samples/blob/master/src/test/java/com/uber/cadence/samples/hello/HelloActivityReplayTest.java).

```java
public class HelloActivityReplayTest {
  @Test
  public void testReplay() throws Exception {
    WorkflowReplayer.replayWorkflowExecutionFromResource(
        "HelloActivity.json", HelloActivity.GreetingWorkflowImpl.class);
  }
}
```

## Workflow Shadower

Workflow Replayer works well for verifying compatibility against a small number of workflow histories. If there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. Directly fetching histories from the Cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.

Workflow Shadower is built on top of Workflow Replayer to address this problem. The basic idea of shadowing is: scan workflows based on the filters you defined, fetch the history for each workflow in the scan result from the Cadence server, and run the replay test. It can be run either as a test, to serve local development purposes, or as a workflow in your worker, to continuously replay production workflows.

### Shadow Options

Complete documentation on shadow options, including default values, accepted values, etc., can be found [here](https://github.com/uber/cadence-java-client/blob/master/src/main/java/com/uber/cadence/worker/ShadowingOptions.java). The following sections are a brief description of each option.

#### Scan Filters

- WorkflowQuery: If you are familiar with our advanced visibility query syntax, you can specify a query directly (see the sketch after these option lists). If specified, all other scan filters must be left empty.
- WorkflowTypes: A list of workflow type names.
- WorkflowStatuses: A list of workflow statuses.
- WorkflowStartTimeFilter: Min and max timestamps for workflow start time.
- WorkflowSamplingRate: Sampling rate applied to workflows from the scan result before executing the replay test.

#### Shadow Exit Condition

- ExpirationInterval: Shadowing will exit when the specified interval has passed.
- ShadowCount: Shadowing will exit after this number of workflows has been replayed. Note: a replay may be skipped due to errors such as a failure to fetch history, a history that is too short, etc. Skipped workflows are not counted toward ShadowCount.

#### Shadow Mode

- Normal: Shadowing will complete after all workflows matching WorkflowQuery (after sampling) have been replayed, or when the exit condition is met.
- Continuous: A new round of shadowing will be started after all workflows matching WorkflowQuery have been replayed. There is a 5-minute wait period between rounds, and currently this wait period is not configurable. Shadowing completes only when the ExitCondition is met; ExitCondition must be specified when using this mode.

#### Shadow Concurrency

- Concurrency: Workflow replay concurrency. If not specified, it defaults to 1. For local shadowing, an error will be returned if a value higher than 1 is specified.
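As an alternative to the enumerated filters used in the test below, the query filter can drive the scan directly. A sketch under stated assumptions: setWorkflowQuery and setShadowCount are assumed setter names (check the ShadowingOptions and ExitCondition sources linked above for the exact API), and the query string is illustrative:

```java
// Sketch only: assumes query- and count-based setters exist as named.
ShadowingOptions options = ShadowingOptions.newBuilder()
    .setDomain(DOMAIN)
    // When a query is specified, all other scan filters must be left empty.
    .setWorkflowQuery("WorkflowType = 'GreetingWorkflow::getGreeting'")
    .setExitCondition(new ExitCondition().setShadowCount(100))
    .build();
```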
For local shadowing, an error will be returned if a value higher than 1 is specified.")])]),t._v(" "),a("h3",{attrs:{id:"local-shadowing-test"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#local-shadowing-test"}},[t._v("#")]),t._v(" Local Shadowing Test")]),t._v(" "),a("p",[t._v("Local shadowing test is similar to the replay test. First create a workflow shadower with optional shadow and replay options, then register the workflow that needs to be shadowed. Finally, call the "),a("code",[t._v("Run")]),t._v(" method to start the shadowing. The method will return if shadowing has finished or any non-deterministic error is found.")]),t._v(" "),a("p",[t._v("Here's a simple example. The example is also available "),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/blob/master/src/test/java/com/uber/cadence/samples/hello/HelloWorkflowShadowingTest.java",target:"_blank",rel:"noopener noreferrer"}},[t._v("here"),a("OutboundLink")],1),t._v(".")]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("public")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("void")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("testShadowing")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("throws")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Throwable")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("IWorkflowService")]),t._v(" service "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowServiceTChannel")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ClientOptions")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("defaultInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ShadowingOptions")]),t._v(" options "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ShadowingOptions")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDomain")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setShadowMode")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Mode"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Normal")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setWorkflowTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Lists")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newArrayList")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"GreetingWorkflow::getGreeting"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setWorkflowStatuses")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Lists")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newArrayList")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowStatus")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("OPEN")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowStatus")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("CLOSED")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExitCondition")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ExitCondition")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExpirationIntervalInSeconds")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("60")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowShadower")]),t._v(" 
shadower "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowShadower")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("service"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" options"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n shadower"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloActivity"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("GreetingWorkflowImpl")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n shadower"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("run")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("h3",{attrs:{id:"shadowing-worker"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#shadowing-worker"}},[t._v("#")]),t._v(" Shadowing Worker")]),t._v(" "),a("p",[t._v("NOTE:")]),t._v(" "),a("ul",[a("li",[a("strong",[t._v("All shadow workflows are running in one Cadence system domain, and right now, every user domain can only have one shadow workflow at a time.")])]),t._v(" "),a("li",[a("strong",[t._v("The Cadence server used for scanning and getting workflow history will also be the Cadence server for running your shadow workflow.")]),t._v(" Currently, there's no way to specify different Cadence servers for hosting the shadowing workflow and scanning/fetching workflow.")])]),t._v(" "),a("p",[t._v("Your worker can also be configured to run in shadow mode to run shadow tests as a workflow. This is useful if there's a number of workflows that need to be replayed. Using a workflow can make sure the shadowing won't accidentally fail in the middle and the replay load can be distributed by deploying more shadow mode workers. It can also be incorporated into your deployment process to make sure there's no failed replay checks before deploying your change to production workers.")]),t._v(" "),a("p",[t._v("When running in shadow mode, the normal decision worker will be disabled so that it won't update any production workflows. A special shadow activity worker will be started to execute activities for scanning and replaying workflows. 
The actual shadow workflow logic is controlled by Cadence server and your worker is only responsible for scanning and replaying workflows.")]),t._v(" "),a("p",[a("a",{attrs:{href:"https://github.com/uber/cadence-java-client/blob/master/src/main/java/com/uber/cadence/internal/metrics/MetricsType.java#L169-L172",target:"_blank",rel:"noopener noreferrer"}},[t._v("Replay succeed, skipped and failed metrics"),a("OutboundLink")],1),t._v(" will be emitted by your worker when executing the shadow workflow and you can monitor those metrics to see if there's any incompatible changes.")]),t._v(" "),a("p",[t._v("To enable the shadow mode, you can initialize a shadowing worker and pass in the shadowing options.")]),t._v(" "),a("p",[t._v("To enable the shadowing worker, here is a example. The example is also available "),a("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/shadowing/ShadowTraffic.java",target:"_blank",rel:"noopener noreferrer"}},[t._v("here"),a("OutboundLink")],1),t._v(":")]),t._v(" "),a("div",{staticClass:"language-java extra-class"},[a("pre",{pre:!0,attrs:{class:"language-java"}},[a("code",[a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),t._v(" workflowClient "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClient")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowServiceTChannel")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ClientOptions")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("defaultInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowClientOptions")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDomain")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ShadowingOptions")]),t._v(" options 
"),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ShadowingOptions")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newBuilder")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setDomain")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("DOMAIN")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setShadowMode")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Mode"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Normal")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setWorkflowTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Lists")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newArrayList")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"GreetingWorkflow::getGreeting"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setWorkflowStatuses")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Lists")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newArrayList")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowStatus")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("OPEN")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowStatus")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("CLOSED")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExitCondition")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ExitCondition")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExpirationIntervalInSeconds")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("60")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ShadowingWorker")]),t._v(" shadowingWorker "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ShadowingWorker")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n workflowClient"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"HelloActivity"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkerOptions")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("defaultInstance")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n options"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n shadowingWorker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloActivity"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("GreetingWorkflowImpl")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\tshadowingWorker"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("start")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n")])])]),a("p",[t._v("Registered workflows will be forwarded to the underlying WorkflowReplayer. DataConverter, WorkflowInterceptorChainFactories, ContextPropagators, and Tracer specified in the "),a("code",[t._v("worker.Options")]),t._v(" will also be used as ReplayOptions. 
Since all shadow workflows are running in one system domain, to avoid conflict, "),a("strong",[t._v("the actual task list name used will be "),a("code",[t._v("domain-tasklist")]),t._v(".")])])])}),[],!1,null,null,null);a.default=e.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[72],{376:function(t,a,s){"use strict";s.r(a);var n=s(0),e=Object(n.a)({},(function(){var t=this,a=t._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[a("h1",{attrs:{id:"workflow-replay-and-shadowing"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#workflow-replay-and-shadowing"}},[t._v("#")]),t._v(" Workflow Replay and Shadowing")]),t._v(" "),a("p",[t._v("In the Versioning section, we mentioned that incompatible changes to workflow definition code could cause non-deterministic issues when processing workflow tasks if versioning is not done correctly. However, it may be hard for you to tell if a particular change is incompatible or not and whether versioning logic is needed. To help you identify incompatible changes and catch them before production traffic is impacted, we implemented Workflow Replayer and Workflow Shadower.")]),t._v(" "),a("h2",{attrs:{id:"workflow-replayer"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#workflow-replayer"}},[t._v("#")]),t._v(" Workflow Replayer")]),t._v(" "),a("p",[t._v("Workflow Replayer is a testing component for replaying existing workflow histories against a workflow definition. The replaying logic is the same as the one used for processing workflow tasks, so if there's any incompatible changes in the workflow definition, the replay test will fail.")]),t._v(" "),a("h3",{attrs:{id:"write-a-replay-test"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#write-a-replay-test"}},[t._v("#")]),t._v(" Write a Replay Test")]),t._v(" "),a("h4",{attrs:{id:"step-1-prepare-workflow-histories"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#step-1-prepare-workflow-histories"}},[t._v("#")]),t._v(" Step 1: Prepare workflow histories")]),t._v(" "),a("p",[t._v("Replayer can read workflow history from a local json file or fetch it directly from the Cadence server. 
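For example, to dump one run into `HelloActivity.json` (the domain, workflow ID, and run ID below are purely illustrative values, not real identifiers):

```
cadence --do samples-domain workflow show --wid HelloActivityWorkflow --rid c24e8f1a-0d8b-4a9b-9f5e-3d7a5c3a9e21 --of HelloActivity.json
```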
#### Step 2: Call the replay method

Once you have the workflow history, or a connection to the Cadence server for fetching it, call one of the four replay methods to start the replay test.

```java
// if workflow history has been loaded into memory
WorkflowReplayer.replayWorkflowExecution(history, MyWorkflowImpl.class);

// if workflow history is stored in a json file
WorkflowReplayer.replayWorkflowExecutionFromResource("workflowHistory.json", MyWorkflowImpl.class);

// if workflow history is read from a File
WorkflowReplayer.replayWorkflowExecution(historyFileObject, MyWorkflowImpl.class);
```

#### Step 3: Catch returned exception

If an exception is returned from the replay method, it means there is an incompatible change in the workflow definition, and the error message will contain more information about where the non-deterministic error happened.
### Sample Replay Test

This sample is also available in our samples repo [here](https://github.com/uber/cadence-java-samples/blob/master/src/test/java/com/uber/cadence/samples/hello/HelloActivityReplayTest.java).

```java
public class HelloActivityReplayTest {
  @Test
  public void testReplay() throws Exception {
    WorkflowReplayer.replayWorkflowExecutionFromResource(
        "HelloActivity.json", HelloActivity.GreetingWorkflowImpl.class);
  }
}
```

## Workflow Shadower

Workflow Replayer works well when verifying compatibility against a small number of workflow histories. If there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. Directly fetching histories from the Cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.

Workflow Shadower is built on top of Workflow Replayer to address this problem. The basic idea of shadowing is: scan workflows based on the filters you defined, fetch the history for each workflow in the scan result from the Cadence server, and run the replay test. It can be run either as a test for local development or as a workflow in your worker to continuously replay production workflows.
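Conceptually, one round of shadowing boils down to the loop below. This is only an illustrative sketch: `scanWorkflows` and `fetchHistory` are hypothetical stand-ins for the scan and fetch steps, not real cadence-java-client APIs, and the replay call assumes the in-memory overload and `MyWorkflowImpl` class from Step 2.

```java
import java.util.Collections;
import java.util.List;

import com.uber.cadence.WorkflowExecution;
import com.uber.cadence.testing.WorkflowReplayer;

// Illustrative only: scanWorkflows and fetchHistory are hypothetical stand-ins,
// not actual cadence-java-client APIs.
public final class ShadowingSketch {
  static List<WorkflowExecution> scanWorkflows() {
    // Stand-in: the real shadower queries visibility records using your scan filters.
    return Collections.emptyList();
  }

  static String fetchHistory(WorkflowExecution execution) {
    // Stand-in: the real shadower fetches the execution's history from the Cadence server.
    throw new UnsupportedOperationException("illustrative stub");
  }

  public static void main(String[] args) throws Exception {
    for (WorkflowExecution execution : scanWorkflows()) {  // 1. scan
      String history = fetchHistory(execution);            // 2. fetch
      WorkflowReplayer.replayWorkflowExecution(history, MyWorkflowImpl.class); // 3. replay
    }
  }
}
```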
### Shadow Options

Complete documentation on shadow options, including default values, accepted values, etc., can be found [here](https://github.com/uber/cadence-java-client/blob/master/src/main/java/com/uber/cadence/worker/ShadowingOptions.java). The following sections give a brief description of each option.

#### Scan Filters

- WorkflowQuery: If you are familiar with our advanced visibility query syntax, you can specify a query directly. If specified, all other scan filters must be left empty.
- WorkflowTypes: A list of workflow type names.
- WorkflowStatuses: A list of workflow statuses.
- WorkflowStartTimeFilter: Min and max timestamps for workflow start time.
- WorkflowSamplingRate: The rate at which workflows are sampled from the scan result before executing the replay test.

#### Shadow Exit Condition

- ExpirationInterval: Shadowing will exit when the specified interval has passed.
- ShadowCount: Shadowing will exit after this number of workflows has been replayed. Note: a replay may be skipped due to errors such as a failure to fetch history or the history being too short. Skipped workflows are not counted toward ShadowCount.

#### Shadow Mode

- Normal: Shadowing will complete after all workflows matching WorkflowQuery (after sampling) have been replayed, or when the exit condition is met.
- Continuous: A new round of shadowing will be started after all workflows matching WorkflowQuery have been replayed. There is a 5-minute wait period between rounds, and currently this wait period is not configurable. Shadowing will complete only when ExitCondition is met; ExitCondition must be specified when using this mode (see the sketch after these lists).

#### Shadow Concurrency

- Concurrency: Workflow replay concurrency. If not specified, it defaults to 1. For local shadowing, an error will be returned if a value higher than 1 is specified.
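As a sketch of a continuous-mode configuration, using only the setters that appear in the examples below (the one-hour expiration is an arbitrary illustrative choice):

```java
ShadowingOptions continuousOptions = ShadowingOptions
    .newBuilder()
    .setDomain(DOMAIN)
    // Continuous mode starts a new round after each full pass,
    // so an exit condition is mandatory.
    .setShadowMode(Mode.Continuous)
    .setWorkflowTypes(Lists.newArrayList("GreetingWorkflow::getGreeting"))
    .setExitCondition(new ExitCondition().setExpirationIntervalInSeconds(3600))
    .build();
```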
punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setShadowMode")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Mode"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Normal")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setWorkflowTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Lists")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newArrayList")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"GreetingWorkflow::getGreeting"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setWorkflowStatuses")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("Lists")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("newArrayList")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowStatus")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("OPEN")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowStatus")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("CLOSED")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExitCondition")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("ExitCondition")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("setExpirationIntervalInSeconds")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("60")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("build")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowShadower")]),t._v(" 
shadower "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("new")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("WorkflowShadower")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("service"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" options"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token constant"}},[t._v("TASK_LIST")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n shadower"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("registerWorkflowImplementationTypes")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token class-name"}},[t._v("HelloActivity"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("GreetingWorkflowImpl")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("class")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n\n shadower"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("run")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("h3",{attrs:{id:"shadowing-worker"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#shadowing-worker"}},[t._v("#")]),t._v(" Shadowing Worker")]),t._v(" "),a("p",[t._v("NOTE:")]),t._v(" "),a("ul",[a("li",[a("strong",[t._v("All shadow workflows are running in one Cadence system domain, and right now, every user domain can only have one shadow workflow at a time.")])]),t._v(" "),a("li",[a("strong",[t._v("The Cadence server used for scanning and getting workflow history will also be the Cadence server for running your shadow workflow.")]),t._v(" Currently, there's no way to specify different Cadence servers for hosting the shadowing workflow and scanning/fetching workflow.")])]),t._v(" "),a("p",[t._v("Your worker can also be configured to run in shadow mode to run shadow tests as a workflow. This is useful if there's a number of workflows that need to be replayed. Using a workflow can make sure the shadowing won't accidentally fail in the middle and the replay load can be distributed by deploying more shadow mode workers. It can also be incorporated into your deployment process to make sure there's no failed replay checks before deploying your change to production workers.")]),t._v(" "),a("p",[t._v("When running in shadow mode, the normal decision worker will be disabled so that it won't update any production workflows. A special shadow activity worker will be started to execute activities for scanning and replaying workflows. 
### Shadowing Worker

NOTE:

- **All shadow workflows are running in one Cadence system domain, and right now, every user domain can only have one shadow workflow at a time.**
- **The Cadence server used for scanning and getting workflow history will also be the Cadence server for running your shadow workflow.** Currently, there's no way to specify different Cadence servers for hosting the shadowing workflow and for scanning/fetching workflows.

Your worker can also be configured to run in shadow mode to run shadow tests as a workflow. This is useful if there are a number of workflows that need to be replayed. Using a workflow makes sure the shadowing won't accidentally fail in the middle, and the replay load can be distributed by deploying more shadow-mode workers. It can also be incorporated into your deployment process to make sure there are no failed replay checks before deploying your change to production workers.

When running in shadow mode, the normal decision worker will be disabled so that it won't update any production workflows. A special shadow activity worker will be started to execute activities for scanning and replaying workflows. The actual shadow workflow logic is controlled by the Cadence server; your worker is only responsible for scanning and replaying workflows.

[Replay succeeded, skipped, and failed metrics](https://github.com/uber/cadence-java-client/blob/master/src/main/java/com/uber/cadence/internal/metrics/MetricsType.java#L169-L172) will be emitted by your worker when executing the shadow workflow, and you can monitor those metrics to see whether there are any incompatible changes.

To enable shadow mode, initialize a shadowing worker and pass in the shadowing options.

Here is an example of enabling the shadowing worker. The example is also available [here](https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/shadowing/ShadowTraffic.java):

```java
WorkflowClient workflowClient =
    WorkflowClient.newInstance(
        new WorkflowServiceTChannel(ClientOptions.defaultInstance()),
        WorkflowClientOptions.newBuilder().setDomain(DOMAIN).build());
ShadowingOptions options = ShadowingOptions
    .newBuilder()
    .setDomain(DOMAIN)
    .setShadowMode(Mode.Normal)
    .setWorkflowTypes(Lists.newArrayList("GreetingWorkflow::getGreeting"))
    .setWorkflowStatuses(Lists.newArrayList(WorkflowStatus.OPEN, WorkflowStatus.CLOSED))
    .setExitCondition(new ExitCondition().setExpirationIntervalInSeconds(60))
    .build();

ShadowingWorker shadowingWorker = new ShadowingWorker(
    workflowClient,
    "HelloActivity",
    WorkerOptions.defaultInstance(),
    options);
shadowingWorker.registerWorkflowImplementationTypes(HelloActivity.GreetingWorkflowImpl.class);
shadowingWorker.start();
```
Registered workflows will be forwarded to the underlying WorkflowReplayer. The DataConverter, WorkflowInterceptorChainFactories, ContextPropagators, and Tracer specified in the `worker.Options` will also be used as ReplayOptions. Since all shadow workflows run in one system domain, to avoid conflicts **the actual task list name used will be `domain-tasklist`.**

diff --git a/assets/js/74.257b922b.js b/assets/js/74.61219546.js
similarity index 99%
rename from assets/js/74.257b922b.js
rename to assets/js/74.61219546.js
index 8f92d55e0..624f91d30 100644
--- a/assets/js/74.257b922b.js
+++ b/assets/js/74.61219546.js
@@ -1 +1 @@

# Worker service

A worker or worker service is a service that hosts the workflow and activity implementations. The worker polls the Cadence service for tasks, performs those tasks, and communicates task execution results back to the Cadence service. Worker services are developed, deployed, and operated by Cadence customers.

You can run a Cadence worker in a new or an existing service.
The following is an example worker service using TChannel, one of the two transport protocols supported by Cadence.

```go
package main

import (
	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"

	"github.com/uber-go/tally"
	"go.uber.org/yarpc"
	"go.uber.org/yarpc/transport/tchannel"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

var HostPort = "127.0.0.1:7933"
var Domain = "SimpleDomain"
var TaskListName = "SimpleWorker"
var ClientName = "SimpleWorker"
var CadenceService = "cadence-frontend"

func main() {
	startWorker(buildLogger(), buildCadenceClient())
}

func buildLogger() *zap.Logger {
	config := zap.NewDevelopmentConfig()
	config.Level.SetLevel(zapcore.InfoLevel)

	logger, err := config.Build()
	if err != nil {
		panic("Failed to setup logger")
	}

	return logger
}

func buildCadenceClient() workflowserviceclient.Interface {
	ch, err := tchannel.NewChannelTransport(tchannel.ServiceName(ClientName))
	if err != nil {
		panic("Failed to setup tchannel")
	}
	dispatcher := yarpc.NewDispatcher(yarpc.Config{
		Name: ClientName,
		Outbounds: yarpc.Outbounds{
			CadenceService: {Unary: ch.NewSingleOutbound(HostPort)},
		},
	})
	if err := dispatcher.Start(); err != nil {
		panic("Failed to start dispatcher")
	}

	return workflowserviceclient.New(dispatcher.ClientConfig(CadenceService))
}

func startWorker(logger *zap.Logger, service workflowserviceclient.Interface) {
	// TaskListName identifies the set of client workflows, activities, and workers.
	// It could be your group, client, or application name.
	workerOptions := worker.Options{
		Logger:       logger,
		MetricsScope: tally.NewTestScope(TaskListName, map[string]string{}),
	}

	w := worker.New(
		service,
		Domain,
		TaskListName,
		workerOptions)
	if err := w.Start(); err != nil {
		panic("Failed to start worker")
	}

	logger.Info("Started Worker.", zap.String("worker", TaskListName))
}
```
The other supported transport protocol is gRPC. A worker service using gRPC can be set up in a similar fashion, but the `buildCadenceClient` function needs the following alterations, and some of the imported packages change.

```go
import (
	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/compatibility"
	"go.uber.org/cadence/worker"

	apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"
	"github.com/uber-go/tally"
	"go.uber.org/yarpc"
	"go.uber.org/yarpc/transport/grpc"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

.
.
.

func buildCadenceClient() workflowserviceclient.Interface {
	dispatcher := yarpc.NewDispatcher(yarpc.Config{
		Name: ClientName,
		Outbounds: yarpc.Outbounds{
			CadenceService: {Unary: grpc.NewTransport().NewSingleOutbound(HostPort)},
		},
	})
	if err := dispatcher.Start(); err != nil {
		panic("Failed to start dispatcher")
	}

	clientConfig := dispatcher.ClientConfig(CadenceService)

	return compatibility.NewThrift2ProtoAdapter(
		apiv1.NewDomainAPIYARPCClient(clientConfig),
		apiv1.NewWorkflowAPIYARPCClient(clientConfig),
		apiv1.NewWorkerAPIYARPCClient(clientConfig),
		apiv1.NewVisibilityAPIYARPCClient(clientConfig),
	)
}
```
apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewWorkerAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewVisibilityAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),n("p",[t._v("Note also that the "),n("code",[t._v("HostPort")]),t._v(" variable must be changed to target the gRPC listener port of the Cadence cluster (typically, 7833).")]),t._v(" "),n("p",[t._v("Finally, gRPC can also support TLS connections between Go clients and the Cadence server. This requires the following alterations to the imported packages, and the "),n("code",[t._v("buildCadenceClient")]),t._v(" function. Note that this also requires you replace "),n("code",[t._v('"path/to/cert/file"')]),t._v(" in the function with a path to a valid certificate file matching the TLS configuration of the Cadence server.")]),t._v(" "),n("div",{staticClass:"language-go extra-class"},[n("pre",{pre:!0,attrs:{class:"language-go"}},[n("code",[t._v("\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"fmt"')]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/.gen/go/cadence"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/compatibility"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/worker"')]),t._v("\n\n apiv1 "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"github.com/uber/cadence-idl/go/proto/api/v1"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"github.com/uber-go/tally"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/zap"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/zap/zapcore"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc/transport/grpc"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc/peer"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc/peer/hostport"')]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"crypto/tls"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"crypto/x509"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"io/ioutil"')]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"google.golang.org/grpc/credentials"')]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("buildCadenceClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" workflowserviceclient"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Interface "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n grpcTransport "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" grpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewTransport")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" dialOptions "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v("grpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("DialOption\n \n caCert"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" ioutil"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("ReadFile")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"/path/to/cert/file"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n fmt"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Printf")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to load server CA certificate: %v"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" zap"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Error")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("err"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n caCertPool "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" x509"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewCertPool")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token 
operator"}},[t._v("!")]),t._v("caCertPool"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("AppendCertsFromPEM")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("caCert"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n fmt"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Errorf")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to add server CA\'s certificate"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n tlsConfig "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" tls"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Config"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n RootCAs"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" caCertPool"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n creds "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" credentials"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewTLS")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("tlsConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n dialOptions "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("append")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("dialOptions"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" grpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("DialerCredentials")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("creds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n \n dialer "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" grpcTransport"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDialer")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("dialOptions"),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n outbound "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" grpcTransport"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewOutbound")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n peer"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewSingle")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("hostport"),n("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("PeerIdentifier")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("HostPort"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" dialer"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n \n dispatcher "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDispatcher")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Config"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n Name"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" ClientName"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n Outbounds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Outbounds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n CadenceService"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Unary"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" outbound"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" dispatcher"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Start")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("panic")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to start dispatcher"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n clientConfig "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" dispatcher"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("ClientConfig")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("CadenceService"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n \n 
"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" compatibility"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewThrift2ProtoAdapter")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDomainAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewWorkflowAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewWorkerAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewVisibilityAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])])])}),[],!1,null,null,null);n.default=e.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[74],{381:function(t,n,s){"use strict";s.r(n);var a=s(0),e=Object(a.a)({},(function(){var t=this,n=t._self._c;return n("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[n("h1",{attrs:{id:"worker-service"}},[n("a",{staticClass:"header-anchor",attrs:{href:"#worker-service"}},[t._v("#")]),t._v(" Worker service")]),t._v(" "),n("p",[t._v("A "),n("Term",{attrs:{term:"worker"}}),t._v(" or "),n("em",[n("Term",{attrs:{term:"worker"}}),t._v(" service")],1),t._v(" is a service that hosts the "),n("Term",{attrs:{term:"workflow"}}),t._v(" and "),n("Term",{attrs:{term:"activity"}}),t._v(" implementations. The "),n("Term",{attrs:{term:"worker"}}),t._v(" polls the "),n("em",[t._v("Cadence service")]),t._v(" for "),n("Term",{attrs:{term:"task",show:"tasks"}}),t._v(", performs those "),n("Term",{attrs:{term:"task",show:"tasks"}}),t._v(", and communicates "),n("Term",{attrs:{term:"task"}}),t._v(" execution results back to the "),n("em",[t._v("Cadence service")]),t._v(". "),n("Term",{attrs:{term:"worker",show:"Worker"}}),t._v(" services are developed, deployed, and operated by Cadence customers.")],1),t._v(" "),n("p",[t._v("You can run a Cadence "),n("Term",{attrs:{term:"worker"}}),t._v(" in a new or an existing service. 
Use the framework APIs to start the Cadence "),n("Term",{attrs:{term:"worker"}}),t._v(" and link in all "),n("Term",{attrs:{term:"activity"}}),t._v(" and "),n("Term",{attrs:{term:"workflow"}}),t._v(" implementations that you require the service to execute.")],1),t._v(" "),n("p",[t._v("The following is an example worker service utilising tchannel, one of the two transport protocols supported by Cadence.")]),t._v(" "),n("div",{staticClass:"language-go extra-class"},[n("pre",{pre:!0,attrs:{class:"language-go"}},[n("code",[n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("package")]),t._v(" main\n\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/.gen/go/cadence"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/worker"')]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"github.com/uber-go/tally"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/zap"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/zap/zapcore"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc/api/transport"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc/transport/tchannel"')]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" HostPort "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"127.0.0.1:7933"')]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" Domain "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"SimpleDomain"')]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" TaskListName "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"SimpleWorker"')]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" ClientName "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"SimpleWorker"')]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" CadenceService "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"cadence-frontend"')]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("main")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("startWorker")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("buildLogger")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("buildCadenceClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("buildLogger")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("zap"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Logger "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n config "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" zap"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDevelopmentConfig")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n config"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Level"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("SetLevel")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("zapcore"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("InfoLevel"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),t._v("\n logger"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" config"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Build")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("panic")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to setup logger"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" logger\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("buildCadenceClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" workflowserviceclient"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Interface "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n ch"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" tchannel"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewChannelTransport")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("tchannel"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("ServiceName")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ClientName"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("panic")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to setup tchannel"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n dispatcher "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDispatcher")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Config"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n Name"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" ClientName"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n Outbounds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Outbounds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n CadenceService"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Unary"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" ch"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewSingleOutbound")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("HostPort"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" dispatcher"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Start")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("panic")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to start dispatcher"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" workflowserviceclient"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("New")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("dispatcher"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("ClientConfig")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("CadenceService"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("startWorker")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("logger "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("zap"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Logger"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" service workflowserviceclient"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Interface"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// TaskListName identifies set of client workflows, activities, and workers.")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// It could be your group or client or application name.")]),t._v("\n workerOptions "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" worker"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Options"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n Logger"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" logger"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n MetricsScope"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" tally"),n("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewTestScope")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("TaskListName"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("map")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),n("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),n("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n worker "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" worker"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("New")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n service"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n Domain"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n TaskListName"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n workerOptions"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" worker"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Start")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("panic")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to start worker"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n logger"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Info")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Started Worker."')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" zap"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("String")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"worker"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" TaskListName"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),n("p",[t._v("The other 
supported transport protocol is gRPC. A worker service using gRPC can be set up in similar fashion, but the "),n("code",[t._v("buildCadenceClient")]),t._v(" function will need the following alterations, and some of the imported packages need to change.")]),t._v(" "),n("div",{staticClass:"language-go extra-class"},[n("pre",{pre:!0,attrs:{class:"language-go"}},[n("code",[t._v("\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/.gen/go/cadence"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/compatibility"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/worker"')]),t._v("\n\n apiv1 "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"github.com/uber/cadence-idl/go/proto/api/v1"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"github.com/uber-go/tally"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/zap"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/zap/zapcore"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc/transport/grpc"')]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("buildCadenceClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" workflowserviceclient"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Interface "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\n dispatcher "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDispatcher")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Config"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n Name"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" ClientName"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n Outbounds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Outbounds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n CadenceService"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Unary"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" grpc"),n("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewTransport")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewSingleOutbound")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("HostPort"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" dispatcher"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Start")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("panic")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to start dispatcher"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n clientConfig "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" dispatcher"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("ClientConfig")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("CadenceService"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" compatibility"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewThrift2ProtoAdapter")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDomainAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewWorkflowAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n 
apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewWorkerAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewVisibilityAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),n("p",[t._v("Note also that the "),n("code",[t._v("HostPort")]),t._v(" variable must be changed to target the gRPC listener port of the Cadence cluster (typically, 7833).")]),t._v(" "),n("p",[t._v("Finally, gRPC can also support TLS connections between Go clients and the Cadence server. This requires the following alterations to the imported packages, and the "),n("code",[t._v("buildCadenceClient")]),t._v(" function. Note that this also requires you replace "),n("code",[t._v('"path/to/cert/file"')]),t._v(" in the function with a path to a valid certificate file matching the TLS configuration of the Cadence server.")]),t._v(" "),n("div",{staticClass:"language-go extra-class"},[n("pre",{pre:!0,attrs:{class:"language-go"}},[n("code",[t._v("\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"fmt"')]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/.gen/go/cadence"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/compatibility"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/worker"')]),t._v("\n\n apiv1 "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"github.com/uber/cadence-idl/go/proto/api/v1"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"github.com/uber-go/tally"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/zap"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/zap/zapcore"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc/transport/grpc"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc/peer"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/yarpc/peer/hostport"')]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"crypto/tls"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"crypto/x509"')]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"io/ioutil"')]),t._v("\n\n "),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"google.golang.org/grpc/credentials"')]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("\n\n"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("buildCadenceClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" workflowserviceclient"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Interface "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n grpcTransport "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" grpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewTransport")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" dialOptions "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),t._v("grpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("DialOption\n \n caCert"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" ioutil"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("ReadFile")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"/path/to/cert/file"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n fmt"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Printf")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to load server CA certificate: %v"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" zap"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Error")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("err"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n caCertPool "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" x509"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewCertPool")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token 
operator"}},[t._v("!")]),t._v("caCertPool"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("AppendCertsFromPEM")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("caCert"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n fmt"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Errorf")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to add server CA\'s certificate"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n tlsConfig "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" tls"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Config"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n RootCAs"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" caCertPool"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n creds "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" credentials"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewTLS")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("tlsConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n dialOptions "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("append")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("dialOptions"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" grpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("DialerCredentials")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("creds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n \n dialer "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" grpcTransport"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDialer")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("dialOptions"),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n outbound "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" grpcTransport"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewOutbound")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n peer"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewSingle")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("hostport"),n("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("PeerIdentifier")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("HostPort"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" dialer"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n \n dispatcher "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDispatcher")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Config"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n Name"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" ClientName"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n Outbounds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" yarpc"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Outbounds"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n CadenceService"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("Unary"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" outbound"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" dispatcher"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("Start")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" err "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("panic")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),n("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Failed to start dispatcher"')]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n \n clientConfig "),n("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" dispatcher"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("ClientConfig")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("CadenceService"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n \n 
"),n("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" compatibility"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewThrift2ProtoAdapter")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewDomainAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewWorkflowAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewWorkerAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n apiv1"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),n("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewVisibilityAPIYARPCClient")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("clientConfig"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),n("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])])])}),[],!1,null,null,null);n.default=e.exports}}]); \ No newline at end of file diff --git a/assets/js/76.437c2c65.js b/assets/js/76.90dcf880.js similarity index 99% rename from assets/js/76.437c2c65.js rename to assets/js/76.90dcf880.js index 8e80799df..8d2969680 100644 --- a/assets/js/76.437c2c65.js +++ b/assets/js/76.90dcf880.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[76],{381:function(t,e,n){"use strict";n.r(e);var s=n(0),a=Object(s.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"starting-workflows"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#starting-workflows"}},[t._v("#")]),t._v(" Starting workflows")]),t._v(" "),e("p",[t._v("Starting workflows can be done from any service that can send requests to\nthe Cadence server. There is no requirement for workflows to be started from the\nworker services.")]),t._v(" "),e("p",[t._v("Generally workflows can either be started using a direct reference to the\nworkflow code, or by referring to the registered name of the function. 
In\n"),e("RouterLink",{attrs:{to:"/docs/go-client/create-workflows/#registration"}},[t._v("Workflow Registration")]),t._v(" we show\nhow to register the workflows.")],1),t._v(" "),e("h2",{attrs:{id:"starting-a-workflow"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#starting-a-workflow"}},[t._v("#")]),t._v(" Starting a workflow")]),t._v(" "),e("p",[t._v("After "),e("a",{attrs:{href:"/docs/go-client/create-workflows"}},[t._v("creating a workflow")]),t._v(" we can start it.\nThis can be done "),e("RouterLink",{attrs:{to:"/docs/cli/#start-workflow"}},[t._v("from the cli")]),t._v(", but typically\nwe want to start workflow programmatically e.g. from an http handler. We can do\nthis using the\n"),e("a",{attrs:{href:"https://pkg.go.dev/go.uber.org/cadence/client#Client",target:"_blank",rel:"noopener noreferrer"}},[e("code",[t._v("client.StartWorkflow")]),e("OutboundLink")],1),t._v("\nfunction:")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/client"')]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" cadenceClient client"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Client \n# Initialize cadenceClient\n\ncadenceClient"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("StartWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n client"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("StartWorkflowOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n TaskList"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"workflow-task-list"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n ExecutionStartToCloseTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("10")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Second"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n WorkflowFunc"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n workflowArg1"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n workflowArg2"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n workflowArg3"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[t._v("The will start the workflow defined in the function "),e("code",[t._v("WorkflowFunc")]),t._v(", note that\nfor named workflows "),e("code",[t._v("WorkflowFunc")]),t._v(" could be replaced by the name e.g.\n"),e("code",[t._v('"WorkflowFuncName"')]),t._v(".")]),t._v(" "),e("p",[e("code",[t._v("workflowArg1")]),t._v(", 
"),e("code",[t._v("workflowArg2")]),t._v(", "),e("code",[t._v("workflowArg3")]),t._v(" are arguments to the workflow, as\nspecified in "),e("code",[t._v("WorkflowFunc")]),t._v(", note that the arguments needs to be "),e("em",[t._v("serializable")]),t._v(".")]),t._v(" "),e("h2",{attrs:{id:"jitter-start-and-batches-of-workflows"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#jitter-start-and-batches-of-workflows"}},[t._v("#")]),t._v(" Jitter Start and Batches of Workflows")]),t._v(" "),e("p",[t._v("Below we list all the "),e("code",[t._v("startWorkflowOptions")]),t._v(", however a particularly useful option is\n"),e("code",[t._v("JitterStart")]),t._v(".")]),t._v(" "),e("p",[t._v("Starting many workflows at the same time will have Cadence trying to schedule\nall the workflows immediately. This can result in overloading Cadence and the\ndatabase backing Cadence, as well as the workers processing the workflows.")]),t._v(" "),e("p",[t._v("This is especially bad when the workflow starts comes in batches, such as an end\nof month load. These sudden loads can lead to both Cadence and the workers\nneeding to immediately scale up. Scaling up often takes some time, causing\nqueues in Cadence, delaying the execution of all workflows, potentially causing\nworkflows to timeout.")]),t._v(" "),e("p",[t._v("To solve this we can start our workflows with "),e("code",[t._v("JitterStart")]),t._v(". "),e("code",[t._v("JitterStart")]),t._v(" will start\nthe workflow at a random point between "),e("code",[t._v("now")]),t._v(" and "),e("code",[t._v("now + JitterStart")]),t._v(", so if we\ne.g. start 1000 workflows at 12:00 AM with a "),e("code",[t._v("JitterStart")]),t._v(" of 6 hours, the\nworkflows will be randomly started between 12:00 AM and 6:00 PM.")]),t._v(" "),e("p",[t._v("This makes the sudden load of 1000 workflows much more manageable.")]),t._v(" "),e("p",[t._v("For many batch-like workloads a random delay is completely acceptable as the\nbatch just needs to be processed e.g. 
before the end of the day.")]),t._v(" "),e("p",[t._v("Adding a JitterStart of 6 hours in the example above is as simple as adding")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[t._v("JitterStart"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("6")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Hour"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n")])])]),e("p",[t._v("to the options like so,")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("import")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"go.uber.org/cadence/client"')]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" cadenceClient client"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Client\n# Initialize cadenceClient\n\ncadenceClient"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("StartWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("\n ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n client"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("StartWorkflowOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n TaskList"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"workflow-task-list"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n ExecutionStartToCloseTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("10")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Second"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n JitterStart"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("6")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Hour"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Added JitterStart")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n WorkflowFunc"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n workflowArg1"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n workflowArg2"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n workflowArg3"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[t._v("now the workflow will start at a random point between now and six hours from 
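To make the batch scenario concrete, here is a sketch that kicks off a whole batch with the same jitter window; `BatchItemWorkflow` and `items` are hypothetical names, and `cadenceClient` is assumed to be initialized as above:

```go
// Spread a large batch of workflow starts over a 6 hour window.
for _, item := range items {
    _, err := cadenceClient.StartWorkflow(
        ctx,
        client.StartWorkflowOptions{
            TaskList:                     "workflow-task-list",
            ExecutionStartToCloseTimeout: 10 * time.Second,
            JitterStart:                  6 * time.Hour,
        },
        BatchItemWorkflow, // hypothetical registered workflow
        item,
    )
    if err != nil {
        // Log and continue; the remaining starts are independent.
        log.Printf("failed to start workflow for item %v: %v", item, err)
    }
}
```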
now.")]),t._v(" "),e("h2",{attrs:{id:"startworkflowoptions"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#startworkflowoptions"}},[t._v("#")]),t._v(" StartWorkflowOptions")]),t._v(" "),e("p",[t._v("The\n"),e("a",{attrs:{href:"https://pkg.go.dev/go.uber.org/cadence/internal#StartWorkflowOptions",target:"_blank",rel:"noopener noreferrer"}},[t._v("client.StartWorkflowOptions"),e("OutboundLink")],1),t._v("\nspecifies the behavior of this particular workflow. The invocation above only\nspecifies the two mandatory options; "),e("code",[t._v("TaskList")]),t._v(" and\n"),e("code",[t._v("ExecutionStartToCloseTimeout")]),t._v(", all the options are described in the "),e("a",{attrs:{href:"https://pkg.go.dev/go.uber.org/cadence/internal#StartWorkflowOptions",target:"_blank",rel:"noopener noreferrer"}},[t._v("inline\ndocumentation"),e("OutboundLink")],1),t._v(":")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("type")]),t._v(" StartWorkflowOptions "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("struct")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ID - The business identifier of the workflow execution.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Optional: defaulted to a uuid.")]),t._v("\n\tID "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// TaskList - The decisions of the workflow are scheduled on this queue.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// This is also the default task list on which activities are scheduled. The workflow author can choose")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// to override this using activity options.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Mandatory: No default.")]),t._v("\n\tTaskList "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ExecutionStartToCloseTimeout - The timeout for duration of workflow execution.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// The resolution is seconds.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Mandatory: No default.")]),t._v("\n\tExecutionStartToCloseTimeout time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Duration\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// DecisionTaskStartToCloseTimeout - The timeout for processing decision task from the time the worker")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// pulled this task. 
If a decision task is lost, it is retried after this timeout.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// The resolution is seconds.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Optional: defaulted to 10 secs.")]),t._v("\n\tDecisionTaskStartToCloseTimeout time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Duration\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// WorkflowIDReusePolicy - Whether server allow reuse of workflow ID, can be useful")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// for dedup logic if set to WorkflowIdReusePolicyRejectDuplicate.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Optional: defaulted to WorkflowIDReusePolicyAllowDuplicateFailedOnly.")]),t._v("\n\tWorkflowIDReusePolicy WorkflowIDReusePolicy\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// RetryPolicy - Optional retry policy for workflow. If a retry policy is specified, in case of workflow failure")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// server will start new workflow execution if needed based on the retry policy.")]),t._v("\n\tRetryPolicy "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("RetryPolicy\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// as a cron based on the schedule. The scheduling will be based on UTC time. Schedule for next run only happen")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// after the current run is completed/failed/timeout. If a RetryPolicy is also supplied, and the workflow failed")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// or timeout, the workflow will be retried based on the retry policy. While the workflow is retrying, it won't")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// schedule its next run. If next schedule is due while workflow is running (or retrying), then it will skip that")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// schedule. 
Cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// The cron spec is as following:")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ┌───────────── minute (0 - 59)")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ ┌───────────── hour (0 - 23)")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ ┌───────────── day of the month (1 - 31)")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ ┌───────────── month (1 - 12)")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ │ │")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ │ │")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// * * * * *")]),t._v("\n\tCronSchedule "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Memo - Optional non-indexed info that will be shown in list workflow.")]),t._v("\n\tMemo "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("map")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// SearchAttributes - Optional indexed info that can be used in query of List/Scan/Count workflow APIs (only")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// supported when Cadence server is using ElasticSearch). The key and value type must be registered on Cadence server side.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Use GetSearchAttributes API to get valid key and corresponding value type.")]),t._v("\n\tSearchAttributes "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("map")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// DelayStartSeconds - Seconds to delay the workflow start")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// The resolution is seconds.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Optional: defaulted to 0 seconds")]),t._v("\n\tDelayStart time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Duration\n\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// JitterStart - Seconds to jitter the workflow start. 
For example, if set to 10, the workflow will start some time between 0-10 seconds.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// This works with CronSchedule and with DelayStart.")]),t._v("\n\t"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Optional: defaulted to 0 seconds")]),t._v("\n\tJitterStart time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Duration\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])])])}),[],!1,null,null,null);e.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[76],{384:function(t,e,n){"use strict";n.r(e);var s=n(0),a=Object(s.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"starting-workflows"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#starting-workflows"}},[t._v("#")]),t._v(" Starting workflows")]),t._v(" "),e("p",[t._v("Starting workflows can be done from any service that can send requests to\nthe Cadence server. There is no requirement for workflows to be started from the\nworker services.")]),t._v(" "),e("p",[t._v("Generally workflows can either be started using a direct reference to the\nworkflow code, or by referring to the registered name of the function. In\n"),e("RouterLink",{attrs:{to:"/docs/go-client/create-workflows/#registration"}},[t._v("Workflow Registration")]),t._v(" we show\nhow to register the workflows.")],1),t._v(" "),e("h2",{attrs:{id:"starting-a-workflow"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#starting-a-workflow"}},[t._v("#")]),t._v(" Starting a workflow")]),t._v(" "),e("p",[t._v("After "),e("a",{attrs:{href:"/docs/go-client/create-workflows"}},[t._v("creating a workflow")]),t._v(" we can start it.\nThis can be done "),e("RouterLink",{attrs:{to:"/docs/cli/#start-workflow"}},[t._v("from the cli")]),t._v(", but typically\nwe want to start workflow programmatically e.g. from an http handler. 
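These options can be combined as needed. As an illustration (not part of the original invocation above), a start request that pins a business ID for deduplication, runs on an hourly cron schedule, and attaches a memo might look like this; all values shown are example placeholders:

```go
cadenceClient.StartWorkflow(
    ctx,
    client.StartWorkflowOptions{
        ID:                           "order-12345", // example business identifier
        TaskList:                     "workflow-task-list",
        ExecutionStartToCloseTimeout: 10 * time.Second,
        CronSchedule:                 "0 * * * *", // top of every hour, UTC
        Memo: map[string]interface{}{
            "initiator": "billing-batch", // non-indexed, visible in list views
        },
    },
    WorkflowFunc,
    workflowArg1,
)
```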
	// JitterStart - Duration to jitter the workflow start. For example, if set to 10 seconds, the workflow will
	// start some time between 0 and 10 seconds after its scheduled start.
	// This works with CronSchedule and with DelayStart.
	// Optional: defaults to 0 seconds.
	JitterStart time.Duration
}
```
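To make the options above concrete, here is a minimal sketch of starting a cron workflow with a few of these fields set. The client construction, workflow ID, task list name, and the workflow function passed in are illustrative assumptions, not part of the options reference.

```go
package main

import (
	"context"
	"log"
	"time"

	"go.uber.org/cadence/client"
)

// startCronSample is a sketch only: cadenceClient construction and the workflow
// function (registered elsewhere on "sample-task-list") are assumed.
func startCronSample(ctx context.Context, cadenceClient client.Client, cronWorkflowFunc interface{}) {
	workflowOptions := client.StartWorkflowOptions{
		ID:                           "cron-sample",      // assumed workflow ID
		TaskList:                     "sample-task-list", // mandatory
		ExecutionStartToCloseTimeout: time.Hour,          // mandatory
		CronSchedule:                 "0 * * * *",        // top of every hour, UTC
	}
	execution, err := cadenceClient.StartWorkflow(ctx, workflowOptions, cronWorkflowFunc)
	if err != nil {
		log.Fatalf("failed to start workflow: %v", err)
	}
	log.Printf("started workflow: WorkflowID=%s, RunID=%s", execution.ID, execution.RunID)
}
```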
# Activity overview

An activity is the implementation of a particular task in the business logic.

Activities are implemented as functions. Data can be passed directly to an activity via function parameters. The parameters can be either basic types or structs, with the only requirement being that the parameters must be serializable. Though it is not required, we recommend that the first parameter of an activity function is of type `context.Context`, in order to allow the activity to interact with other framework methods. The function must return an `error` value, and can optionally return a result value. The result value can be either a basic type or a struct, with the only requirement being that it is serializable.

The values passed to activities through invocation parameters or returned through the result value are recorded in the execution history. The entire execution history is transferred from the Cadence service to workflow workers with every event that the workflow logic needs to process. A large execution history can thus adversely impact the performance of your workflow. Therefore, be mindful of the amount of data you transfer via activity invocation parameters or return values. Otherwise, no additional limitations exist on activity implementations.

## Overview

The following example demonstrates a simple activity that accepts a string parameter, appends a word to it, and then returns a result.

```go
package simple

import (
	"context"

	"go.uber.org/cadence/activity"
	"go.uber.org/zap"
)

func init() {
	activity.Register(SimpleActivity)
}

// SimpleActivity is a sample Cadence activity function that takes one parameter and
// returns a string containing the parameter value.
func SimpleActivity(ctx context.Context, value string) (string, error) {
	activity.GetLogger(ctx).Info("SimpleActivity called.", zap.String("Value", value))
	return "Processed: " + value, nil
}
```

Let's take a look at each component of this activity.

### Declaration

In the Cadence programming model, an activity is implemented with a function. The function declaration specifies the parameters the activity accepts as well as any values it might return. An activity function can take zero or many activity-specific parameters and can return one or two values. It must always return at least an error value. The activity function can accept as parameters and return as results any serializable type.

`func SimpleActivity(ctx context.Context, value string) (string, error)`

The first parameter to the function is `context.Context`. This is an optional parameter and can be omitted. This parameter is the standard Go context. The second string parameter is a custom activity-specific parameter that can be used to pass data into the activity on start. An activity can have one or more such parameters. All parameters to an activity function must be serializable, which essentially means that parameters can't be channels, functions, variadics, or unsafe pointers. The activity declares two return values: `string` and `error`. The string return value is used to return the result of the activity.
The error return value is used to indicate that an error was encountered during execution.

### Implementation

You can write activity implementation code in the same way that you would any other Go service code. Additionally, you can use the usual loggers and metrics controllers, and the standard Go concurrency constructs.

#### Heart Beating

For long-running activities, Cadence provides an API for the activity code to report both liveness and progress back to the Cadence managed service.

```go
progress := 0
for hasWork {
	// Send heartbeat message to the server.
	cadence.RecordActivityHeartbeat(ctx, progress)
	// Do some work.
	...
	progress++
}
```

When an activity times out due to a missed heartbeat, the last value of the details (`progress` in the above sample) is returned from the `cadence.ExecuteActivity` function as the details field of `TimeoutError` with `TimeoutType_HEARTBEAT`.

New **auto heartbeat** option in the [Cadence Go Client 0.17.0 release](https://github.com/uber-go/cadence-client/releases/tag/v0.17.0): in case you don't need to report progress, but still want to report the liveness of your worker through heartbeating for your long-running activities, there is a new auto-heartbeat option that you can enable when you register your activity. When this option is enabled, the Cadence library will do the heartbeat for you in the background.

```go
	RegisterActivityOptions struct {
		...
		// Automatically send heartbeats for this activity at an interval that is less than the HeartbeatTimeout.
		// This option has no effect if the activity is executed with a HeartbeatTimeout of 0.
		// Default: false
		EnableAutoHeartbeat bool
	}
```

You can also heartbeat an activity from an external source:

```go
// Instantiate a Cadence service client (construction details elided).
var cadenceClient client.Client = client.NewClient(...)

// Record heartbeat.
err := cadenceClient.RecordActivityHeartbeat(ctx, taskToken, details)
```

The parameters of the `RecordActivityHeartbeat` function are:

- `taskToken`: The value of the binary `TaskToken` field of the `ActivityInfo` struct retrieved inside the activity.
- `details`: The serializable payload containing progress information.
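Heartbeat details are also made available to the next attempt when an activity is retried. The following is a minimal sketch of resuming from recorded progress; the `processItem` helper and the fixed item count are illustrative assumptions, while `HasHeartbeatDetails`, `GetHeartbeatDetails`, and `RecordHeartbeat` come from the `go.uber.org/cadence/activity` package.

```go
import (
	"context"

	"go.uber.org/cadence/activity"
)

// processItem stands in for a real unit of work (an assumption for this sketch).
func processItem(i int) { /* ... */ }

// BatchActivity resumes from the last heartbeated index when retried.
func BatchActivity(ctx context.Context) error {
	startIndex := 0
	// On a retry, the previous attempt's heartbeat details are available here.
	if activity.HasHeartbeatDetails(ctx) {
		var lastIndex int
		if err := activity.GetHeartbeatDetails(ctx, &lastIndex); err == nil {
			startIndex = lastIndex + 1
		}
	}
	for i := startIndex; i < 1000; i++ {
		select {
		case <-ctx.Done():
			// Cancellation (see below) is delivered via the heartbeat; stop cleanly.
			return ctx.Err()
		default:
		}
		processItem(i)
		activity.RecordHeartbeat(ctx, i) // record progress as heartbeat details
	}
	return nil
}
```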
An "),e("Term",{attrs:{term:"activity"}}),t._v(" can use that\nto perform any necessary cleanup and abort its execution. Cancellation is only delivered to "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v("\nthat call "),e("code",[t._v("RecordActivityHeartbeat")]),t._v(".")],1),t._v(" "),e("h3",{attrs:{id:"registration"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#registration"}},[t._v("#")]),t._v(" Registration")]),t._v(" "),e("p",[t._v("To make the "),e("Term",{attrs:{term:"activity"}}),t._v(" visible to the "),e("Term",{attrs:{term:"worker"}}),t._v(" process hosting it, the "),e("Term",{attrs:{term:"activity"}}),t._v(" must be registered via a\ncall to "),e("code",[t._v("activity.Register")]),t._v(".")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("init")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n activity"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Register")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("SimpleActivity"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("This call creates an in-memory mapping inside the "),e("Term",{attrs:{term:"worker"}}),t._v(" process between the fully qualified function\nname and the implementation. If a "),e("Term",{attrs:{term:"worker"}}),t._v(" receives a request to start an "),e("Term",{attrs:{term:"activity"}}),t._v(" execution for an\n"),e("Term",{attrs:{term:"activity"}}),t._v(" type it does not know, it will fail that request.")],1),t._v(" "),e("h2",{attrs:{id:"failing-an-activity"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#failing-an-activity"}},[t._v("#")]),t._v(" Failing an Activity")]),t._v(" "),e("p",[t._v("To mark an "),e("Term",{attrs:{term:"activity"}}),t._v(" as failed, the "),e("Term",{attrs:{term:"activity"}}),t._v(" function must return an error via the "),e("code",[t._v("error")]),t._v(" return value.")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[77],{382:function(t,e,a){"use strict";a.r(e);var s=a(0),r=Object(s.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"activity-overview"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#activity-overview"}},[t._v("#")]),t._v(" Activity overview")]),t._v(" "),e("p",[t._v("An "),e("Term",{attrs:{term:"activity"}}),t._v(" is the implementation of a particular "),e("Term",{attrs:{term:"task"}}),t._v(" in the business logic.")],1),t._v(" "),e("p",[e("Term",{attrs:{term:"activity",show:"Activities"}}),t._v(" are implemented as functions. Data can be passed directly to an "),e("Term",{attrs:{term:"activity"}}),t._v(" via function\nparameters. The parameters can be either basic types or structs, with the only requirement being that\nthe parameters must be serializable. 
If a "),e("Term",{attrs:{term:"worker"}}),t._v(" receives a request to start an "),e("Term",{attrs:{term:"activity"}}),t._v(" execution for an\n"),e("Term",{attrs:{term:"activity"}}),t._v(" type it does not know, it will fail that request.")],1),t._v(" "),e("h2",{attrs:{id:"failing-an-activity"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#failing-an-activity"}},[t._v("#")]),t._v(" Failing an Activity")]),t._v(" "),e("p",[t._v("To mark an "),e("Term",{attrs:{term:"activity"}}),t._v(" as failed, the "),e("Term",{attrs:{term:"activity"}}),t._v(" function must return an error via the "),e("code",[t._v("error")]),t._v(" return value.")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/82.6b6762d6.js b/assets/js/82.dc2d1182.js similarity index 99% rename from assets/js/82.6b6762d6.js rename to assets/js/82.dc2d1182.js index 0b28bf5ce..114625bbe 100644 --- a/assets/js/82.6b6762d6.js +++ b/assets/js/82.dc2d1182.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[82],{388:function(t,a,s){"use strict";s.r(a);var n=s(0),e=Object(n.a)({},(function(){var t=this,a=t._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[a("h1",{attrs:{id:"signals"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signals"}},[t._v("#")]),t._v(" Signals")]),t._v(" "),a("p",[a("Term",{attrs:{term:"signal",show:"Signals"}}),t._v(" provide a mechanism to send data directly to a running "),a("Term",{attrs:{term:"workflow"}}),t._v(". Previously, you had\ntwo options for passing data to the "),a("Term",{attrs:{term:"workflow"}}),t._v(" implementation:")],1),t._v(" "),a("ul",[a("li",[t._v("Via start parameters")]),t._v(" "),a("li",[t._v("As return values from "),a("Term",{attrs:{term:"activity",show:"activities"}})],1)]),t._v(" "),a("p",[t._v("With start parameters, we could only pass in values before "),a("Term",{attrs:{term:"workflow_execution"}}),t._v(" began.")],1),t._v(" "),a("p",[t._v("Return values from "),a("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" allowed us to pass information to a running "),a("Term",{attrs:{term:"workflow"}}),t._v(", but this\napproach comes with its own complications. One major drawback is reliance on polling. This means\nthat the data needs to be stored in a third-party location until it's ready to be picked up by\nthe "),a("Term",{attrs:{term:"activity"}}),t._v(". Further, the lifecycle of this "),a("Term",{attrs:{term:"activity"}}),t._v(" requires management, and the "),a("Term",{attrs:{term:"activity"}}),t._v("\nrequires manual restart if it fails before acquiring the data.")],1),t._v(" "),a("p",[a("Term",{attrs:{term:"signal",show:"Signals"}}),t._v(", on the other hand, provide a fully asynchronous and durable mechanism for providing data to\na running "),a("Term",{attrs:{term:"workflow"}}),t._v(". When a "),a("Term",{attrs:{term:"signal"}}),t._v(" is received for a running "),a("Term",{attrs:{term:"workflow"}}),t._v(", Cadence persists the "),a("Term",{attrs:{term:"event"}}),t._v("\nand the payload in the "),a("Term",{attrs:{term:"workflow"}}),t._v(" history. The "),a("Term",{attrs:{term:"workflow"}}),t._v(" can then process the "),a("Term",{attrs:{term:"signal"}}),t._v(" at any time\nafterwards without the risk of losing the information. 
The "),a("Term",{attrs:{term:"workflow"}}),t._v(" also has the option to stop\nexecution by blocking on a "),a("Term",{attrs:{term:"signal"}}),t._v(" channel.")],1),t._v(" "),a("div",{staticClass:"language-go extra-class"},[a("pre",{pre:!0,attrs:{class:"language-go"}},[a("code",[a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" signalVal "),a("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\nsignalChan "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetSignalChannel")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" signalName"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\ns "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewSelector")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\ns"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("AddReceive")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("signalChan"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("c workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Channel"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" more "),a("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("bool")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n c"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("Receive")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("signalVal"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n workflow"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetLogger")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("Info")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"Received signal!"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" zap"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("String")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"signal"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" 
signalName"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" zap"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("String")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"value"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" signalVal"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\ns"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("Select")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n"),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("len")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("signalVal"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v(">")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[t._v("0")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&&")]),t._v(" signalVal "),a("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"SOME_VALUE"')]),t._v(" "),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),a("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" errors"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),a("span",{pre:!0,attrs:{class:"token function"}},[t._v("New")]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),a("span",{pre:!0,attrs:{class:"token string"}},[t._v('"signalVal"')]),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),a("p",[t._v("In the example above, the "),a("Term",{attrs:{term:"workflow"}}),t._v(" code uses "),a("strong",[t._v("workflow.GetSignalChannel")]),t._v(" to open a\n"),a("strong",[t._v("workflow.Channel")]),t._v(" for the named "),a("Term",{attrs:{term:"signal"}}),t._v(". We then use a "),a("strong",[t._v("workflow.Selector")]),t._v(" to wait on this\nchannel and process the payload received with the "),a("Term",{attrs:{term:"signal"}}),t._v(".")],1),t._v(" "),a("h2",{attrs:{id:"signalwithstart"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signalwithstart"}},[t._v("#")]),t._v(" SignalWithStart")]),t._v(" "),a("p",[t._v("You may not know if a "),a("Term",{attrs:{term:"workflow"}}),t._v(" is running and can accept a "),a("Term",{attrs:{term:"signal"}}),t._v(". The\n"),a("a",{attrs:{href:"https://godoc.org/go.uber.org/cadence/client#Client",target:"_blank",rel:"noopener noreferrer"}},[t._v("client.SignalWithStartWorkflow"),a("OutboundLink")],1),t._v(" API\nallows you to send a "),a("Term",{attrs:{term:"signal"}}),t._v(" to the current "),a("Term",{attrs:{term:"workflow"}}),t._v(" instance if one exists or to create a new\nrun and then send the "),a("Term",{attrs:{term:"signal"}}),t._v(". 
`SignalWithStartWorkflow` therefore doesn't take a run ID as a parameter.
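A minimal usage sketch follows; the workflow ID, signal name, payload, and the workflow function passed in are illustrative assumptions, and an already-constructed client from `go.uber.org/cadence/client` is assumed.

```go
import (
	"context"
	"time"

	"go.uber.org/cadence/client"
)

// signalOrStart is a sketch: IDs, names, and myWorkflowFunc are illustrative.
func signalOrStart(ctx context.Context, cadenceClient client.Client, myWorkflowFunc interface{}, paymentInfo string) error {
	options := client.StartWorkflowOptions{
		ID:                           "order-12345",
		TaskList:                     "sample-task-list",
		ExecutionStartToCloseTimeout: time.Hour,
	}
	// If workflow "order-12345" is running, it simply receives the signal;
	// otherwise a new run is started and then signalled. No run ID is needed.
	_, err := cadenceClient.SignalWithStartWorkflow(ctx,
		"order-12345",      // workflow ID
		"payment-received", // signal name
		paymentInfo,        // signal payload (must be serializable)
		options, myWorkflowFunc)
	return err
}
```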
# Continue as new

Workflows that need to rerun periodically could naively be implemented as a big **for** loop with a sleep, where the entire logic of the workflow is inside the body of the **for** loop. The problem with this approach is that the history for that workflow will keep growing to a point where it reaches the maximum size enforced by the service.

**ContinueAsNew** is the low-level construct that enables implementing such workflows without the risk of failures down the road. The operation atomically completes the current execution and starts a new execution of the workflow with the same **workflow ID**. The new execution will not carry over any history from the old execution.
To trigger this behavior, the "),e("Term",{attrs:{term:"workflow"}}),t._v(" function should\nterminate by returning the special "),e("strong",[t._v("ContinueAsNewError")]),t._v(" error:")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("SimpleWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" value "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewContinueAsNewError")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" SimpleWorkflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" value"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])])])}),[],!1,null,null,null);e.default=s.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[83],{388:function(t,e,r){"use strict";r.r(e);var n=r(0),s=Object(n.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"continue-as-new"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#continue-as-new"}},[t._v("#")]),t._v(" Continue as new")]),t._v(" "),e("p",[e("Term",{attrs:{term:"workflow",show:"Workflows"}}),t._v(" that need to rerun periodically could naively be implemented as a big "),e("strong",[t._v("for")]),t._v(" loop with\na sleep where the entire logic of the "),e("Term",{attrs:{term:"workflow"}}),t._v(" is inside the body of the "),e("strong",[t._v("for")]),t._v(" loop. The problem\nwith this approach is that the history for that "),e("Term",{attrs:{term:"workflow"}}),t._v(" will keep growing to a point where it\nreaches the maximum size enforced by the service.")],1),t._v(" "),e("p",[e("strong",[t._v("ContinueAsNew")]),t._v(" is the low level construct that enables implementing such "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" without the\nrisk of failures down the road. The operation atomically completes the current execution and starts\na new execution of the "),e("Term",{attrs:{term:"workflow"}}),t._v(" with the same "),e("strong",[e("Term",{attrs:{term:"workflow_ID"}})],1),t._v(". The new execution will not carry\nover any history from the old execution. 
To trigger this behavior, the "),e("Term",{attrs:{term:"workflow"}}),t._v(" function should\nterminate by returning the special "),e("strong",[t._v("ContinueAsNewError")]),t._v(" error:")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("SimpleWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" value "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewContinueAsNewError")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" SimpleWorkflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" value"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])])])}),[],!1,null,null,null);e.default=s.exports}}]); \ No newline at end of file diff --git a/assets/js/84.1793ee96.js b/assets/js/84.ab91ec58.js similarity index 98% rename from assets/js/84.1793ee96.js rename to assets/js/84.ab91ec58.js index c0f845bed..32ffa53bf 100644 --- a/assets/js/84.1793ee96.js +++ b/assets/js/84.ab91ec58.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[84],{389:function(t,e,n){"use strict";n.r(e);var s=n(0),a=Object(s.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"side-effect"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#side-effect"}},[t._v("#")]),t._v(" Side effect")]),t._v(" "),e("p",[e("code",[t._v("workflow.SideEffect")]),t._v(" is useful for short, nondeterministic code snippets, such as getting a random\nvalue or generating a UUID. It executes the provided function once and records its result into the\n"),e("Term",{attrs:{term:"workflow"}}),t._v(" history. "),e("code",[t._v("workflow.SideEffect")]),t._v(' does not re-execute upon replay, but instead returns the\nrecorded result. It can be seen as an "inline" '),e("Term",{attrs:{term:"activity"}}),t._v(". Something to note about "),e("code",[t._v("workflow.SideEffect")]),t._v("\nis that, unlike the Cadence guarantee of at-most-once execution for "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", there is no such\nguarantee with "),e("code",[t._v("workflow.SideEffect")]),t._v(". Under certain failure conditions, "),e("code",[t._v("workflow.SideEffect")]),t._v(" can\nend up executing a function more than once.")],1),t._v(" "),e("p",[t._v("The only way to fail "),e("code",[t._v("SideEffect")]),t._v(" is to panic, which causes a "),e("Term",{attrs:{term:"decision_task"}}),t._v(" failure. 
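// A minimal sketch, assuming a periodic-workflow use case, of how state such
// as an iteration counter survives ContinueAsNew: since the new execution
// carries over no history, the state must be passed through the workflow
// arguments. "PeriodicWorkflow" and the one-hour interval are illustrative.
package main

import (
	"time"

	"go.uber.org/cadence/workflow"
)

func PeriodicWorkflow(ctx workflow.Context, iteration int) error {
	// ... perform one unit of periodic work here ...

	if err := workflow.Sleep(ctx, time.Hour); err != nil {
		return err
	}
	// Atomically complete this run and start a new one whose argument is the
	// updated counter; history starts empty in the new run.
	return workflow.NewContinueAsNewError(ctx, PeriodicWorkflow, iteration+1)
}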
After the\ntimeout, Cadence reschedules and then re-executes the "),e("Term",{attrs:{term:"decision_task"}}),t._v(", giving "),e("code",[t._v("SideEffect")]),t._v(" another chance\nto succeed. Do not return any data from "),e("code",[t._v("SideEffect")]),t._v(" other than through its recorded return value.")],1),t._v(" "),e("p",[t._v("The following sample demonstrates how to use "),e("code",[t._v("SideEffect")]),t._v(":")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[t._v("encodedRandom "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("SideEffect")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx cadence"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" rand"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Intn")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" random "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("int")]),t._v("\nencodedRandom"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("random"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" random "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("<")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("50")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("else")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])])])}),[],!1,null,null,null);e.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[84],{390:function(t,e,n){"use strict";n.r(e);var s=n(0),a=Object(s.a)({},(function(){var t=this,e=t._self._c;return 
e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"side-effect"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#side-effect"}},[t._v("#")]),t._v(" Side effect")]),t._v(" "),e("p",[e("code",[t._v("workflow.SideEffect")]),t._v(" is useful for short, nondeterministic code snippets, such as getting a random\nvalue or generating a UUID. It executes the provided function once and records its result into the\n"),e("Term",{attrs:{term:"workflow"}}),t._v(" history. "),e("code",[t._v("workflow.SideEffect")]),t._v(' does not re-execute upon replay, but instead returns the\nrecorded result. It can be seen as an "inline" '),e("Term",{attrs:{term:"activity"}}),t._v(". Something to note about "),e("code",[t._v("workflow.SideEffect")]),t._v("\nis that, unlike the Cadence guarantee of at-most-once execution for "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", there is no such\nguarantee with "),e("code",[t._v("workflow.SideEffect")]),t._v(". Under certain failure conditions, "),e("code",[t._v("workflow.SideEffect")]),t._v(" can\nend up executing a function more than once.")],1),t._v(" "),e("p",[t._v("The only way to fail "),e("code",[t._v("SideEffect")]),t._v(" is to panic, which causes a "),e("Term",{attrs:{term:"decision_task"}}),t._v(" failure. After the\ntimeout, Cadence reschedules and then re-executes the "),e("Term",{attrs:{term:"decision_task"}}),t._v(", giving "),e("code",[t._v("SideEffect")]),t._v(" another chance\nto succeed. Do not return any data from "),e("code",[t._v("SideEffect")]),t._v(" other than through its recorded return value.")],1),t._v(" "),e("p",[t._v("The following sample demonstrates how to use "),e("code",[t._v("SideEffect")]),t._v(":")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[t._v("encodedRandom "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("SideEffect")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx cadence"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("interface")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" rand"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Intn")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("100")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" random "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("int")]),t._v("\nencodedRandom"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token 
function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("random"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" random "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("<")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("50")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("else")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("...")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])])])}),[],!1,null,null,null);e.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/85.bf464ec7.js b/assets/js/85.ffeb9852.js similarity index 99% rename from assets/js/85.bf464ec7.js rename to assets/js/85.ffeb9852.js index 53fccdc26..40d37c848 100644 --- a/assets/js/85.bf464ec7.js +++ b/assets/js/85.ffeb9852.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[85],{390:function(t,e,s){"use strict";s.r(e);var r=s(0),a=Object(r.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"queries"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#queries"}},[t._v("#")]),t._v(" Queries")]),t._v(" "),e("p",[t._v("If a "),e("Term",{attrs:{term:"workflow_execution"}}),t._v(" has been stuck at a state for longer than an expected period of time, you\nmight want to "),e("Term",{attrs:{term:"query"}}),t._v(" the current call stack. You can use the Cadence "),e("Term",{attrs:{term:"CLI"}}),t._v(" to perform this "),e("Term",{attrs:{term:"query"}}),t._v(". For\nexample:")],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace")])]),t._v(" "),e("p",[t._v("This command uses "),e("code",[t._v("__stack_trace")]),t._v(", which is a built-in "),e("Term",{attrs:{term:"query"}}),t._v(" type supported by the Cadence client\nlibrary. You can add custom "),e("Term",{attrs:{term:"query"}}),t._v(" types to handle "),e("Term",{attrs:{term:"query",show:"queries"}}),t._v(" such as "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" the current state of a\n"),e("Term",{attrs:{term:"workflow"}}),t._v(", or "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" how many "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" the "),e("Term",{attrs:{term:"workflow"}}),t._v(" has completed. To do this, you need to set\nup a "),e("Term",{attrs:{term:"query"}}),t._v(" handler using "),e("code",[t._v("workflow.SetQueryHandler")]),t._v(".")],1),t._v(" "),e("p",[t._v("The handler must be a function that returns two values:")]),t._v(" "),e("ol",[e("li",[t._v("A serializable result")]),t._v(" "),e("li",[t._v("An error")])]),t._v(" "),e("p",[t._v("The handler function can receive any number of input parameters, but all input parameters must be\nserializable. 
The following sample code sets up a "),e("Term",{attrs:{term:"query"}}),t._v(" handler that handles the "),e("Term",{attrs:{term:"query"}}),t._v(" type of\n"),e("code",[t._v("current_state")]),t._v(":")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("MyWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" input "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"started"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// This could be any serializable struct.")]),t._v("\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("SetQueryHandler")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"current_state"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" currentState"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"failed to register query handler"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Your normal workflow code 
begins here, and you update the currentState as the code makes progress.")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"waiting timer"')]),t._v("\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewTimer")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Hour"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"timer failed"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"waiting activity"')]),t._v("\n ctx "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("WithActivityOptions")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" myActivityOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" MyActivity"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"my_input"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("{")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"activity failed"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"done"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("You can now "),e("Term",{attrs:{term:"query"}}),t._v(" "),e("code",[t._v("current_state")]),t._v(" by using the "),e("Term",{attrs:{term:"CLI",show:""}})],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state")])]),t._v(" "),e("p",[t._v("You can also issue a "),e("Term",{attrs:{term:"query"}}),t._v(" from code using the "),e("code",[t._v("QueryWorkflow()")]),t._v(" API on a Cadence client object.")],1),t._v(" "),e("h2",{attrs:{id:"consistent-query"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#consistent-query"}},[t._v("#")]),t._v(" Consistent Query")]),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" has two consistency levels, eventual and strong. Consider if you were to "),e("Term",{attrs:{term:"signal"}}),t._v(" a "),e("Term",{attrs:{term:"workflow"}}),t._v(" and then\nimmediately "),e("Term",{attrs:{term:"query"}}),t._v(" the "),e("Term",{attrs:{term:"workflow",show:""}})],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow signal -w my_workflow_id -r my_run_id -n signal_name -if ./input.json")])]),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state")])]),t._v(" "),e("p",[t._v("In this example if "),e("Term",{attrs:{term:"signal"}}),t._v(" were to change "),e("Term",{attrs:{term:"workflow"}}),t._v(" state, "),e("Term",{attrs:{term:"query"}}),t._v(" may or may not see that state update reflected\nin the "),e("Term",{attrs:{term:"query"}}),t._v(" result. This is what it means for "),e("Term",{attrs:{term:"query"}}),t._v(" to be eventually consistent.")],1),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" has another consistency level called strong consistency. A strongly consistent "),e("Term",{attrs:{term:"query"}}),t._v(" is guaranteed\nto be based on "),e("Term",{attrs:{term:"workflow"}}),t._v(" state which includes all "),e("Term",{attrs:{term:"event",show:"events"}}),t._v(" that came before the "),e("Term",{attrs:{term:"query"}}),t._v(" was issued. An "),e("Term",{attrs:{term:"event"}}),t._v("\nis considered to have come before a "),e("Term",{attrs:{term:"query"}}),t._v(" if the call creating the external "),e("Term",{attrs:{term:"event"}}),t._v(" returned success before\nthe "),e("Term",{attrs:{term:"query"}}),t._v(" was issued. 
External "),e("Term",{attrs:{term:"event",show:"events"}}),t._v(" which are created while the "),e("Term",{attrs:{term:"query"}}),t._v(" is outstanding may or may not\nbe reflected in the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state the "),e("Term",{attrs:{term:"query"}}),t._v(" result is based on.")],1),t._v(" "),e("p",[t._v("In order to run consistent "),e("Term",{attrs:{term:"query"}}),t._v(" through the "),e("Term",{attrs:{term:"CLI"}}),t._v(" do the following:")],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state --qcl strong")])]),t._v(" "),e("p",[t._v("In order to run a "),e("Term",{attrs:{term:"query"}}),t._v(" using the go client do the following:")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[t._v("resp"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" cadenceClient"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("QueryWorkflowWithOptions")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("client"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("QueryWorkflowWithOptionsRequest"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n WorkflowID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" workflowID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n RunID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" runID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n QueryType"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" queryType"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n QueryConsistencyLevel"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" shared"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("QueryConsistencyLevelStrong"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Ptr")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[t._v("When using strongly consistent "),e("Term",{attrs:{term:"query"}}),t._v(" you should expect higher latency than eventually consistent "),e("Term",{attrs:{term:"query"}}),t._v(".")],1)])}),[],!1,null,null,null);e.default=a.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[85],{391:function(t,e,s){"use strict";s.r(e);var r=s(0),a=Object(r.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"queries"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#queries"}},[t._v("#")]),t._v(" Queries")]),t._v(" "),e("p",[t._v("If a "),e("Term",{attrs:{term:"workflow_execution"}}),t._v(" has been stuck at a state 
for longer than an expected period of time, you\nmight want to "),e("Term",{attrs:{term:"query"}}),t._v(" the current call stack. You can use the Cadence "),e("Term",{attrs:{term:"CLI"}}),t._v(" to perform this "),e("Term",{attrs:{term:"query"}}),t._v(". For\nexample:")],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace")])]),t._v(" "),e("p",[t._v("This command uses "),e("code",[t._v("__stack_trace")]),t._v(", which is a built-in "),e("Term",{attrs:{term:"query"}}),t._v(" type supported by the Cadence client\nlibrary. You can add custom "),e("Term",{attrs:{term:"query"}}),t._v(" types to handle "),e("Term",{attrs:{term:"query",show:"queries"}}),t._v(" such as "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" the current state of a\n"),e("Term",{attrs:{term:"workflow"}}),t._v(", or "),e("Term",{attrs:{term:"query",show:"querying"}}),t._v(" how many "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" the "),e("Term",{attrs:{term:"workflow"}}),t._v(" has completed. To do this, you need to set\nup a "),e("Term",{attrs:{term:"query"}}),t._v(" handler using "),e("code",[t._v("workflow.SetQueryHandler")]),t._v(".")],1),t._v(" "),e("p",[t._v("The handler must be a function that returns two values:")]),t._v(" "),e("ol",[e("li",[t._v("A serializable result")]),t._v(" "),e("li",[t._v("An error")])]),t._v(" "),e("p",[t._v("The handler function can receive any number of input parameters, but all input parameters must be\nserializable. The following sample code sets up a "),e("Term",{attrs:{term:"query"}}),t._v(" handler that handles the "),e("Term",{attrs:{term:"query"}}),t._v(" type of\n"),e("code",[t._v("current_state")]),t._v(":")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("MyWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" input "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"started"')]),t._v(" "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// This could be any serializable struct.")]),t._v("\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("SetQueryHandler")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"current_state"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" currentState"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"failed to register query handler"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Your normal workflow code begins here, and you update the currentState as the code makes progress.")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"waiting timer"')]),t._v("\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("NewTimer")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Hour"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"timer failed"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"waiting activity"')]),t._v("\n ctx "),e("span",{pre:!0,attrs:{class:"token 
operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("WithActivityOptions")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" myActivityOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" MyActivity"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"my_input"')]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"activity failed"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n currentState "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token string"}},[t._v('"done"')]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("You can now "),e("Term",{attrs:{term:"query"}}),t._v(" "),e("code",[t._v("current_state")]),t._v(" by using the "),e("Term",{attrs:{term:"CLI",show:""}})],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state")])]),t._v(" "),e("p",[t._v("You can also issue a "),e("Term",{attrs:{term:"query"}}),t._v(" from code using the "),e("code",[t._v("QueryWorkflow()")]),t._v(" API on a Cadence client object.")],1),t._v(" "),e("h2",{attrs:{id:"consistent-query"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#consistent-query"}},[t._v("#")]),t._v(" Consistent Query")]),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" has two consistency levels, eventual and strong. 
Consider if you were to "),e("Term",{attrs:{term:"signal"}}),t._v(" a "),e("Term",{attrs:{term:"workflow"}}),t._v(" and then\nimmediately "),e("Term",{attrs:{term:"query"}}),t._v(" the "),e("Term",{attrs:{term:"workflow",show:""}})],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow signal -w my_workflow_id -r my_run_id -n signal_name -if ./input.json")])]),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state")])]),t._v(" "),e("p",[t._v("In this example if "),e("Term",{attrs:{term:"signal"}}),t._v(" were to change "),e("Term",{attrs:{term:"workflow"}}),t._v(" state, "),e("Term",{attrs:{term:"query"}}),t._v(" may or may not see that state update reflected\nin the "),e("Term",{attrs:{term:"query"}}),t._v(" result. This is what it means for "),e("Term",{attrs:{term:"query"}}),t._v(" to be eventually consistent.")],1),t._v(" "),e("p",[e("Term",{attrs:{term:"query",show:"Query"}}),t._v(" has another consistency level called strong consistency. A strongly consistent "),e("Term",{attrs:{term:"query"}}),t._v(" is guaranteed\nto be based on "),e("Term",{attrs:{term:"workflow"}}),t._v(" state which includes all "),e("Term",{attrs:{term:"event",show:"events"}}),t._v(" that came before the "),e("Term",{attrs:{term:"query"}}),t._v(" was issued. An "),e("Term",{attrs:{term:"event"}}),t._v("\nis considered to have come before a "),e("Term",{attrs:{term:"query"}}),t._v(" if the call creating the external "),e("Term",{attrs:{term:"event"}}),t._v(" returned success before\nthe "),e("Term",{attrs:{term:"query"}}),t._v(" was issued. External "),e("Term",{attrs:{term:"event",show:"events"}}),t._v(" which are created while the "),e("Term",{attrs:{term:"query"}}),t._v(" is outstanding may or may not\nbe reflected in the "),e("Term",{attrs:{term:"workflow"}}),t._v(" state the "),e("Term",{attrs:{term:"query"}}),t._v(" result is based on.")],1),t._v(" "),e("p",[t._v("In order to run consistent "),e("Term",{attrs:{term:"query"}}),t._v(" through the "),e("Term",{attrs:{term:"CLI"}}),t._v(" do the following:")],1),t._v(" "),e("p",[e("code",[t._v("cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state --qcl strong")])]),t._v(" "),e("p",[t._v("In order to run a "),e("Term",{attrs:{term:"query"}}),t._v(" using the go client do the following:")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[t._v("resp"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" cadenceClient"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("QueryWorkflowWithOptions")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("client"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("QueryWorkflowWithOptionsRequest"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n WorkflowID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" workflowID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n RunID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" 
runID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n QueryType"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" queryType"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n QueryConsistencyLevel"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" shared"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("QueryConsistencyLevelStrong"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Ptr")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[t._v("When using strongly consistent "),e("Term",{attrs:{term:"query"}}),t._v(" you should expect higher latency than eventually consistent "),e("Term",{attrs:{term:"query"}}),t._v(".")],1)])}),[],!1,null,null,null);e.default=a.exports}}]); \ No newline at end of file diff --git a/assets/js/89.79bca4f3.js b/assets/js/89.ce44b27c.js similarity index 99% rename from assets/js/89.79bca4f3.js rename to assets/js/89.ce44b27c.js index ef4b53d18..2d41725c2 100644 --- a/assets/js/89.79bca4f3.js +++ b/assets/js/89.ce44b27c.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[89],{396:function(t,e,s){"use strict";s.r(e);var a=s(0),n=Object(a.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"sessions"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#sessions"}},[t._v("#")]),t._v(" Sessions")]),t._v(" "),e("p",[t._v("The session framework provides a straightforward interface for scheduling multiple "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" on a single "),e("Term",{attrs:{term:"worker"}}),t._v(" without requiring you to manually specify the "),e("Term",{attrs:{term:"task_list"}}),t._v(" name. It also includes features like "),e("strong",[t._v("concurrent session limitation")]),t._v(" and "),e("strong",[t._v("worker failure detection")]),t._v(".")],1),t._v(" "),e("h2",{attrs:{id:"use-cases"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#use-cases"}},[t._v("#")]),t._v(" Use Cases")]),t._v(" "),e("ul",[e("li",[e("p",[e("strong",[t._v("File Processing")]),t._v(": You may want to implement a "),e("Term",{attrs:{term:"workflow"}}),t._v(" that can download a file, process it, and then upload the modified version. If these three steps are implemented as three different "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", all of them should be executed by the same "),e("Term",{attrs:{term:"worker"}}),t._v(".")],1)]),t._v(" "),e("li",[e("p",[e("strong",[t._v("Machine Learning Model Training")]),t._v(": Training a machine learning model typically involves three stages: download the data set, optimize the model, and upload the trained parameter. 
Since the models may consume a large amount of resources (GPU memory for example), the number of models processed on a host needs to be limited.")])])]),t._v(" "),e("h2",{attrs:{id:"basic-usage"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#basic-usage"}},[t._v("#")]),t._v(" Basic Usage")]),t._v(" "),e("p",[t._v("Before using the session framework to write your "),e("Term",{attrs:{term:"workflow"}}),t._v(" code, you need to configure your "),e("Term",{attrs:{term:"worker"}}),t._v(" to process sessions. To do that, set the "),e("code",[t._v("EnableSessionWorker")]),t._v(" field of "),e("code",[t._v("worker.Options")]),t._v(" to "),e("code",[t._v("true")]),t._v(" when starting your "),e("Term",{attrs:{term:"worker"}}),t._v(".")],1),t._v(" "),e("p",[t._v("The most important APIs provided by the session framework are "),e("code",[t._v("workflow.CreateSession()")]),t._v(" and "),e("code",[t._v("workflow.CompleteSession()")]),t._v(". The basic idea is that all the "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" executed within a session will be processed by the same "),e("Term",{attrs:{term:"worker"}}),t._v(" and these two APIs allow you to create new sessions and close them after all "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" finish executing.")],1),t._v(" "),e("p",[t._v("Here's a more detailed description of these two APIs:")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("type")]),t._v(" SessionOptions "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("struct")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ExecutionTimeout: required, no default.")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Specifies the maximum amount of time the session can run.")]),t._v("\n ExecutionTimeout time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Duration\n\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// CreationTimeout: required, no default.")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Specifies how long session creation can take before returning an error.")]),t._v("\n CreationTimeout time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Duration\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CreateSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" sessionOptions "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("SessionOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[e("code",[t._v("CreateSession()")]),t._v(" takes in "),e("code",[t._v("workflow.Context")]),t._v(", "),e("code",[t._v("sessionOptions")]),t._v(" and returns a new context which contains metadata information of the 
created session (referred to as the "),e("strong",[t._v("session context")]),t._v(" below). When it's called, it will check the "),e("Term",{attrs:{term:"task_list"}}),t._v(" name specified in the "),e("code",[t._v("ActivityOptions")]),t._v(" (or in the "),e("code",[t._v("StartWorkflowOptions")]),t._v(" if the "),e("Term",{attrs:{term:"task_list"}}),t._v(" name is not specified in "),e("code",[t._v("ActivityOptions")]),t._v("), and create the session on one of the "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" which is polling that "),e("Term",{attrs:{term:"task_list"}}),t._v(".")],1),t._v(" "),e("p",[t._v("The returned session context should be used to execute all "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" belonging to the session. The context will be cancelled if the "),e("Term",{attrs:{term:"worker"}}),t._v(" executing this session dies or "),e("code",[t._v("CompleteSession()")]),t._v(" is called. When using the returned session context to execute "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", a "),e("code",[t._v("workflow.ErrSessionFailed")]),t._v(" error may be returned if the session framework detects that the "),e("Term",{attrs:{term:"worker"}}),t._v(" executing this session has died. The failure of your "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" won't affect the state of the session, so you still need to handle the errors returned from your "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" and call "),e("code",[t._v("CompleteSession()")]),t._v(" if necessary.")],1),t._v(" "),e("p",[e("code",[t._v("CreateSession()")]),t._v(" will return an error if the context passed in already contains an open session. If all the "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" are currently busy and unable to handle new sessions, the framework will keep retrying until the "),e("code",[t._v("CreationTimeout")]),t._v(" you specified in "),e("code",[t._v("SessionOptions")]),t._v(" has passed before returning an error (check the "),e("strong",[t._v("Concurrent Session Limitation")]),t._v(" section for more details).")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CompleteSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[e("code",[t._v("CompleteSession()")]),t._v(" releases the resources reserved on the "),e("Term",{attrs:{term:"worker"}}),t._v(", so it's important to call it as soon as you no longer need the session. It will cancel the session context and therefore all the "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" using that session context. 
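// A minimal sketch of the worker setup described in Basic Usage above:
// sessions require EnableSessionWorker in worker.Options. The domain and
// task list names are illustrative, and construction of the Cadence service
// client is elided.
package main

import (
	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"
)

func newSessionWorker(service workflowserviceclient.Interface) worker.Worker {
	return worker.New(service, "samples-domain", "my-task-list", worker.Options{
		EnableSessionWorker: true, // required before CreateSession() can be used
	})
}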
Note that it's safe to call "),e("code",[t._v("CompleteSession()")]),t._v(" on a failed session, meaning that you can call it from a "),e("code",[t._v("defer")]),t._v(" function after the session is successfully created.")],1),t._v(" "),e("h3",{attrs:{id:"sample-code"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#sample-code"}},[t._v("#")]),t._v(" Sample Code")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("FileProcessingWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" fileID "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("err "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n ao "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("ActivityOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n ScheduleToStartTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Second "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("5")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n StartToCloseTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n ctx "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("WithActivityOptions")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" ao"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n so "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("SessionOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n CreationTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n ExecutionTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),e("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CreateSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" so"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("defer")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CompleteSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" fInfo "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("fileInfo\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" downloadFileActivityName"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" fileID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("fInfo"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" fInfoProcessed "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("fileInfo\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),e("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" processFileActivityName"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("fInfo"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("fInfoProcessed"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" uploadFileActivityName"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("fInfoProcessed"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("h2",{attrs:{id:"session-metadata"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#session-metadata"}},[t._v("#")]),t._v(" Session Metadata")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("type")]),t._v(" SessionInfo "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("struct")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// A unique ID for the session")]),t._v("\n SessionID "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// The hostname of the worker that is executing the session")]),t._v("\n HostName "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ... 
other unexported fields")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetSessionInfo")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("SessionInfo\n")])])]),e("p",[t._v("The session context also stores some session metadata, which can be retrieved by the "),e("code",[t._v("GetSessionInfo()")]),t._v(" API. If the context passed in doesn't contain any session metadata, this API will return a "),e("code",[t._v("nil")]),t._v(" pointer.")]),t._v(" "),e("h2",{attrs:{id:"concurrent-session-limitation"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#concurrent-session-limitation"}},[t._v("#")]),t._v(" Concurrent Session Limitation")]),t._v(" "),e("p",[t._v("To limit the number of concurrent sessions running on a "),e("Term",{attrs:{term:"worker"}}),t._v(", set the "),e("code",[t._v("MaxConcurrentSessionExecutionSize")]),t._v(" field of "),e("code",[t._v("worker.Options")]),t._v(" to the desired value. By default this field is set to a very large value, so there's no need to manually set it if no limitation is needed.")],1),t._v(" "),e("p",[t._v("If a "),e("Term",{attrs:{term:"worker"}}),t._v(" hits this limitation, it won't accept any new "),e("code",[t._v("CreateSession()")]),t._v(" requests until one of the existing sessions is completed. "),e("code",[t._v("CreateSession()")]),t._v(" will return an error if the session can't be created within "),e("code",[t._v("CreationTimeout")]),t._v(".")],1),t._v(" "),e("h2",{attrs:{id:"recreate-session"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#recreate-session"}},[t._v("#")]),t._v(" Recreate Session")]),t._v(" "),e("p",[t._v("For long-running sessions, you may want to use the "),e("code",[t._v("ContinueAsNew")]),t._v(" feature to split the "),e("Term",{attrs:{term:"workflow"}}),t._v(" into multiple runs when all "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" need to be executed by the same "),e("Term",{attrs:{term:"worker"}}),t._v(". 
The "),e("code",[t._v("RecreateSession()")]),t._v(" API is designed for such a use case.")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("RecreateSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" recreateToken "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("byte")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" sessionOptions "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("SessionOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[t._v("Its usage is the same as "),e("code",[t._v("CreateSession()")]),t._v(" except that it also takes in a "),e("code",[t._v("recreateToken")]),t._v(", which is needed to create a new session on the same "),e("Term",{attrs:{term:"worker"}}),t._v(" as the previous one. You can get the token by calling the "),e("code",[t._v("GetRecreateToken()")]),t._v(" method of the "),e("code",[t._v("SessionInfo")]),t._v(" object.")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[t._v("token "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetSessionInfo")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetRecreateToken")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("h2",{attrs:{id:"q-a"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#q-a"}},[t._v("#")]),t._v(" Q & A")]),t._v(" "),e("h3",{attrs:{id:"is-there-a-complete-example"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#is-there-a-complete-example"}},[t._v("#")]),t._v(" Is there a complete example?")]),t._v(" "),e("p",[t._v("Yes, the "),e("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/blob/master/cmd/samples/fileprocessing/workflow.go",target:"_blank",rel:"noopener noreferrer"}},[t._v("file processing example"),e("OutboundLink")],1),t._v(" in the cadence-sample repo has been updated to use the session framework.")]),t._v(" "),e("h3",{attrs:{id:"what-happens-to-my-activity-if-the-worker-dies"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#what-happens-to-my-activity-if-the-worker-dies"}},[t._v("#")]),t._v(" What happens to my activity if the worker dies?")]),t._v(" "),e("p",[t._v("If your "),e("Term",{attrs:{term:"activity"}}),t._v(" has already been scheduled, it will be cancelled. 
If not, you will get a "),e("code",[t._v("workflow.ErrSessionFailed")]),t._v(" error when you call "),e("code",[t._v("workflow.ExecuteActivity()")]),t._v(".")],1),t._v(" "),e("h3",{attrs:{id:"is-the-concurrent-session-limitation-per-process-or-per-host"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#is-the-concurrent-session-limitation-per-process-or-per-host"}},[t._v("#")]),t._v(" Is the concurrent session limitation per process or per host?")]),t._v(" "),e("p",[t._v("It's per "),e("Term",{attrs:{term:"worker"}}),t._v(" process, so make sure there's only one "),e("Term",{attrs:{term:"worker"}}),t._v(" process running on the host if you plan to use that feature.")],1),t._v(" "),e("h2",{attrs:{id:"future-work"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#future-work"}},[t._v("#")]),t._v(" Future Work")]),t._v(" "),e("ul",[e("li",[e("p",[e("strong",[e("a",{attrs:{href:"https://github.com/uber-go/cadence-client/issues/775",target:"_blank",rel:"noopener noreferrer"}},[t._v("Support automatic session re-establishing"),e("OutboundLink")],1)]),t._v("\nRight now a session is considered failed if the "),e("Term",{attrs:{term:"worker"}}),t._v(" process dies. However, for some use cases, you may only care whether "),e("Term",{attrs:{term:"worker"}}),t._v(" host is alive or not. For these uses cases, the session should be automatically re-established if the "),e("Term",{attrs:{term:"worker"}}),t._v(" process is restarted.")],1)]),t._v(" "),e("li",[e("p",[e("strong",[e("a",{attrs:{href:"https://github.com/uber-go/cadence-client/issues/776",target:"_blank",rel:"noopener noreferrer"}},[t._v("Support fine-grained concurrent session limitation"),e("OutboundLink")],1)]),t._v("\nThe current implementation assumes that all sessions are consuming the same type of resource and there's only one global limitation. Our plan is to allow you to specify what type of resource your session will consume and enforce different limitations on different types of resources.")])])])])}),[],!1,null,null,null);e.default=n.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[89],{395:function(t,e,s){"use strict";s.r(e);var a=s(0),n=Object(a.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"sessions"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#sessions"}},[t._v("#")]),t._v(" Sessions")]),t._v(" "),e("p",[t._v("The session framework provides a straightforward interface for scheduling multiple "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" on a single "),e("Term",{attrs:{term:"worker"}}),t._v(" without requiring you to manually specify the "),e("Term",{attrs:{term:"task_list"}}),t._v(" name. It also includes features like "),e("strong",[t._v("concurrent session limitation")]),t._v(" and "),e("strong",[t._v("worker failure detection")]),t._v(".")],1),t._v(" "),e("h2",{attrs:{id:"use-cases"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#use-cases"}},[t._v("#")]),t._v(" Use Cases")]),t._v(" "),e("ul",[e("li",[e("p",[e("strong",[t._v("File Processing")]),t._v(": You may want to implement a "),e("Term",{attrs:{term:"workflow"}}),t._v(" that can download a file, process it, and then upload the modified version. 
If these three steps are implemented as three different "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", all of them should be executed by the same "),e("Term",{attrs:{term:"worker"}}),t._v(".")],1)]),t._v(" "),e("li",[e("p",[e("strong",[t._v("Machine Learning Model Training")]),t._v(": Training a machine learning model typically involves three stages: download the data set, optimize the model, and upload the trained parameters. Since the models may consume a large amount of resources (GPU memory for example), the number of models processed on a host needs to be limited.")])])]),t._v(" "),e("h2",{attrs:{id:"basic-usage"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#basic-usage"}},[t._v("#")]),t._v(" Basic Usage")]),t._v(" "),e("p",[t._v("Before using the session framework to write your "),e("Term",{attrs:{term:"workflow"}}),t._v(" code, you need to configure your "),e("Term",{attrs:{term:"worker"}}),t._v(" to process sessions. To do that, set the "),e("code",[t._v("EnableSessionWorker")]),t._v(" field of "),e("code",[t._v("worker.Options")]),t._v(" to "),e("code",[t._v("true")]),t._v(" when starting your "),e("Term",{attrs:{term:"worker"}}),t._v(" (see the setup sketch below).")],1),t._v(" "),e("p",[t._v("The most important APIs provided by the session framework are "),e("code",[t._v("workflow.CreateSession()")]),t._v(" and "),e("code",[t._v("workflow.CompleteSession()")]),t._v(". The basic idea is that all the "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" executed within a session will be processed by the same "),e("Term",{attrs:{term:"worker"}}),t._v(" and these two APIs allow you to create new sessions and close them after all "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" finish executing.")],1),t._v(" "),e("p",[t._v("Here's a more detailed description of these two APIs:")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("type")]),t._v(" SessionOptions "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("struct")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ExecutionTimeout: required, no default.")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Specifies the maximum amount of time the session can run.")]),t._v("\n ExecutionTimeout time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Duration\n\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// CreationTimeout: required, no default.")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Specifies how long session creation can take before returning an error.")]),t._v("\n CreationTimeout time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Duration\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CreateSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" sessionOptions "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("SessionOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("(")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[e("code",[t._v("CreateSession()")]),t._v(" takes in "),e("code",[t._v("workflow.Context")]),t._v(", "),e("code",[t._v("sessionOptions")]),t._v(" and returns a new context which contains metadata information of the created session (referred to as the "),e("strong",[t._v("session context")]),t._v(" below). When it's called, it will check the "),e("Term",{attrs:{term:"task_list"}}),t._v(" name specified in the "),e("code",[t._v("ActivityOptions")]),t._v(" (or in the "),e("code",[t._v("StartWorkflowOptions")]),t._v(" if the "),e("Term",{attrs:{term:"task_list"}}),t._v(" name is not specified in "),e("code",[t._v("ActivityOptions")]),t._v("), and create the session on one of the "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" which is polling that "),e("Term",{attrs:{term:"task_list"}}),t._v(".")],1),t._v(" "),e("p",[t._v("The returned session context should be used to execute all "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" belonging to the session. The context will be cancelled if the "),e("Term",{attrs:{term:"worker"}}),t._v(" executing this session dies or "),e("code",[t._v("CompleteSession()")]),t._v(" is called. When using the returned session context to execute "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(", a "),e("code",[t._v("workflow.ErrSessionFailed")]),t._v(" error may be returned if the session framework detects that the "),e("Term",{attrs:{term:"worker"}}),t._v(" executing this session has died. The failure of your "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" won't affect the state of the session, so you still need to handle the errors returned from your "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" and call "),e("code",[t._v("CompleteSession()")]),t._v(" if necessary.")],1),t._v(" "),e("p",[e("code",[t._v("CreateSession()")]),t._v(" will return an error if the context passed in already contains an open session. If all the "),e("Term",{attrs:{term:"worker",show:"workers"}}),t._v(" are currently busy and unable to handle new sessions, the framework will keep retrying until the "),e("code",[t._v("CreationTimeout")]),t._v(" you specified in "),e("code",[t._v("SessionOptions")]),t._v(" has passed before returning an error (check the "),e("strong",[t._v("Concurrent Session Limitation")]),t._v(" section for more details).")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CompleteSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[e("code",[t._v("CompleteSession()")]),t._v(" releases the resources reserved on the "),e("Term",{attrs:{term:"worker"}}),t._v(", so it's important to call it as soon as you no longer need the session. It will cancel the session context and therefore all the "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" using that session context. 
Note that it's safe to call "),e("code",[t._v("CompleteSession()")]),t._v(" on a failed session, meaning that you can call it from a "),e("code",[t._v("defer")]),t._v(" function after the session is successfully created.")],1),t._v(" "),e("h3",{attrs:{id:"sample-code"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#sample-code"}},[t._v("#")]),t._v(" Sample Code")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("FileProcessingWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" fileID "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("err "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n ao "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("ActivityOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n ScheduleToStartTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Second "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token number"}},[t._v("5")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n StartToCloseTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n ctx "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("WithActivityOptions")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" ao"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n so "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("SessionOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n CreationTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v("\n ExecutionTimeout"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Minute"),e("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v(",")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CreateSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" so"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("defer")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CompleteSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" fInfo "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("fileInfo\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" downloadFileActivityName"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" fileID"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("fInfo"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" fInfoProcessed "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("fileInfo\n err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),e("span",{pre:!0,attrs:{class:"token 
punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" processFileActivityName"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("fInfo"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("fInfoProcessed"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("!=")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" err\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("ExecuteActivity")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" uploadFileActivityName"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("fInfoProcessed"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Get")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("h2",{attrs:{id:"session-metadata"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#session-metadata"}},[t._v("#")]),t._v(" Session Metadata")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("type")]),t._v(" SessionInfo "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("struct")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// A unique ID for the session")]),t._v("\n SessionID "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// The hostname of the worker that is executing the session")]),t._v("\n HostName "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ... 
other unexported fields")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n\n"),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetSessionInfo")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("SessionInfo\n")])])]),e("p",[t._v("The session context also stores some session metadata, which can be retrieved by the "),e("code",[t._v("GetSessionInfo()")]),t._v(" API. If the context passed in doesn't contain any session metadata, this API will return a "),e("code",[t._v("nil")]),t._v(" pointer.")]),t._v(" "),e("h2",{attrs:{id:"concurrent-session-limitation"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#concurrent-session-limitation"}},[t._v("#")]),t._v(" Concurrent Session Limitation")]),t._v(" "),e("p",[t._v("To limit the number of concurrent sessions running on a "),e("Term",{attrs:{term:"worker"}}),t._v(", set the "),e("code",[t._v("MaxConcurrentSessionExecutionSize")]),t._v(" field of "),e("code",[t._v("worker.Options")]),t._v(" to the desired value. By default this field is set to a very large value, so there's no need to manually set it if no limitation is needed.")],1),t._v(" "),e("p",[t._v("If a "),e("Term",{attrs:{term:"worker"}}),t._v(" hits this limitation, it won't accept any new "),e("code",[t._v("CreateSession()")]),t._v(" requests until one of the existing sessions is completed. "),e("code",[t._v("CreateSession()")]),t._v(" will return an error if the session can't be created within "),e("code",[t._v("CreationTimeout")]),t._v(".")],1),t._v(" "),e("h2",{attrs:{id:"recreate-session"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#recreate-session"}},[t._v("#")]),t._v(" Recreate Session")]),t._v(" "),e("p",[t._v("For long-running sessions, you may want to use the "),e("code",[t._v("ContinueAsNew")]),t._v(" feature to split the "),e("Term",{attrs:{term:"workflow"}}),t._v(" into multiple runs when all "),e("Term",{attrs:{term:"activity",show:"activities"}}),t._v(" need to be executed by the same "),e("Term",{attrs:{term:"worker"}}),t._v(". 
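To complement the Session Metadata section above, a small sketch of reading that metadata from workflow code, assuming a sessionCtx previously returned by CreateSession(); the helper name and log fields are illustrative.

```go
package main

import (
	"go.uber.org/cadence/workflow"
	"go.uber.org/zap"
)

// logSession is a hypothetical helper. GetSessionInfo returns nil when the
// context carries no session metadata, so guard before dereferencing.
func logSession(sessionCtx workflow.Context) {
	info := workflow.GetSessionInfo(sessionCtx)
	if info == nil {
		return // not a session context
	}
	workflow.GetLogger(sessionCtx).Info("session running",
		zap.String("sessionID", info.SessionID),
		zap.String("hostName", info.HostName))
}
```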
The "),e("code",[t._v("RecreateSession()")]),t._v(" API is designed for such a use case.")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("RecreateSession")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" recreateToken "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("[")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("]")]),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("byte")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" sessionOptions "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("*")]),t._v("SessionOptions"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("p",[t._v("Its usage is the same as "),e("code",[t._v("CreateSession()")]),t._v(" except that it also takes in a "),e("code",[t._v("recreateToken")]),t._v(", which is needed to create a new session on the same "),e("Term",{attrs:{term:"worker"}}),t._v(" as the previous one. You can get the token by calling the "),e("code",[t._v("GetRecreateToken()")]),t._v(" method of the "),e("code",[t._v("SessionInfo")]),t._v(" object.")],1),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[t._v("token "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetSessionInfo")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("sessionCtx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetRecreateToken")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n")])])]),e("h2",{attrs:{id:"q-a"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#q-a"}},[t._v("#")]),t._v(" Q & A")]),t._v(" "),e("h3",{attrs:{id:"is-there-a-complete-example"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#is-there-a-complete-example"}},[t._v("#")]),t._v(" Is there a complete example?")]),t._v(" "),e("p",[t._v("Yes, the "),e("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/blob/master/cmd/samples/fileprocessing/workflow.go",target:"_blank",rel:"noopener noreferrer"}},[t._v("file processing example"),e("OutboundLink")],1),t._v(" in the cadence-sample repo has been updated to use the session framework.")]),t._v(" "),e("h3",{attrs:{id:"what-happens-to-my-activity-if-the-worker-dies"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#what-happens-to-my-activity-if-the-worker-dies"}},[t._v("#")]),t._v(" What happens to my activity if the worker dies?")]),t._v(" "),e("p",[t._v("If your "),e("Term",{attrs:{term:"activity"}}),t._v(" has already been scheduled, it will be cancelled. 
If not, you will get a "),e("code",[t._v("workflow.ErrSessionFailed")]),t._v(" error when you call "),e("code",[t._v("workflow.ExecuteActivity()")]),t._v(".")],1),t._v(" "),e("h3",{attrs:{id:"is-the-concurrent-session-limitation-per-process-or-per-host"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#is-the-concurrent-session-limitation-per-process-or-per-host"}},[t._v("#")]),t._v(" Is the concurrent session limitation per process or per host?")]),t._v(" "),e("p",[t._v("It's per "),e("Term",{attrs:{term:"worker"}}),t._v(" process, so make sure there's only one "),e("Term",{attrs:{term:"worker"}}),t._v(" process running on the host if you plan to use that feature.")],1),t._v(" "),e("h2",{attrs:{id:"future-work"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#future-work"}},[t._v("#")]),t._v(" Future Work")]),t._v(" "),e("ul",[e("li",[e("p",[e("strong",[e("a",{attrs:{href:"https://github.com/uber-go/cadence-client/issues/775",target:"_blank",rel:"noopener noreferrer"}},[t._v("Support automatic session re-establishing"),e("OutboundLink")],1)]),t._v("\nRight now a session is considered failed if the "),e("Term",{attrs:{term:"worker"}}),t._v(" process dies. However, for some use cases, you may only care whether the "),e("Term",{attrs:{term:"worker"}}),t._v(" host is alive or not. For these use cases, the session should be automatically re-established if the "),e("Term",{attrs:{term:"worker"}}),t._v(" process is restarted.")],1)]),t._v(" "),e("li",[e("p",[e("strong",[e("a",{attrs:{href:"https://github.com/uber-go/cadence-client/issues/776",target:"_blank",rel:"noopener noreferrer"}},[t._v("Support fine-grained concurrent session limitation"),e("OutboundLink")],1)]),t._v("\nThe current implementation assumes that all sessions are consuming the same type of resource and there's only one global limitation. Our plan is to allow you to specify what type of resource your session will consume and enforce different limitations on different types of resources.")])])])])}),[],!1,null,null,null);e.default=n.exports}}]); \ No newline at end of file diff --git a/assets/js/90.3aad2522.js b/assets/js/90.f1c2c5e5.js similarity index 99% rename from assets/js/90.3aad2522.js rename to assets/js/90.f1c2c5e5.js index 5689ff064..6686c6668 100644 --- a/assets/js/90.3aad2522.js +++ b/assets/js/90.f1c2c5e5.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[90],{395:function(t,e,s){"use strict";s.r(e);var n=s(0),r=Object(n.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"distributed-cron"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#distributed-cron"}},[t._v("#")]),t._v(" Distributed CRON")]),t._v(" "),e("p",[t._v("It is relatively straightforward to turn any Cadence "),e("Term",{attrs:{term:"workflow"}}),t._v(" into a Cron "),e("Term",{attrs:{term:"workflow"}}),t._v(". 
All you need\nis to supply a cron schedule when starting the "),e("Term",{attrs:{term:"workflow"}}),t._v(" using the CronSchedule\nparameter of\n"),e("a",{attrs:{href:"https://godoc.org/go.uber.org/cadence/internal#StartWorkflowOptions",target:"_blank",rel:"noopener noreferrer"}},[t._v("StartWorkflowOptions"),e("OutboundLink")],1),t._v(".")],1),t._v(" "),e("p",[t._v("You can also start a "),e("Term",{attrs:{term:"workflow"}}),t._v(" using the Cadence "),e("Term",{attrs:{term:"CLI"}}),t._v(" with an optional cron schedule using the "),e("code",[t._v("--cron")]),t._v(" argument.")],1),t._v(" "),e("p",[t._v("For "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" with CronSchedule:")],1),t._v(" "),e("ul",[e("li",[t._v('Cron schedule is based on UTC time. For example cron schedule "15 8 * * *"\nwill run daily at 8:15am UTC. Another example "*/2 * * * 5-6" will schedule a workflow every two minutes on fridays\nand saturdays.')]),t._v(" "),e("li",[t._v("If a "),e("Term",{attrs:{term:"workflow"}}),t._v(" failed and a RetryPolicy is supplied to the StartWorkflowOptions\nas well, the "),e("Term",{attrs:{term:"workflow"}}),t._v(" will retry based on the RetryPolicy. While the "),e("Term",{attrs:{term:"workflow"}}),t._v(" is\nretrying, the server will not schedule the next cron run.")],1),t._v(" "),e("li",[t._v("Cadence server only schedules the next cron run after the current run is\ncompleted. If the next schedule is due while a "),e("Term",{attrs:{term:"workflow"}}),t._v(" is running (or retrying),\nthen it will skip that schedule.")],1),t._v(" "),e("li",[t._v("Cron "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" will not stop until they are terminated or cancelled.")],1)]),t._v(" "),e("p",[t._v("Cadence supports the standard cron spec:")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// as a cron based on the schedule. The scheduling will be based on UTC time. The schedule for next run only happen")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// after the current run is completed/failed/timeout. If a RetryPolicy is also supplied, and the workflow failed")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// or timed out, the workflow will be retried based on the retry policy. While the workflow is retrying, it won't")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// schedule its next run. If next schedule is due while the workflow is running (or retrying), then it will skip that")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// schedule. 
Cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// The cron spec is as following:")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ┌───────────── minute (0 - 59)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ ┌───────────── hour (0 - 23)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ ┌───────────── day of the month (1 - 31)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ ┌───────────── month (1 - 12)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ │ │")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ │ │")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// * * * * *")]),t._v("\nCronSchedule "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n")])])]),e("p",[t._v("Cadence also supports more "),e("a",{attrs:{href:"https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format",target:"_blank",rel:"noopener noreferrer"}},[t._v("advanced cron expressions"),e("OutboundLink")],1),t._v(".")]),t._v(" "),e("p",[t._v("The "),e("a",{attrs:{href:"https://crontab.guru/",target:"_blank",rel:"noopener noreferrer"}},[t._v("crontab guru site"),e("OutboundLink")],1),t._v(" is useful for testing your cron expressions.")]),t._v(" "),e("h2",{attrs:{id:"convert-existing-cron-workflow"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#convert-existing-cron-workflow"}},[t._v("#")]),t._v(" Convert existing cron workflow")]),t._v(" "),e("p",[t._v("Before CronSchedule was available, the previous approach to implementing cron\n"),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" was to use a delay timer as the last step and then return\n"),e("code",[t._v("ContinueAsNew")]),t._v(". One problem with that implementation is that if the "),e("Term",{attrs:{term:"workflow"}}),t._v("\nfails or times out, the cron would stop.")],1),t._v(" "),e("p",[t._v("To convert those "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" to make use of Cadence CronSchedule, all you need is to\nremove the delay timer and return without using\n"),e("code",[t._v("ContinueAsNew")]),t._v(". Then start the "),e("Term",{attrs:{term:"workflow"}}),t._v(" with the desired CronSchedule.")],1),t._v(" "),e("h2",{attrs:{id:"retrieve-last-successful-result"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#retrieve-last-successful-result"}},[t._v("#")]),t._v(" Retrieve last successful result")]),t._v(" "),e("p",[t._v("Sometimes it is useful to obtain the progress of previous successful runs.\nThis is supported by two new APIs in the client library:\n"),e("code",[t._v("HasLastCompletionResult")]),t._v(" and "),e("code",[t._v("GetLastCompletionResult")]),t._v(". 
Below is an example of how\nto use this in Go:")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CronWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("CronResult"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n startTimestamp "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// By default start from 0 time.")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("HasLastCompletionResult")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" progress CronResult\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetLastCompletionResult")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("progress"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("==")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n startTimestamp "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" progress"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("LastSyncTimestamp\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n endTimestamp "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Now")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n 
"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Process work between startTimestamp (exclusive), endTimestamp (inclusive).")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Business logic implementation goes here.")]),t._v("\n\n result "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" CronResult"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("LastSyncTimestamp"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" endTimestamp"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" result"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("Note that this works even if one of the cron schedule runs failed. The\nnext schedule will still get the last successful result if it ever successfully\ncompleted at least once. For example, for a daily cron "),e("Term",{attrs:{term:"workflow"}}),t._v(", if the first day\nrun succeeds and the second day fails, then the third day run will still get\nthe result from first day's run using these APIs.")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[90],{396:function(t,e,s){"use strict";s.r(e);var n=s(0),r=Object(n.a)({},(function(){var t=this,e=t._self._c;return e("ContentSlotsDistributor",{attrs:{"slot-key":t.$parent.slotKey}},[e("h1",{attrs:{id:"distributed-cron"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#distributed-cron"}},[t._v("#")]),t._v(" Distributed CRON")]),t._v(" "),e("p",[t._v("It is relatively straightforward to turn any Cadence "),e("Term",{attrs:{term:"workflow"}}),t._v(" into a Cron "),e("Term",{attrs:{term:"workflow"}}),t._v(". All you need\nis to supply a cron schedule when starting the "),e("Term",{attrs:{term:"workflow"}}),t._v(" using the CronSchedule\nparameter of\n"),e("a",{attrs:{href:"https://godoc.org/go.uber.org/cadence/internal#StartWorkflowOptions",target:"_blank",rel:"noopener noreferrer"}},[t._v("StartWorkflowOptions"),e("OutboundLink")],1),t._v(".")],1),t._v(" "),e("p",[t._v("You can also start a "),e("Term",{attrs:{term:"workflow"}}),t._v(" using the Cadence "),e("Term",{attrs:{term:"CLI"}}),t._v(" with an optional cron schedule using the "),e("code",[t._v("--cron")]),t._v(" argument.")],1),t._v(" "),e("p",[t._v("For "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" with CronSchedule:")],1),t._v(" "),e("ul",[e("li",[t._v('Cron schedule is based on UTC time. For example cron schedule "15 8 * * *"\nwill run daily at 8:15am UTC. Another example "*/2 * * * 5-6" will schedule a workflow every two minutes on fridays\nand saturdays.')]),t._v(" "),e("li",[t._v("If a "),e("Term",{attrs:{term:"workflow"}}),t._v(" failed and a RetryPolicy is supplied to the StartWorkflowOptions\nas well, the "),e("Term",{attrs:{term:"workflow"}}),t._v(" will retry based on the RetryPolicy. While the "),e("Term",{attrs:{term:"workflow"}}),t._v(" is\nretrying, the server will not schedule the next cron run.")],1),t._v(" "),e("li",[t._v("Cadence server only schedules the next cron run after the current run is\ncompleted. 
If the next schedule is due while a "),e("Term",{attrs:{term:"workflow"}}),t._v(" is running (or retrying),\nthen it will skip that schedule.")],1),t._v(" "),e("li",[t._v("Cron "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" will not stop until they are terminated or cancelled.")],1)]),t._v(" "),e("p",[t._v("Cadence supports the standard cron spec:")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// as a cron based on the schedule. The scheduling will be based on UTC time. The schedule for next run only happen")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// after the current run is completed/failed/timeout. If a RetryPolicy is also supplied, and the workflow failed")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// or timed out, the workflow will be retried based on the retry policy. While the workflow is retrying, it won't")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// schedule its next run. If next schedule is due while the workflow is running (or retrying), then it will skip that")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// schedule. Cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// The cron spec is as following:")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// ┌───────────── minute (0 - 59)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ ┌───────────── hour (0 - 23)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ ┌───────────── day of the month (1 - 31)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ ┌───────────── month (1 - 12)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ │ │")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// │ │ │ │ │")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// * * * * *")]),t._v("\nCronSchedule "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("string")]),t._v("\n")])])]),e("p",[t._v("Cadence also supports more "),e("a",{attrs:{href:"https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format",target:"_blank",rel:"noopener noreferrer"}},[t._v("advanced cron expressions"),e("OutboundLink")],1),t._v(".")]),t._v(" "),e("p",[t._v("The "),e("a",{attrs:{href:"https://crontab.guru/",target:"_blank",rel:"noopener noreferrer"}},[t._v("crontab guru site"),e("OutboundLink")],1),t._v(" is useful for testing your cron expressions.")]),t._v(" "),e("h2",{attrs:{id:"convert-existing-cron-workflow"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#convert-existing-cron-workflow"}},[t._v("#")]),t._v(" Convert existing cron workflow")]),t._v(" "),e("p",[t._v("Before CronSchedule was available, the previous approach to implementing cron\n"),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" was to use a delay timer as the last step and then return\n"),e("code",[t._v("ContinueAsNew")]),t._v(". 
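As referenced above, a short sketch of supplying CronSchedule through the Go client's StartWorkflowOptions; the workflow ID, task list, timeout, and registered workflow name are illustrative assumptions.

```go
package main

import (
	"context"
	"time"

	"go.uber.org/cadence/client"
)

// startDailyCron is a hypothetical helper: it starts a workflow registered as
// "SampleCronWorkflow" on a cron schedule of 8:15am UTC every day, using an
// already-constructed client.Client.
func startDailyCron(ctx context.Context, cadenceClient client.Client) error {
	wo := client.StartWorkflowOptions{
		ID:                           "sample-daily-cron",
		TaskList:                     "cron-task-list",
		ExecutionStartToCloseTimeout: time.Hour,
		CronSchedule:                 "15 8 * * *", // UTC, per the rules above
	}
	_, err := cadenceClient.StartWorkflow(ctx, wo, "SampleCronWorkflow")
	return err
}
```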
One problem with that implementation is that if the "),e("Term",{attrs:{term:"workflow"}}),t._v("\nfails or times out, the cron would stop.")],1),t._v(" "),e("p",[t._v("To convert those "),e("Term",{attrs:{term:"workflow",show:"workflows"}}),t._v(" to make use of Cadence CronSchedule, all you need is to\nremove the delay timer and return without using\n"),e("code",[t._v("ContinueAsNew")]),t._v(". Then start the "),e("Term",{attrs:{term:"workflow"}}),t._v(" with the desired CronSchedule.")],1),t._v(" "),e("h2",{attrs:{id:"retrieve-last-successful-result"}},[e("a",{staticClass:"header-anchor",attrs:{href:"#retrieve-last-successful-result"}},[t._v("#")]),t._v(" Retrieve last successful result")]),t._v(" "),e("p",[t._v("Sometimes it is useful to obtain the progress of previous successful runs.\nThis is supported by two new APIs in the client library:\n"),e("code",[t._v("HasLastCompletionResult")]),t._v(" and "),e("code",[t._v("GetLastCompletionResult")]),t._v(". Below is an example of how\nto use this in Go:")]),t._v(" "),e("div",{staticClass:"language-go extra-class"},[e("pre",{pre:!0,attrs:{class:"language-go"}},[e("code",[e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("func")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("CronWorkflow")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Context"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("CronResult"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token builtin"}},[t._v("error")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n startTimestamp "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("Time"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// By default start from 0 time.")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("HasLastCompletionResult")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("var")]),t._v(" progress CronResult\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("if")]),t._v(" err "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("GetLastCompletionResult")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("&")]),t._v("progress"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(";")]),t._v(" err 
"),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("==")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("\n startTimestamp "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v("=")]),t._v(" progress"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),t._v("LastSyncTimestamp\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n endTimestamp "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" workflow"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(".")]),e("span",{pre:!0,attrs:{class:"token function"}},[t._v("Now")]),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("(")]),t._v("ctx"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(")")]),t._v("\n\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Process work between startTimestamp (exclusive), endTimestamp (inclusive).")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token comment"}},[t._v("// Business logic implementation goes here.")]),t._v("\n\n result "),e("span",{pre:!0,attrs:{class:"token operator"}},[t._v(":=")]),t._v(" CronResult"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("{")]),t._v("LastSyncTimestamp"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(":")]),t._v(" endTimestamp"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n "),e("span",{pre:!0,attrs:{class:"token keyword"}},[t._v("return")]),t._v(" result"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v(",")]),t._v(" "),e("span",{pre:!0,attrs:{class:"token boolean"}},[t._v("nil")]),t._v("\n"),e("span",{pre:!0,attrs:{class:"token punctuation"}},[t._v("}")]),t._v("\n")])])]),e("p",[t._v("Note that this works even if one of the cron schedule runs failed. The\nnext schedule will still get the last successful result if it ever successfully\ncompleted at least once. For example, for a daily cron "),e("Term",{attrs:{term:"workflow"}}),t._v(", if the first day\nrun succeeds and the second day fails, then the third day run will still get\nthe result from first day's run using these APIs.")],1)])}),[],!1,null,null,null);e.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/93.0703a307.js b/assets/js/93.5fbc3c74.js similarity index 97% rename from assets/js/93.0703a307.js rename to assets/js/93.5fbc3c74.js index 4c24843c9..3a4fb2390 100644 --- a/assets/js/93.0703a307.js +++ b/assets/js/93.5fbc3c74.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[93],{401:function(e,t,o){"use strict";o.r(t);var r=o(0),n=Object(r.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"go-client"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#go-client"}},[e._v("#")]),e._v(" Go client")]),e._v(" "),t("h2",{attrs:{id:"overview"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#overview"}},[e._v("#")]),e._v(" Overview")]),e._v(" "),t("p",[e._v("Go client attempts to follow Go language conventions. The conversion of a Go program to the fault-oblivious "),t("Term",{attrs:{term:"workflow"}}),e._v(" function is expected to be pretty mechanical.")],1),e._v(" "),t("p",[e._v("Cadence requires determinism of the "),t("Term",{attrs:{term:"workflow"}}),e._v(" code. 
It supports deterministic execution of the multithreaded code and constructs like "),t("code",[e._v("select")]),e._v(" that are non-deterministic by Go design. The Cadence solution is to provide corresponding constructs in the form of interfaces that have similar capability but support deterministic execution.")],1),e._v(" "),t("p",[e._v("For example, instead of native Go channels, "),t("Term",{attrs:{term:"workflow"}}),e._v(" code must use the "),t("code",[e._v("workflow.Channel")]),e._v(" interface. Instead of "),t("code",[e._v("select")]),e._v(", the "),t("code",[e._v("workflow.Selector")]),e._v(" interface must be used.")],1),e._v(" "),t("p",[e._v("For more information, see "),t("a",{attrs:{href:"/docs/go-client/create-workflows"}},[e._v("Creating Workflows")]),e._v(".")]),e._v(" "),t("h2",{attrs:{id:"links"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#links"}},[e._v("#")]),e._v(" Links")]),e._v(" "),t("ul",[t("li",[e._v("GitHub project: "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber-go/cadence-client"),t("OutboundLink")],1)]),e._v(" "),t("li",[e._v("Samples: "),t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber-common/cadence-samples"),t("OutboundLink")],1)]),e._v(" "),t("li",[e._v("GoDoc documentation: "),t("a",{attrs:{href:"https://godoc.org/go.uber.org/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://godoc.org/go.uber.org/cadence"),t("OutboundLink")],1)])])])}),[],!1,null,null,null);t.default=n.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[93],{400:function(e,t,o){"use strict";o.r(t);var r=o(0),n=Object(r.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"go-client"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#go-client"}},[e._v("#")]),e._v(" Go client")]),e._v(" "),t("h2",{attrs:{id:"overview"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#overview"}},[e._v("#")]),e._v(" Overview")]),e._v(" "),t("p",[e._v("Go client attempts to follow Go language conventions. The conversion of a Go program to the fault-oblivious "),t("Term",{attrs:{term:"workflow"}}),e._v(" function is expected to be pretty mechanical.")],1),e._v(" "),t("p",[e._v("Cadence requires determinism of the "),t("Term",{attrs:{term:"workflow"}}),e._v(" code. It supports deterministic execution of the multithreaded code and constructs like "),t("code",[e._v("select")]),e._v(" that are non-deterministic by Go design. The Cadence solution is to provide corresponding constructs in the form of interfaces that have similar capability but support deterministic execution.")],1),e._v(" "),t("p",[e._v("For example, instead of native Go channels, "),t("Term",{attrs:{term:"workflow"}}),e._v(" code must use the "),t("code",[e._v("workflow.Channel")]),e._v(" interface. 
Instead of "),t("code",[e._v("select")]),e._v(", the "),t("code",[e._v("workflow.Selector")]),e._v(" interface must be used.")],1),e._v(" "),t("p",[e._v("For more information, see "),t("a",{attrs:{href:"/docs/go-client/create-workflows"}},[e._v("Creating Workflows")]),e._v(".")]),e._v(" "),t("h2",{attrs:{id:"links"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#links"}},[e._v("#")]),e._v(" Links")]),e._v(" "),t("ul",[t("li",[e._v("GitHub project: "),t("a",{attrs:{href:"https://github.com/uber-go/cadence-client",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber-go/cadence-client"),t("OutboundLink")],1)]),e._v(" "),t("li",[e._v("Samples: "),t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://github.com/uber-common/cadence-samples"),t("OutboundLink")],1)]),e._v(" "),t("li",[e._v("GoDoc documentation: "),t("a",{attrs:{href:"https://godoc.org/go.uber.org/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("https://godoc.org/go.uber.org/cadence"),t("OutboundLink")],1)])])])}),[],!1,null,null,null);t.default=n.exports}}]); \ No newline at end of file diff --git a/assets/js/94.77288551.js b/assets/js/94.775f5668.js similarity index 99% rename from assets/js/94.77288551.js rename to assets/js/94.775f5668.js index 304c01fd7..725a86d3c 100644 --- a/assets/js/94.77288551.js +++ b/assets/js/94.775f5668.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[94],{400:function(e,a,t){"use strict";t.r(a);var r=t(0),s=Object(r.a)({},(function(){var e=this,a=e._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[a("h1",{attrs:{id:"command-line-interface"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#command-line-interface"}},[e._v("#")]),e._v(" Command Line Interface")]),e._v(" "),a("p",[e._v("The Cadence "),a("Term",{attrs:{term:"CLI"}}),e._v(" is a command-line tool you can use to perform various "),a("Term",{attrs:{term:"task",show:"tasks"}}),e._v(" on a Cadence server. It can perform\n"),a("Term",{attrs:{term:"domain"}}),e._v(" operations such as register, update, and describe as well as "),a("Term",{attrs:{term:"workflow"}}),e._v(" operations like start\n"),a("Term",{attrs:{term:"workflow"}}),e._v(", show "),a("Term",{attrs:{term:"workflow"}}),e._v(" history, and "),a("Term",{attrs:{term:"signal"}}),e._v(" "),a("Term",{attrs:{term:"workflow"}}),e._v(".")],1),e._v(" "),a("h2",{attrs:{id:"using-the-cli"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#using-the-cli"}},[e._v("#")]),e._v(" Using the CLI")]),e._v(" "),a("h3",{attrs:{id:"homebrew"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#homebrew"}},[e._v("#")]),e._v(" Homebrew")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("brew install cadence-workflow\n")])])]),a("p",[e._v("After the installation is done, you can use CLI:")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("cadence --help\n")])])]),a("p",[e._v("This will always install the latest version. 
Follow "),a("a",{attrs:{href:"https://github.com/uber/cadence/discussions/4457",target:"_blank",rel:"noopener noreferrer"}},[e._v("this instructions"),a("OutboundLink")],1),e._v(" if you need to install older versions of Cadence CLI.")]),e._v(" "),a("h3",{attrs:{id:"docker"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#docker"}},[e._v("#")]),e._v(" Docker")]),e._v(" "),a("p",[e._v("The Cadence "),a("Term",{attrs:{term:"CLI"}}),e._v(" can be used directly from the Docker Hub image "),a("em",[e._v("ubercadence/cli")]),e._v(" or by building the "),a("Term",{attrs:{term:"CLI"}}),e._v(" tool\nlocally.")],1),e._v(" "),a("p",[e._v("Example of using the docker image to describe a "),a("Term",{attrs:{term:"domain",show:""}})],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-it")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("frontendAddress"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("p",[a("code",[e._v("master")]),e._v(" will be the latest CLI binary from the project. But you can specify a version to best match your server version:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-it")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("version"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("frontendAddress"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("p",[e._v("For example "),a("code",[e._v("docker run --rm ubercadence/cli:0.21.3 --domain samples-domain domain describe")]),e._v(" will be the CLI that is released as part of the "),a("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.21.3",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.21.3 release"),a("OutboundLink")],1),e._v(".\nSee "),a("a",{attrs:{href:"https://hub.docker.com/r/ubercadence/cli/tags?page=1&ordering=last_updated",target:"_blank",rel:"noopener noreferrer"}},[e._v("docker hub page"),a("OutboundLink")],1),e._v(" for all the CLI image tags.\nNote that CLI versions of 0.20.0 works for all server versions of 0.12 to 0.19 as well. 
That's because "),a("a",{attrs:{href:"https://stackoverflow.com/questions/68217385/what-is-clientversionnotsupportederror-and-how-to-resolve-it",target:"_blank",rel:"noopener noreferrer"}},[e._v("the CLI version doesn't change in those versions"),a("OutboundLink")],1),e._v(".")]),e._v(" "),a("p",[e._v('NOTE: On Docker versions 18.03 and later, you may get a "connection refused" error when connecting to local server. You can work around this by setting the host to "host.docker.internal" (see '),a("a",{attrs:{href:"https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),a("OutboundLink")],1),e._v(" for more info).")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-it")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" host.docker.internal:7933 "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("p",[e._v("NOTE: Be sure to update your image when you want to try new features: "),a("code",[e._v("docker pull ubercadence/cli:master")])]),e._v(" "),a("p",[e._v("NOTE: If you are running docker-compose Cadence server, you can also logon to the container to execute CLI:")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("docker exec -it docker_cadence_1 /bin/bash\n\n# cadence --address $(hostname -i):7933 --do samples domain register\n")])])]),a("h3",{attrs:{id:"build-it-yourself"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#build-it-yourself"}},[e._v("#")]),e._v(" Build it yourself")]),e._v(" "),a("p",[e._v("To build the "),a("Term",{attrs:{term:"CLI"}}),e._v(" tool locally, clone the "),a("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence server repo"),a("OutboundLink")],1),e._v(", check out the version tag (e.g. "),a("code",[e._v("git checkout v0.21.3")]),e._v(") and run\n"),a("code",[e._v("make tools")]),e._v(". This produces an executable called "),a("code",[e._v("cadence")]),e._v(". 
With a local build, the same command to\ndescribe a "),a("Term",{attrs:{term:"domain"}}),e._v(" would look like this:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("p",[e._v("Alternatively, you can build the CLI image; see the "),a("RouterLink",{attrs:{to:"/docs/06-cli/docker/#diy-building-an-image-for-any-tag-or-branch"}},[e._v("instructions")]),e._v(".")],1),e._v(" "),a("h2",{attrs:{id:"documentation"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#documentation"}},[e._v("#")]),e._v(" Documentation")]),e._v(" "),a("p",[e._v("The CLI is documented via "),a("code",[e._v("--help")]),e._v(" or "),a("code",[e._v("-h")]),e._v(" at every command level:")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("$cadence --help\nNAME:\n cadence - A command-line tool for cadence users\n\nUSAGE:\n cadence [global options] command [command options] [arguments...]\n\nVERSION:\n 0.18.4\n\nCOMMANDS:\n domain, d Operate cadence domain\n workflow, wf Operate cadence workflow\n tasklist, tl Operate cadence tasklist\n admin, adm Run admin operation\n cluster, cl Operate cadence cluster\n help, h Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n --address value, --ad value host:port for cadence frontend service [$CADENCE_CLI_ADDRESS]\n --domain value, --do value cadence workflow domain [$CADENCE_CLI_DOMAIN]\n --context_timeout value, --ct value optional timeout for context of RPC call in seconds (default: 5) [$CADENCE_CONTEXT_TIMEOUT]\n --help, -h show help\n --version, -v print the version\n")])])]),a("p",[e._v("And:")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("$cadence workflow -h\nNAME:\n cadence workflow - Operate cadence workflow\n\nUSAGE:\n cadence workflow command [command options] [arguments...]\n\nCOMMANDS:\n activity, act operate activities of workflow\n show show workflow history\n showid show workflow history with given workflow_id and run_id (a shortcut of `show -w -r `). run_id is only required for archived history\n start start a new workflow execution\n run start a new workflow execution and get workflow progress\n cancel, c cancel a workflow execution\n signal, s signal a workflow execution\n signalwithstart signal the current open workflow if exists, or attempt to start a new run based on IDReusePolicy and signals it\n terminate, term terminate a workflow execution\n list, l list open or closed workflow executions\n listall, la list all open or closed workflow executions\n listarchived list archived workflow executions\n scan, sc, scanall scan workflow executions (need to enable Cadence server on ElasticSearch). 
It will be faster than listall, but results are not sorted.\n count, cnt count number of workflow executions (need to enable Cadence server on ElasticSearch)\n query query workflow execution\n stack query workflow execution with __stack_trace as query type\n describe, desc show information of workflow execution\n describeid, descid show information of workflow execution with given workflow_id and optional run_id (a shortcut of `describe -w -r `)\n observe, ob show the progress of workflow history\n observeid, obid show the progress of workflow history with given workflow_id and optional run_id (a shortcut of `observe -w -r `)\n reset, rs reset the workflow by either eventID or resetType.\n reset-batch reset workflow in batch by resetType: LastDecisionCompleted,LastContinuedAsNew,BadBinary,DecisionCompletedTime,FirstDecisionScheduled,LastDecisionScheduled,FirstDecisionCompleted. To get base workflowIDs/runIDs to reset, source is from input file or visibility query.\n batch batch operation on a list of workflows from query.\n\nOPTIONS:\n --help, -h show help\n")])])]),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("$cadence wf signal -h\nNAME:\n cadence workflow signal - signal a workflow execution\n\nUSAGE:\n cadence workflow signal [command options] [arguments...]\n\nOPTIONS:\n --workflow_id value, --wid value, -w value WorkflowID\n --run_id value, --rid value, -r value RunID\n --name value, -n value SignalName\n --input value, -i value Input for the signal, in JSON format.\n --input_file value, --if value Input for the signal from JSON file.\n\n")])])]),a("p",[e._v("And so on.")]),e._v(" "),a("p",[e._v("The example commands below will use "),a("code",[e._v("cadence")]),e._v(" for brevity.")]),e._v(" "),a("h2",{attrs:{id:"environment-variables"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#environment-variables"}},[e._v("#")]),e._v(" Environment variables")]),e._v(" "),a("p",[e._v("Setting environment variables for repeated parameters can shorten the "),a("Term",{attrs:{term:"CLI"}}),e._v(" commands.")],1),e._v(" "),a("ul",[a("li",[a("strong",[e._v("CADENCE_CLI_ADDRESS")]),e._v(" - host:port for the Cadence frontend service; the default is the local server")]),e._v(" "),a("li",[a("strong",[e._v("CADENCE_CLI_DOMAIN")]),e._v(" - default "),a("Term",{attrs:{term:"workflow"}}),e._v(" "),a("Term",{attrs:{term:"domain"}}),e._v(", so you don't need to specify "),a("code",[e._v("--domain")])],1)]),e._v(" "),a("h2",{attrs:{id:"quick-start"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#quick-start"}},[e._v("#")]),e._v(" Quick Start")]),e._v(" "),a("p",[e._v("Run "),a("code",[e._v("cadence")]),e._v(" for help on top-level commands and global options.\nRun "),a("code",[e._v("cadence domain")]),e._v(" for help on "),a("Term",{attrs:{term:"domain"}}),e._v(" operations.\nRun "),a("code",[e._v("cadence workflow")]),e._v(" for help on "),a("Term",{attrs:{term:"workflow"}}),e._v(" operations.\nRun "),a("code",[e._v("cadence tasklist")]),e._v(" for help on tasklist operations.\n("),a("code",[e._v("cadence help")]),e._v(" and "),a("code",[e._v("cadence help [domain|workflow]")]),e._v(" will also print help messages.)")],1),e._v(" "),a("p",[a("strong",[e._v("Note:")]),e._v(" make sure you have a Cadence server running before using the "),a("Term",{attrs:{term:"CLI"}})],1),e._v(" "),a("h3",{attrs:{id:"domain-operation-examples"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#domain-operation-examples"}},[e._v("#")]),e._v(" Domain operation 
examples")]),e._v(" "),a("ul",[a("li",[e._v("Register a new "),a("Term",{attrs:{term:"domain"}}),e._v(' named "samples-domain":')],1)]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain register\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# OR using short alias")]),e._v("\ncadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain d re \n")])])]),a("p",[e._v("If your Cadence cluster has enable "),a("a",{attrs:{href:"https://cadenceworkflow.io/docs/concepts/cross-dc-replication/",target:"_blank",rel:"noopener noreferrer"}},[e._v("global domain(XDC replication)"),a("OutboundLink")],1),e._v(", then you have to specify the replicaiton settings when registering a domain:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domains")]),e._v(" amples-domain domain register "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--active_cluster")]),e._v(" clusterNameA "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--clusters")]),e._v(" clusterNameA clusterNameB\n")])])]),a("ul",[a("li",[e._v('View "samples-domain" details:')])]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("h3",{attrs:{id:"workflow-operation-examples"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#workflow-operation-examples"}},[e._v("#")]),e._v(" Workflow operation examples")]),e._v(" "),a("p",[e._v("The following examples assume the CADENCE_CLI_DOMAIN environment variable is set.")]),e._v(" "),a("h4",{attrs:{id:"run-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#run-workflow"}},[e._v("#")]),e._v(" Run workflow")]),e._v(" "),a("p",[e._v("Start a "),a("Term",{attrs:{term:"workflow"}}),e._v(" and see its progress. 
This command doesn't finish until the "),a("Term",{attrs:{term:"workflow"}}),e._v(" completes.")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# view help messages for workflow run")]),e._v("\ncadence workflow run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-h")]),e._v("\n")])])]),a("p",[e._v("Brief explanation:\nTo run a "),a("Term",{attrs:{term:"workflow"}}),e._v(", the user must specify the following:")],1),e._v(" "),a("ol",[a("li",[e._v("Tasklist name (--tl)")]),e._v(" "),a("li",[e._v("Workflow type (--wt)")]),e._v(" "),a("li",[e._v("Execution start to close timeout in seconds (--et)")]),e._v(" "),a("li",[e._v("Input in JSON format (-i) (optional)")])]),e._v(" "),a("p",[e._v("This example uses "),a("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/blob/master/cmd/samples/recipes/helloworld/helloworld_workflow.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("this cadence-samples workflow"),a("OutboundLink")],1),e._v("\nand takes a string as input with the "),a("code",[e._v("-i '\"cadence\"'")]),e._v(" parameter. Single quotes ("),a("code",[e._v("''")]),e._v(") are used to wrap input as JSON.")]),e._v(" "),a("p",[a("strong",[e._v("Note:")]),e._v(" You need to start the "),a("Term",{attrs:{term:"worker"}}),e._v(" so that the "),a("Term",{attrs:{term:"workflow"}}),e._v(" can make progress.\n(Run "),a("code",[e._v("make && ./bin/helloworld -m worker")]),e._v(" in cadence-samples to start the "),a("Term",{attrs:{term:"worker"}}),e._v(")")],1),e._v(" "),a("h4",{attrs:{id:"show-running-workers-of-a-tasklist"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#show-running-workers-of-a-tasklist"}},[e._v("#")]),e._v(" Show running workers of a tasklist")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence tasklist desc "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup\n")])])]),a("h4",{attrs:{id:"start-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#start-workflow"}},[e._v("#")]),e._v(" Start workflow")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# view help 
messages for workflow start")]),e._v("\ncadence workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-h")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# for a workflow with multiple inputs, separate each json with space/newline like")]),e._v("\ncadence workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.WorkflowWith3Args "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('\'"your_input_string" 123 {"Name":"my-string", "Age":12345}\'')]),e._v("\n")])])]),a("p",[e._v("The "),a("Term",{attrs:{term:"workflow"}}),e._v(" "),a("code",[e._v("start")]),e._v(" command is similar to the "),a("code",[e._v("run")]),e._v(" command, but immediately returns the workflow_id and\nrun_id after starting the "),a("Term",{attrs:{term:"workflow"}}),e._v(". Use the "),a("code",[e._v("show")]),e._v(" command to view the "),a("Term",{attrs:{term:"workflow"}}),e._v("'s history/progress.")],1),e._v(" "),a("h5",{attrs:{id:"reuse-the-same-workflow-id-when-starting-running-a-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#reuse-the-same-workflow-id-when-starting-running-a-workflow"}},[e._v("#")]),e._v(" Reuse the same workflow id when starting/running a workflow")]),e._v(" "),a("p",[e._v("Use option "),a("code",[e._v("--workflowidreusepolicy")]),e._v(" or "),a("code",[e._v("--wrp")]),e._v(" to configure the "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" reuse policy.\n"),a("strong",[e._v("Option 0 AllowDuplicateFailedOnly:")]),e._v(" Allow starting a "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" using the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" when a "),a("Term",{attrs:{term:"workflow"}}),e._v(" with the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" is not already running and the last execution close state is one of "),a("em",[e._v("[terminated, cancelled, timedout, failed]")]),e._v(".\n"),a("strong",[e._v("Option 1 AllowDuplicate:")]),e._v(" Allow starting a "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" using the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" when a "),a("Term",{attrs:{term:"workflow"}}),e._v(" with the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" is not already running.\n"),a("strong",[e._v("Option 2 RejectDuplicate:")]),e._v(" Do not allow starting a "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" using the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" as a previous "),a("Term",{attrs:{term:"workflow"}}),e._v(".")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# use AllowDuplicateFailedOnly option to start a workflow")]),e._v("\ncadence workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" 
"),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wid")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wrp")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("0")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# use AllowDuplicate option to run a workflow")]),e._v("\ncadence workflow run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wid")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wrp")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n")])])]),a("h5",{attrs:{id:"start-a-workflow-with-a-memo"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#start-a-workflow-with-a-memo"}},[e._v("#")]),e._v(" Start a workflow with a memo")]),e._v(" "),a("p",[e._v("Memos are immutable key/value pairs that can be attached to a "),a("Term",{attrs:{term:"workflow"}}),e._v(" run when starting the "),a("Term",{attrs:{term:"workflow"}}),e._v(". These are\nvisible when listing "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(". 
More information on memos can be found\n"),a("RouterLink",{attrs:{to:"/docs/concepts/search-workflows/#memo-vs-search-attributes"}},[e._v("here")]),e._v(".")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence wf start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-memo_key")]),e._v(" '\"Service\" \"Env\" \"Instance\"' "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-memo")]),e._v(" '\"serverName1\" \"test\" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("5")]),e._v("'\n")])])]),a("h4",{attrs:{id:"show-workflow-history"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#show-workflow-history"}},[e._v("#")]),e._v(" Show workflow history")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow show "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" 3ea6b242-b23c-4279-bb13-f215661b4717 "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" 866ae14c-88cf-4f1e-980f-571e031d71b0\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a shortcut of this is (without -w -r flag)")]),e._v("\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# if run_id is not provided, it will show the latest run history of that workflow_id")]),e._v("\ncadence workflow show "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" 3ea6b242-b23c-4279-bb13-f215661b4717\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a shortcut of this is")]),e._v("\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717\n")])])]),a("h4",{attrs:{id:"show-workflow-execution-information"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#show-workflow-execution-information"}},[e._v("#")]),e._v(" Show workflow execution information")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow describe "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" 3ea6b242-b23c-4279-bb13-f215661b4717 "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" 866ae14c-88cf-4f1e-980f-571e031d71b0\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a shortcut of this is (without -w -r flag)")]),e._v("\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# if run_id is not provided, it will show the latest workflow execution of that workflow_id")]),e._v("\ncadence workflow describe "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" 3ea6b242-b23c-4279-bb13-f215661b4717\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a 
shortcut of this is")]),e._v("\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717\n")])])]),a("h4",{attrs:{id:"list-closed-or-open-workflow-executions"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#list-closed-or-open-workflow-executions"}},[e._v("#")]),e._v(" List closed or open workflow executions")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow list\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# default will only show one page, to view more items, use --more flag")]),e._v("\ncadence workflow list "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-m")]),e._v("\n")])])]),a("p",[e._v("Use "),a("strong",[e._v("--query")]),e._v(" to list "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" with SQL like "),a("Term",{attrs:{term:"query",show:""}})],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow list "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--query")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("\"WorkflowType='main.SampleParentWorkflow' AND CloseTime = missing \"")]),e._v("\n")])])]),a("p",[e._v("This will return all open "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(' with workflowType as "main.SampleParentWorkflow".')],1),e._v(" "),a("h4",{attrs:{id:"query-workflow-execution"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#query-workflow-execution"}},[e._v("#")]),e._v(" Query workflow execution")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# use custom query type")]),e._v("\ncadence workflow query "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--qt")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("query-type"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v('# use build-in query type "__stack_trace" which is supported by Cadence client library')]),e._v("\ncadence workflow query "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--qt")]),e._v(" __stack_trace\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a shortcut to query using __stack_trace is (without --qt flag)")]),e._v("\ncadence workflow stack "),a("span",{pre:!0,attrs:{class:"token parameter 
variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),a("h4",{attrs:{id:"signal-cancel-terminate-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signal-cancel-terminate-workflow"}},[e._v("#")]),e._v(" Signal, cancel, terminate workflow")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# signal")]),e._v("\ncadence workflow signal "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-n")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("signal-name"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"signal-value\"'")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# cancel")]),e._v("\ncadence workflow cancel "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# terminate")]),e._v("\ncadence workflow terminate "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v("\n")])])]),a("p",[e._v("Terminating a running "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" will record a WorkflowExecutionTerminated "),a("Term",{attrs:{term:"event"}}),e._v(" as the closing "),a("Term",{attrs:{term:"event"}}),e._v(" in the history. 
No more "),a("Term",{attrs:{term:"decision_task",show:"decision_tasks"}}),e._v(" will be scheduled for a terminated "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(".\nCanceling a running "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" will record a WorkflowExecutionCancelRequested "),a("Term",{attrs:{term:"event"}}),e._v(" in the history, and a new "),a("Term",{attrs:{term:"decision_task"}}),e._v(" will be scheduled. The "),a("Term",{attrs:{term:"workflow"}}),e._v(" has a chance to do some clean up work after cancellation.")],1),e._v(" "),a("h4",{attrs:{id:"signal-cancel-terminate-workflows-as-a-batch-job"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signal-cancel-terminate-workflows-as-a-batch-job"}},[e._v("#")]),e._v(" Signal, cancel, terminate workflows as a batch job")]),e._v(" "),a("p",[e._v("Batch job is based on List Workflow Query("),a("strong",[e._v("--query")]),e._v("). It supports "),a("Term",{attrs:{term:"signal"}}),e._v(", cancel and terminate as batch job type.\nFor terminating "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" as batch job, it will terminte the children recursively.")],1),e._v(" "),a("p",[e._v("Start a batch job(using "),a("Term",{attrs:{term:"signal"}}),e._v(" as batch type):")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain wf batch start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--query")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("\"WorkflowType='main.SampleParentWorkflow' AND CloseTime=missing\"")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"test"')]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--bt")]),e._v(" signal "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--sig")]),e._v(" testname\nThis batch job will be operating on "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("5")]),e._v(" workflows.\nPlease confirm"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("[")]),e._v("Yes/No"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("]")]),e._v(":yes\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("{")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"jobID"')]),a("span",{pre:!0,attrs:{class:"token builtin class-name"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v(",\n "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"msg"')]),a("span",{pre:!0,attrs:{class:"token builtin class-name"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"batch job is started"')]),e._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("}")]),e._v("\n\n")])])]),a("p",[e._v("You need to remember the JobID or use List command to get all your batch jobs:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain wf batch list\n")])])]),a("p",[e._v("Describe the progress of a batch job:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence 
"),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain wf batch desc "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-jid")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("batch-job-id"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),a("p",[e._v("Terminate a batch job:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain wf batch terminate "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-jid")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("batch-job-id"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),a("p",[e._v("Note that the operation performed by a batch will not be rolled back by terminating the batch. However, you can use reset to rollback your "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(".")],1),e._v(" "),a("h4",{attrs:{id:"restart-reset-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#restart-reset-workflow"}},[e._v("#")]),e._v(" Restart, reset workflow")]),e._v(" "),a("p",[e._v("The Reset command allows resetting a "),a("Term",{attrs:{term:"workflow"}}),e._v(" to a particular point and continue running from there.\nThere are a lot of use cases:")],1),e._v(" "),a("ul",[a("li",[e._v("Rerun a failed "),a("Term",{attrs:{term:"workflow"}}),e._v(" from the beginning with the same start parameters.")],1),e._v(" "),a("li",[e._v("Rerun a failed "),a("Term",{attrs:{term:"workflow"}}),e._v(" from the failing point without losing the achieved progress(history).")],1),e._v(" "),a("li",[e._v("After deploying new code, reset an open "),a("Term",{attrs:{term:"workflow"}}),e._v(" to let the "),a("Term",{attrs:{term:"workflow"}}),e._v(" run to different flows.")],1)]),e._v(" "),a("p",[e._v("You can reset to some predefined "),a("Term",{attrs:{term:"event"}}),e._v(" types:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow reset "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reset_type")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("reset_type"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"some_reason"')]),e._v("\n")])])]),a("ul",[a("li",[e._v("FirstDecisionCompleted: reset to the beginning of the history.")]),e._v(" "),a("li",[e._v("LastDecisionCompleted: reset to the end of the history.")]),e._v(" "),a("li",[e._v("LastContinuedAsNew: reset to the end of the history for the previous run.")])]),e._v(" "),a("p",[e._v("If you are familiar with the Cadence history "),a("Term",{attrs:{term:"event"}}),e._v(", You 
can also reset to any "),a("Term",{attrs:{term:"decision"}}),e._v(" finish "),a("Term",{attrs:{term:"event"}}),e._v(" by using:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow reset "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--event_id")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("decision_finish_event_id"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"some_reason"')]),e._v("\n")])])]),a("p",[e._v("Some things to note:")]),e._v(" "),a("ul",[a("li",[e._v("When reset, a new run will be kicked off with the same workflowID. But if there is a running execution for the workflow(workflowID), the current run will be terminated.")]),e._v(" "),a("li",[e._v("decision_finish_event_id is the ID of "),a("Term",{attrs:{term:"event",show:"events"}}),e._v(" of the type: DecisionTaskComplete/DecisionTaskFailed/DecisionTaskTimeout.")],1),e._v(" "),a("li",[e._v("To restart a "),a("Term",{attrs:{term:"workflow"}}),e._v(" from the beginning, reset to the first "),a("Term",{attrs:{term:"decision_task"}}),e._v(" finish "),a("Term",{attrs:{term:"event"}}),e._v(".")],1)]),e._v(" "),a("p",[e._v("To reset multiple "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(", you can use batch reset command:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow reset-batch "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--input_file")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("file_of_workflows_to_reset"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reset_type")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("reset_type"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"some_reason"')]),e._v("\n")])])]),a("h4",{attrs:{id:"recovery-from-bad-deployment-auto-reset-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#recovery-from-bad-deployment-auto-reset-workflow"}},[e._v("#")]),e._v(" Recovery from bad deployment -- auto-reset workflow")]),e._v(" "),a("p",[e._v("If a bad deployment lets a "),a("Term",{attrs:{term:"workflow"}}),e._v(" run into a wrong state, you might want to reset the "),a("Term",{attrs:{term:"workflow"}}),e._v(" to the point that the bad deployment started to run. But usually it is not easy to find out all the "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" impacted, and every reset point for each "),a("Term",{attrs:{term:"workflow"}}),e._v(". 
In this case, auto-reset will automatically reset all the "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" given a bad deployment identifier.")],1),e._v(" "),a("p",[e._v("Let's get familiar with some concepts. Each deployment will have an identifier, we call it \""),a("strong",[e._v("Binary Checksum")]),e._v('" as it is usually generated by the md5sum of a binary file. For a '),a("Term",{attrs:{term:"workflow"}}),e._v(", each binary checksum will be associated with an "),a("strong",[e._v("auto-reset point")]),e._v(", which contains a "),a("strong",[e._v("runID")]),e._v(", an "),a("strong",[e._v("eventID")]),e._v(", and the "),a("strong",[e._v("created_time")]),e._v(" that binary/deployment made the first "),a("Term",{attrs:{term:"decision"}}),e._v(" for the "),a("Term",{attrs:{term:"workflow"}}),e._v(".")],1),e._v(" "),a("p",[e._v("To find out which "),a("strong",[e._v("binary checksum")]),e._v(" of the bad deployment to reset, you should be aware of at least one "),a("Term",{attrs:{term:"workflow"}}),e._v(" running into a bad state. Use the describe command with "),a("strong",[e._v("--reset_points_only")]),e._v(" option to show all the reset points:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence wf desc "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("WorkflowID"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reset_points_only")]),e._v("\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" BINARY CHECKSUM "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" CREATE TIME "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" RUNID "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" EVENTID "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v("\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" c84c5afa552613a83294793f4e664a7f "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("2019")]),e._v("-05-24 "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("10")]),e._v(":01:00.398455019 "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" 2dd29ab7-2dd8-4668-83e0-89cae261cfb1 "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("4")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v("\n"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" aae748fdc557a3f873adbe1dd066713f "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("2019")]),e._v("-05-24 "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("11")]),e._v(":01:00.067691445 "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" d42d21b8-2adb-4313-b069-3837d44d6ce6 "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token 
number"}},[e._v("4")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("..")]),e._v(".\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("..")]),e._v(".\n")])])]),a("p",[e._v("Then use this command to tell Cadence to auto-reset all "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" impacted by the bad deployment. The command will store the bad binary checksum into "),a("Term",{attrs:{term:"domain"}}),e._v(" info and trigger a process to reset all your "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(".")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("YourDomainName"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" domain update "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--add_bad_binary")]),e._v(" aae748fdc557a3f873adbe1dd066713f "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"rollback bad deployment"')]),e._v("\n")])])]),a("p",[e._v("As you add the bad binary checksum to your "),a("Term",{attrs:{term:"domain"}}),e._v(", Cadence will not dispatch any "),a("Term",{attrs:{term:"decision_task",show:"decision_tasks"}}),e._v(" to the bad binary. So make sure that you have rolled back to a good deployment(or roll out new bits with bug fixes). Otherwise your "),a("Term",{attrs:{term:"workflow"}}),e._v(" can't make any progress after auto-reset.")],1)])}),[],!1,null,null,null);a.default=s.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[94],{402:function(e,a,t){"use strict";t.r(a);var r=t(0),s=Object(r.a)({},(function(){var e=this,a=e._self._c;return a("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[a("h1",{attrs:{id:"command-line-interface"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#command-line-interface"}},[e._v("#")]),e._v(" Command Line Interface")]),e._v(" "),a("p",[e._v("The Cadence "),a("Term",{attrs:{term:"CLI"}}),e._v(" is a command-line tool you can use to perform various "),a("Term",{attrs:{term:"task",show:"tasks"}}),e._v(" on a Cadence server. 
It can perform\n"),a("Term",{attrs:{term:"domain"}}),e._v(" operations such as register, update, and describe as well as "),a("Term",{attrs:{term:"workflow"}}),e._v(" operations like start\n"),a("Term",{attrs:{term:"workflow"}}),e._v(", show "),a("Term",{attrs:{term:"workflow"}}),e._v(" history, and "),a("Term",{attrs:{term:"signal"}}),e._v(" "),a("Term",{attrs:{term:"workflow"}}),e._v(".")],1),e._v(" "),a("h2",{attrs:{id:"using-the-cli"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#using-the-cli"}},[e._v("#")]),e._v(" Using the CLI")]),e._v(" "),a("h3",{attrs:{id:"homebrew"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#homebrew"}},[e._v("#")]),e._v(" Homebrew")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("brew install cadence-workflow\n")])])]),a("p",[e._v("After the installation is done, you can use the CLI:")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("cadence --help\n")])])]),a("p",[e._v("This will always install the latest version. Follow "),a("a",{attrs:{href:"https://github.com/uber/cadence/discussions/4457",target:"_blank",rel:"noopener noreferrer"}},[e._v("these instructions"),a("OutboundLink")],1),e._v(" if you need to install an older version of the Cadence CLI.")]),e._v(" "),a("h3",{attrs:{id:"docker"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#docker"}},[e._v("#")]),e._v(" Docker")]),e._v(" "),a("p",[e._v("The Cadence "),a("Term",{attrs:{term:"CLI"}}),e._v(" can be used directly from the Docker Hub image "),a("em",[e._v("ubercadence/cli")]),e._v(" or by building the "),a("Term",{attrs:{term:"CLI"}}),e._v(" tool\nlocally.")],1),e._v(" "),a("p",[e._v("Example of using the Docker image to describe a "),a("Term",{attrs:{term:"domain"}}),e._v(":")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-it")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("frontendAddress"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("p",[a("code",[e._v("master")]),e._v(" will be the latest CLI binary from the project. 
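To avoid retyping the full Docker invocation each time, you can wrap it in a shell alias. This is a minimal sketch, assuming the same placeholder frontend address used in the example above:

```bash
# Hypothetical convenience alias; substitute your real frontend address.
alias cadence='docker run -it --rm ubercadence/cli:master --address <frontendAddress>'

cadence --domain samples-domain domain describe
```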
But you can specify a version to best match your server version:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-it")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("version"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("frontendAddress"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("p",[e._v("For example "),a("code",[e._v("docker run --rm ubercadence/cli:0.21.3 --domain samples-domain domain describe")]),e._v(" will be the CLI that is released as part of the "),a("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.21.3",target:"_blank",rel:"noopener noreferrer"}},[e._v("v0.21.3 release"),a("OutboundLink")],1),e._v(".\nSee the "),a("a",{attrs:{href:"https://hub.docker.com/r/ubercadence/cli/tags?page=1&ordering=last_updated",target:"_blank",rel:"noopener noreferrer"}},[e._v("Docker Hub page"),a("OutboundLink")],1),e._v(" for all the CLI image tags.\nNote that CLI version 0.20.0 works for all server versions from 0.12 to 0.19 as well. That's because "),a("a",{attrs:{href:"https://stackoverflow.com/questions/68217385/what-is-clientversionnotsupportederror-and-how-to-resolve-it",target:"_blank",rel:"noopener noreferrer"}},[e._v("the CLI version doesn't change in those versions"),a("OutboundLink")],1),e._v(".")]),e._v(" "),a("p",[e._v('NOTE: On Docker versions 18.03 and later, you may get a "connection refused" error when connecting to a local server. 
You can work around this by setting the host to "host.docker.internal" (see '),a("a",{attrs:{href:"https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),a("OutboundLink")],1),e._v(" for more info).")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token function"}},[e._v("docker")]),e._v(" run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-it")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--rm")]),e._v(" ubercadence/cli:master "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" host.docker.internal:7933 "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("p",[e._v("NOTE: Be sure to update your image when you want to try new features: "),a("code",[e._v("docker pull ubercadence/cli:master")])]),e._v(" "),a("p",[e._v("NOTE: If you are running the docker-compose Cadence server, you can also log on to the container to execute the CLI:")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("docker exec -it docker_cadence_1 /bin/bash\n\n# cadence --address $(hostname -i):7933 --do samples domain register\n")])])]),a("h3",{attrs:{id:"build-it-yourself"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#build-it-yourself"}},[e._v("#")]),e._v(" Build it yourself")]),e._v(" "),a("p",[e._v("To build the "),a("Term",{attrs:{term:"CLI"}}),e._v(" tool locally, clone the "),a("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence server repo"),a("OutboundLink")],1),e._v(", check out the version tag (e.g. "),a("code",[e._v("git checkout v0.21.3")]),e._v(") and run\n"),a("code",[e._v("make tools")]),e._v(". This produces an executable called "),a("code",[e._v("cadence")]),e._v(". 
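Putting those steps together, a from-source build might look like the following sketch (v0.21.3 is just the example tag used above; check out whichever tag matches your server):

```bash
git clone https://github.com/uber/cadence.git
cd cadence
git checkout v0.21.3   # example tag; pick the one matching your server
make tools             # builds an executable named cadence
./cadence --help       # path to the binary may vary with your build setup
```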
With a local build, the same command to\ndescribe a "),a("Term",{attrs:{term:"domain"}}),e._v(" would look like this:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("p",[e._v("Alternatively, you can build the CLI image; see the "),a("RouterLink",{attrs:{to:"/docs/06-cli/docker/#diy-building-an-image-for-any-tag-or-branch"}},[e._v("instructions")]),e._v(".")],1),e._v(" "),a("h2",{attrs:{id:"documentation"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#documentation"}},[e._v("#")]),e._v(" Documentation")]),e._v(" "),a("p",[e._v("The CLI is documented by "),a("code",[e._v("--help")]),e._v(" or "),a("code",[e._v("-h")]),e._v(", which is available at every command level:")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("$cadence --help\nNAME:\n cadence - A command-line tool for cadence users\n\nUSAGE:\n cadence [global options] command [command options] [arguments...]\n\nVERSION:\n 0.18.4\n\nCOMMANDS:\n domain, d Operate cadence domain\n workflow, wf Operate cadence workflow\n tasklist, tl Operate cadence tasklist\n admin, adm Run admin operation\n cluster, cl Operate cadence cluster\n help, h Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n --address value, --ad value host:port for cadence frontend service [$CADENCE_CLI_ADDRESS]\n --domain value, --do value cadence workflow domain [$CADENCE_CLI_DOMAIN]\n --context_timeout value, --ct value optional timeout for context of RPC call in seconds (default: 5) [$CADENCE_CONTEXT_TIMEOUT]\n --help, -h show help\n --version, -v print the version\n")])])]),a("p",[e._v("And")]),e._v(" "),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("$cadence workflow -h\nNAME:\n cadence workflow - Operate cadence workflow\n\nUSAGE:\n cadence workflow command [command options] [arguments...]\n\nCOMMANDS:\n activity, act operate activities of workflow\n show show workflow history\n showid show workflow history with given workflow_id and run_id (a shortcut of `show -w -r `). run_id is only required for archived history\n start start a new workflow execution\n run start a new workflow execution and get workflow progress\n cancel, c cancel a workflow execution\n signal, s signal a workflow execution\n signalwithstart signal the current open workflow if exists, or attempt to start a new run based on IDResuePolicy and signals it\n terminate, term terminate a new workflow execution\n list, l list open or closed workflow executions\n listall, la list all open or closed workflow executions\n listarchived list archived workflow executions\n scan, sc, scanall scan workflow executions (need to enable Cadence server on ElasticSearch). 
It will be faster than listall, but result are not sorted.\n count, cnt count number of workflow executions (need to enable Cadence server on ElasticSearch)\n query query workflow execution\n stack query workflow execution with __stack_trace as query type\n describe, desc show information of workflow execution\n describeid, descid show information of workflow execution with given workflow_id and optional run_id (a shortcut of `describe -w -r `)\n observe, ob show the progress of workflow history\n observeid, obid show the progress of workflow history with given workflow_id and optional run_id (a shortcut of `observe -w -r `)\n reset, rs reset the workflow, by either eventID or resetType.\n reset-batch reset workflow in batch by resetType: LastDecisionCompleted,LastContinuedAsNew,BadBinary,DecisionCompletedTime,FirstDecisionScheduled,LastDecisionScheduled,FirstDecisionCompletedTo get base workflowIDs/runIDs to reset, source is from input file or visibility query.\n batch batch operation on a list of workflows from query.\n\nOPTIONS:\n --help, -h show help\n")])])]),a("div",{staticClass:"language- extra-class"},[a("pre",{pre:!0,attrs:{class:"language-text"}},[a("code",[e._v("$cadence wf signal -h\nNAME:\n cadence workflow signal - signal a workflow execution\n\nUSAGE:\n cadence workflow signal [command options] [arguments...]\n\nOPTIONS:\n --workflow_id value, --wid value, -w value WorkflowID\n --run_id value, --rid value, -r value RunID\n --name value, -n value SignalName\n --input value, -i value Input for the signal, in JSON format.\n --input_file value, --if value Input for the signal from JSON file.\n\n")])])]),a("p",[e._v("And etc.")]),e._v(" "),a("p",[e._v("The example commands below will use "),a("code",[e._v("cadence")]),e._v(" for brevity.")]),e._v(" "),a("h2",{attrs:{id:"environment-variables"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#environment-variables"}},[e._v("#")]),e._v(" Environment variables")]),e._v(" "),a("p",[e._v("Setting environment variables for repeated parameters can shorten the "),a("Term",{attrs:{term:"CLI"}}),e._v(" commands.")],1),e._v(" "),a("ul",[a("li",[a("strong",[e._v("CADENCE_CLI_ADDRESS")]),e._v(" - host:port for Cadence frontend service, the default is for the local server")]),e._v(" "),a("li",[a("strong",[e._v("CADENCE_CLI_DOMAIN")]),e._v(" - default "),a("Term",{attrs:{term:"workflow"}}),e._v(" "),a("Term",{attrs:{term:"domain"}}),e._v(", so you don't need to specify "),a("code",[e._v("--domain")])],1)]),e._v(" "),a("h2",{attrs:{id:"quick-start"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#quick-start"}},[e._v("#")]),e._v(" Quick Start")]),e._v(" "),a("p",[e._v("Run "),a("code",[e._v("cadence")]),e._v(" for help on top level commands and global options\nRun "),a("code",[e._v("cadence domain")]),e._v(" for help on "),a("Term",{attrs:{term:"domain"}}),e._v(" operations\nRun "),a("code",[e._v("cadence workflow")]),e._v(" for help on "),a("Term",{attrs:{term:"workflow"}}),e._v(" operations\nRun "),a("code",[e._v("cadence tasklist")]),e._v(" for help on tasklist operations\n("),a("code",[e._v("cadence help")]),e._v(", "),a("code",[e._v("cadence help [domain|workflow]")]),e._v(" will also print help messages)")],1),e._v(" "),a("p",[a("strong",[e._v("Note:")]),e._v(" make sure you have a Cadence server running before using "),a("Term",{attrs:{term:"CLI"}})],1),e._v(" "),a("h3",{attrs:{id:"domain-operation-examples"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#domain-operation-examples"}},[e._v("#")]),e._v(" Domain operation 
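For example, exporting both variables once lets later commands omit the --address and --domain flags. The values below are illustrative; 7933 is the default frontend port used elsewhere on this page:

```bash
# Illustrative values -- point these at your own cluster.
export CADENCE_CLI_ADDRESS=localhost:7933
export CADENCE_CLI_DOMAIN=samples-domain

cadence domain describe   # no --address/--domain flags needed now
```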
examples")]),e._v(" "),a("ul",[a("li",[e._v("Register a new "),a("Term",{attrs:{term:"domain"}}),e._v(' named "samples-domain":')],1)]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain register\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# OR using short alias")]),e._v("\ncadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain d re \n")])])]),a("p",[e._v("If your Cadence cluster has enable "),a("a",{attrs:{href:"https://cadenceworkflow.io/docs/concepts/cross-dc-replication/",target:"_blank",rel:"noopener noreferrer"}},[e._v("global domain(XDC replication)"),a("OutboundLink")],1),e._v(", then you have to specify the replicaiton settings when registering a domain:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domains")]),e._v(" amples-domain domain register "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--active_cluster")]),e._v(" clusterNameA "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--clusters")]),e._v(" clusterNameA clusterNameB\n")])])]),a("ul",[a("li",[e._v('View "samples-domain" details:')])]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--domain")]),e._v(" samples-domain domain describe\n")])])]),a("h3",{attrs:{id:"workflow-operation-examples"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#workflow-operation-examples"}},[e._v("#")]),e._v(" Workflow operation examples")]),e._v(" "),a("p",[e._v("The following examples assume the CADENCE_CLI_DOMAIN environment variable is set.")]),e._v(" "),a("h4",{attrs:{id:"run-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#run-workflow"}},[e._v("#")]),e._v(" Run workflow")]),e._v(" "),a("p",[e._v("Start a "),a("Term",{attrs:{term:"workflow"}}),e._v(" and see its progress. 
This command doesn't finish until the "),a("Term",{attrs:{term:"workflow"}}),e._v(" completes.")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# view help messages for workflow run")]),e._v("\ncadence workflow run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-h")]),e._v("\n")])])]),a("p",[e._v("Brief explanation:\nTo run a "),a("Term",{attrs:{term:"workflow"}}),e._v(", the user must specify the following:")],1),e._v(" "),a("ol",[a("li",[e._v("Tasklist name (--tl)")]),e._v(" "),a("li",[e._v("Workflow type (--wt)")]),e._v(" "),a("li",[e._v("Execution start to close timeout in seconds (--et)")]),e._v(" "),a("li",[e._v("Input in JSON format (-i) (optional)")])]),e._v(" "),a("p",[e._v("This example uses "),a("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/blob/master/cmd/samples/recipes/helloworld/helloworld_workflow.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("this cadence-samples workflow"),a("OutboundLink")],1),e._v("\nand takes a string as input with the "),a("code",[e._v("-i '\"cadence\"'")]),e._v(" parameter. Single quotes ("),a("code",[e._v("''")]),e._v(") are used to wrap input as JSON.")]),e._v(" "),a("p",[a("strong",[e._v("Note:")]),e._v(" You need to start the "),a("Term",{attrs:{term:"worker"}}),e._v(" so that the "),a("Term",{attrs:{term:"workflow"}}),e._v(" can make progress.\n(Run "),a("code",[e._v("make && ./bin/helloworld -m worker")]),e._v(" in cadence-samples to start the "),a("Term",{attrs:{term:"worker"}}),e._v(")")],1),e._v(" "),a("h4",{attrs:{id:"show-running-workers-of-a-tasklist"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#show-running-workers-of-a-tasklist"}},[e._v("#")]),e._v(" Show running workers of a tasklist")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence tasklist desc "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup\n")])])]),a("h4",{attrs:{id:"start-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#start-workflow"}},[e._v("#")]),e._v(" Start workflow")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# view help 
messages for workflow start")]),e._v("\ncadence workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-h")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# for a workflow with multiple inputs, separate each json with space/newline like")]),e._v("\ncadence workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.WorkflowWith3Args "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('\'"your_input_string" 123 {"Name":"my-string", "Age":12345}\'')]),e._v("\n")])])]),a("p",[e._v("The "),a("Term",{attrs:{term:"workflow"}}),e._v(" "),a("code",[e._v("start")]),e._v(" command is similar to the "),a("code",[e._v("run")]),e._v(" command, but immediately returns the workflow_id and\nrun_id after starting the "),a("Term",{attrs:{term:"workflow"}}),e._v(". Use the "),a("code",[e._v("show")]),e._v(" command to view the "),a("Term",{attrs:{term:"workflow"}}),e._v("'s history/progress.")],1),e._v(" "),a("h5",{attrs:{id:"reuse-the-same-workflow-id-when-starting-running-a-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#reuse-the-same-workflow-id-when-starting-running-a-workflow"}},[e._v("#")]),e._v(" Reuse the same workflow id when starting/running a workflow")]),e._v(" "),a("p",[e._v("Use option "),a("code",[e._v("--workflowidreusepolicy")]),e._v(" or "),a("code",[e._v("--wrp")]),e._v(" to configure the "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" reuse policy.\n"),a("strong",[e._v("Option 0 AllowDuplicateFailedOnly:")]),e._v(" Allow starting a "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" using the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" when a "),a("Term",{attrs:{term:"workflow"}}),e._v(" with the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" is not already running and the last execution close state is one of "),a("em",[e._v("[terminated, cancelled, timedout, failed]")]),e._v(".\n"),a("strong",[e._v("Option 1 AllowDuplicate:")]),e._v(" Allow starting a "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" using the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" when a "),a("Term",{attrs:{term:"workflow"}}),e._v(" with the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" is not already running.\n"),a("strong",[e._v("Option 2 RejectDuplicate:")]),e._v(" Do not allow starting a "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" using the same "),a("Term",{attrs:{term:"workflow_ID"}}),e._v(" as a previous "),a("Term",{attrs:{term:"workflow"}}),e._v(".")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# use AllowDuplicateFailedOnly option to start a workflow")]),e._v("\ncadence workflow start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" 
"),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wid")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wrp")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("0")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# use AllowDuplicate option to run a workflow")]),e._v("\ncadence workflow run "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wid")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--wrp")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n")])])]),a("h5",{attrs:{id:"start-a-workflow-with-a-memo"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#start-a-workflow-with-a-memo"}},[e._v("#")]),e._v(" Start a workflow with a memo")]),e._v(" "),a("p",[e._v("Memos are immutable key/value pairs that can be attached to a "),a("Term",{attrs:{term:"workflow"}}),e._v(" run when starting the "),a("Term",{attrs:{term:"workflow"}}),e._v(". These are\nvisible when listing "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(". 
More information on memos can be found\n"),a("RouterLink",{attrs:{to:"/docs/concepts/search-workflows/#memo-vs-search-attributes"}},[e._v("here")]),e._v(".")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence wf start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-tl")]),e._v(" helloWorldGroup "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-wt")]),e._v(" main.Workflow "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-et")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("60")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"cadence\"'")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-memo_key")]),e._v(" '\"Service\" \"Env\" \"Instance\"' "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-memo")]),e._v(" '\"serverName1\" \"test\" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("5")]),e._v("'\n")])])]),a("h4",{attrs:{id:"show-workflow-history"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#show-workflow-history"}},[e._v("#")]),e._v(" Show workflow history")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow show "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" 3ea6b242-b23c-4279-bb13-f215661b4717 "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" 866ae14c-88cf-4f1e-980f-571e031d71b0\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a shortcut of this is (without -w -r flag)")]),e._v("\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# if run_id is not provided, it will show the latest run history of that workflow_id")]),e._v("\ncadence workflow show "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" 3ea6b242-b23c-4279-bb13-f215661b4717\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a shortcut of this is")]),e._v("\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717\n")])])]),a("h4",{attrs:{id:"show-workflow-execution-information"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#show-workflow-execution-information"}},[e._v("#")]),e._v(" Show workflow execution information")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow describe "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" 3ea6b242-b23c-4279-bb13-f215661b4717 "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" 866ae14c-88cf-4f1e-980f-571e031d71b0\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a shortcut of this is (without -w -r flag)")]),e._v("\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# if run_id is not provided, it will show the latest workflow execution of that workflow_id")]),e._v("\ncadence workflow describe "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" 3ea6b242-b23c-4279-bb13-f215661b4717\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a 
shortcut of this is")]),e._v("\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717\n")])])]),a("h4",{attrs:{id:"list-closed-or-open-workflow-executions"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#list-closed-or-open-workflow-executions"}},[e._v("#")]),e._v(" List closed or open workflow executions")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow list\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# default will only show one page, to view more items, use --more flag")]),e._v("\ncadence workflow list "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-m")]),e._v("\n")])])]),a("p",[e._v("Use "),a("strong",[e._v("--query")]),e._v(" to list "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" with SQL like "),a("Term",{attrs:{term:"query",show:""}})],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow list "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--query")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("\"WorkflowType='main.SampleParentWorkflow' AND CloseTime = missing \"")]),e._v("\n")])])]),a("p",[e._v("This will return all open "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(' with workflowType as "main.SampleParentWorkflow".')],1),e._v(" "),a("h4",{attrs:{id:"query-workflow-execution"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#query-workflow-execution"}},[e._v("#")]),e._v(" Query workflow execution")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# use custom query type")]),e._v("\ncadence workflow query "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--qt")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("query-type"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v('# use build-in query type "__stack_trace" which is supported by Cadence client library')]),e._v("\ncadence workflow query "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--qt")]),e._v(" __stack_trace\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# a shortcut to query using __stack_trace is (without --qt flag)")]),e._v("\ncadence workflow stack "),a("span",{pre:!0,attrs:{class:"token parameter 
variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),a("h4",{attrs:{id:"signal-cancel-terminate-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signal-cancel-terminate-workflow"}},[e._v("#")]),e._v(" Signal, cancel, terminate workflow")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# signal")]),e._v("\ncadence workflow signal "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-n")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("signal-name"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-i")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("'\"signal-value\"'")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# cancel")]),e._v("\ncadence workflow cancel "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n\n"),a("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# terminate")]),e._v("\ncadence workflow terminate "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v("\n")])])]),a("p",[e._v("Terminating a running "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" will record a WorkflowExecutionTerminated "),a("Term",{attrs:{term:"event"}}),e._v(" as the closing "),a("Term",{attrs:{term:"event"}}),e._v(" in the history. 
No more "),a("Term",{attrs:{term:"decision_task",show:"decision_tasks"}}),e._v(" will be scheduled for a terminated "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(".\nCanceling a running "),a("Term",{attrs:{term:"workflow_execution"}}),e._v(" will record a WorkflowExecutionCancelRequested "),a("Term",{attrs:{term:"event"}}),e._v(" in the history, and a new "),a("Term",{attrs:{term:"decision_task"}}),e._v(" will be scheduled. The "),a("Term",{attrs:{term:"workflow"}}),e._v(" has a chance to do some clean up work after cancellation.")],1),e._v(" "),a("h4",{attrs:{id:"signal-cancel-terminate-workflows-as-a-batch-job"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#signal-cancel-terminate-workflows-as-a-batch-job"}},[e._v("#")]),e._v(" Signal, cancel, terminate workflows as a batch job")]),e._v(" "),a("p",[e._v("Batch job is based on List Workflow Query("),a("strong",[e._v("--query")]),e._v("). It supports "),a("Term",{attrs:{term:"signal"}}),e._v(", cancel and terminate as batch job type.\nFor terminating "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" as batch job, it will terminte the children recursively.")],1),e._v(" "),a("p",[e._v("Start a batch job(using "),a("Term",{attrs:{term:"signal"}}),e._v(" as batch type):")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain wf batch start "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--query")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v("\"WorkflowType='main.SampleParentWorkflow' AND CloseTime=missing\"")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"test"')]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--bt")]),e._v(" signal "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--sig")]),e._v(" testname\nThis batch job will be operating on "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("5")]),e._v(" workflows.\nPlease confirm"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("[")]),e._v("Yes/No"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("]")]),e._v(":yes\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("{")]),e._v("\n "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"jobID"')]),a("span",{pre:!0,attrs:{class:"token builtin class-name"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v(",\n "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"msg"')]),a("span",{pre:!0,attrs:{class:"token builtin class-name"}},[e._v(":")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"batch job is started"')]),e._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("}")]),e._v("\n\n")])])]),a("p",[e._v("You need to remember the JobID or use List command to get all your batch jobs:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain wf batch list\n")])])]),a("p",[e._v("Describe the progress of a batch job:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence 
"),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain wf batch desc "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-jid")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("batch-job-id"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),a("p",[e._v("Terminate a batch job:")]),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" samples-domain wf batch terminate "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-jid")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("batch-job-id"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),a("p",[e._v("Note that the operation performed by a batch will not be rolled back by terminating the batch. However, you can use reset to rollback your "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(".")],1),e._v(" "),a("h4",{attrs:{id:"restart-reset-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#restart-reset-workflow"}},[e._v("#")]),e._v(" Restart, reset workflow")]),e._v(" "),a("p",[e._v("The Reset command allows resetting a "),a("Term",{attrs:{term:"workflow"}}),e._v(" to a particular point and continue running from there.\nThere are a lot of use cases:")],1),e._v(" "),a("ul",[a("li",[e._v("Rerun a failed "),a("Term",{attrs:{term:"workflow"}}),e._v(" from the beginning with the same start parameters.")],1),e._v(" "),a("li",[e._v("Rerun a failed "),a("Term",{attrs:{term:"workflow"}}),e._v(" from the failing point without losing the achieved progress(history).")],1),e._v(" "),a("li",[e._v("After deploying new code, reset an open "),a("Term",{attrs:{term:"workflow"}}),e._v(" to let the "),a("Term",{attrs:{term:"workflow"}}),e._v(" run to different flows.")],1)]),e._v(" "),a("p",[e._v("You can reset to some predefined "),a("Term",{attrs:{term:"event"}}),e._v(" types:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow reset "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reset_type")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("reset_type"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"some_reason"')]),e._v("\n")])])]),a("ul",[a("li",[e._v("FirstDecisionCompleted: reset to the beginning of the history.")]),e._v(" "),a("li",[e._v("LastDecisionCompleted: reset to the end of the history.")]),e._v(" "),a("li",[e._v("LastContinuedAsNew: reset to the end of the history for the previous run.")])]),e._v(" "),a("p",[e._v("If you are familiar with the Cadence history "),a("Term",{attrs:{term:"event"}}),e._v(", You 
can also reset to any "),a("Term",{attrs:{term:"decision"}}),e._v(" finish "),a("Term",{attrs:{term:"event"}}),e._v(" by using:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow reset "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("wid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-r")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("rid"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--event_id")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("decision_finish_event_id"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"some_reason"')]),e._v("\n")])])]),a("p",[e._v("Some things to note:")]),e._v(" "),a("ul",[a("li",[e._v("When reset, a new run will be kicked off with the same workflowID. But if there is a running execution for the workflow(workflowID), the current run will be terminated.")]),e._v(" "),a("li",[e._v("decision_finish_event_id is the ID of "),a("Term",{attrs:{term:"event",show:"events"}}),e._v(" of the type: DecisionTaskComplete/DecisionTaskFailed/DecisionTaskTimeout.")],1),e._v(" "),a("li",[e._v("To restart a "),a("Term",{attrs:{term:"workflow"}}),e._v(" from the beginning, reset to the first "),a("Term",{attrs:{term:"decision_task"}}),e._v(" finish "),a("Term",{attrs:{term:"event"}}),e._v(".")],1)]),e._v(" "),a("p",[e._v("To reset multiple "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(", you can use batch reset command:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence workflow reset-batch "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--input_file")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("file_of_workflows_to_reset"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reset_type")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("reset_type"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"some_reason"')]),e._v("\n")])])]),a("h4",{attrs:{id:"recovery-from-bad-deployment-auto-reset-workflow"}},[a("a",{staticClass:"header-anchor",attrs:{href:"#recovery-from-bad-deployment-auto-reset-workflow"}},[e._v("#")]),e._v(" Recovery from bad deployment -- auto-reset workflow")]),e._v(" "),a("p",[e._v("If a bad deployment lets a "),a("Term",{attrs:{term:"workflow"}}),e._v(" run into a wrong state, you might want to reset the "),a("Term",{attrs:{term:"workflow"}}),e._v(" to the point that the bad deployment started to run. But usually it is not easy to find out all the "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" impacted, and every reset point for each "),a("Term",{attrs:{term:"workflow"}}),e._v(". 
In this case, auto-reset will automatically reset all the "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" given a bad deployment identifier.")],1),e._v(" "),a("p",[e._v("Let's get familiar with some concepts. Each deployment will have an identifier, we call it \""),a("strong",[e._v("Binary Checksum")]),e._v('" as it is usually generated by the md5sum of a binary file. For a '),a("Term",{attrs:{term:"workflow"}}),e._v(", each binary checksum will be associated with an "),a("strong",[e._v("auto-reset point")]),e._v(", which contains a "),a("strong",[e._v("runID")]),e._v(", an "),a("strong",[e._v("eventID")]),e._v(", and the "),a("strong",[e._v("created_time")]),e._v(" that binary/deployment made the first "),a("Term",{attrs:{term:"decision"}}),e._v(" for the "),a("Term",{attrs:{term:"workflow"}}),e._v(".")],1),e._v(" "),a("p",[e._v("To find out which "),a("strong",[e._v("binary checksum")]),e._v(" of the bad deployment to reset, you should be aware of at least one "),a("Term",{attrs:{term:"workflow"}}),e._v(" running into a bad state. Use the describe command with "),a("strong",[e._v("--reset_points_only")]),e._v(" option to show all the reset points:")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence wf desc "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-w")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("WorkflowID"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reset_points_only")]),e._v("\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" BINARY CHECKSUM "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" CREATE TIME "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" RUNID "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" EVENTID "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v("\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" c84c5afa552613a83294793f4e664a7f "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("2019")]),e._v("-05-24 "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("10")]),e._v(":01:00.398455019 "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" 2dd29ab7-2dd8-4668-83e0-89cae261cfb1 "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("4")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v("\n"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" aae748fdc557a3f873adbe1dd066713f "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("2019")]),e._v("-05-24 "),a("span",{pre:!0,attrs:{class:"token number"}},[e._v("11")]),e._v(":01:00.067691445 "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" d42d21b8-2adb-4313-b069-3837d44d6ce6 "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token 
number"}},[e._v("4")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v("\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("..")]),e._v(".\n"),a("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("..")]),e._v(".\n")])])]),a("p",[e._v("Then use this command to tell Cadence to auto-reset all "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(" impacted by the bad deployment. The command will store the bad binary checksum into "),a("Term",{attrs:{term:"domain"}}),e._v(" info and trigger a process to reset all your "),a("Term",{attrs:{term:"workflow",show:"workflows"}}),e._v(".")],1),e._v(" "),a("div",{staticClass:"language-bash extra-class"},[a("pre",{pre:!0,attrs:{class:"language-bash"}},[a("code",[e._v("cadence "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("YourDomainName"),a("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" domain update "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--add_bad_binary")]),e._v(" aae748fdc557a3f873adbe1dd066713f "),a("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--reason")]),e._v(" "),a("span",{pre:!0,attrs:{class:"token string"}},[e._v('"rollback bad deployment"')]),e._v("\n")])])]),a("p",[e._v("As you add the bad binary checksum to your "),a("Term",{attrs:{term:"domain"}}),e._v(", Cadence will not dispatch any "),a("Term",{attrs:{term:"decision_task",show:"decision_tasks"}}),e._v(" to the bad binary. So make sure that you have rolled back to a good deployment(or roll out new bits with bug fixes). Otherwise your "),a("Term",{attrs:{term:"workflow"}}),e._v(" can't make any progress after auto-reset.")],1)])}),[],!1,null,null,null);a.default=s.exports}}]); \ No newline at end of file diff --git a/assets/js/95.3153b017.js b/assets/js/95.9126f948.js similarity index 99% rename from assets/js/95.3153b017.js rename to assets/js/95.9126f948.js index 65a836383..a519eb6f7 100644 --- a/assets/js/95.3153b017.js +++ b/assets/js/95.9126f948.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[95],{402:function(e,t,n){"use strict";n.r(t);var a=n(0),o=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"cluster-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cluster-configuration"}},[e._v("#")]),e._v(" Cluster Configuration")]),e._v(" "),t("p",[e._v("This section will help to understand what you need for setting up a Cadence cluster.")]),e._v(" "),t("p",[e._v("You should understand some basic static configuration of Cadence cluster.")]),e._v(" "),t("p",[e._v('There are also many other configuration called "Dynamic Configuration" for fine tuning the cluster. The default values are good to go for small clusters.')]),e._v(" "),t("p",[e._v("Cadence’s minimum dependency is a database(Cassandra or SQL based like MySQL/Postgres). Cadence uses it for persistence. 
All instances of Cadence clusters are stateless.")]),e._v(" "),t("p",[e._v("For production you also need a metric server(Prometheus/Statsd/M3/etc).")]),e._v(" "),t("p",[e._v("For "),t("RouterLink",{attrs:{to:"/docs/operation-guide/setup/#other-advanced-features"}},[e._v("advanced features")]),e._v(" Cadence depends on others like Elastisearch/OpenSearch+Kafka if you need "),t("RouterLink",{attrs:{to:"/docs/concepts/search-workflows/"}},[e._v("Advanced visibility feature to search workflows")]),e._v(". Cadence will depends on a blob store like S3 if you need to enable "),t("RouterLink",{attrs:{to:"/docs/concepts/archival/"}},[e._v("archival feature")]),e._v(".")],1),e._v(" "),t("h2",{attrs:{id:"static-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#static-configuration"}},[e._v("#")]),e._v(" Static configuration")]),e._v(" "),t("h3",{attrs:{id:"configuration-directory-and-files"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#configuration-directory-and-files"}},[e._v("#")]),e._v(" Configuration Directory and Files")]),e._v(" "),t("p",[e._v("The default directory for configuration files is named "),t("strong",[e._v("config/")]),e._v(". This directory contains various configuration files, but not all files will necessarily be used in every scenario.")]),e._v(" "),t("h4",{attrs:{id:"combining-configuration-files"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#combining-configuration-files"}},[e._v("#")]),e._v(" Combining Configuration Files")]),e._v(" "),t("ul",[t("li",[e._v("Base Configuration: The "),t("code",[e._v("base.yaml")]),e._v(" file is always loaded first, providing a common configuration that applies to all environments.")]),e._v(" "),t("li",[e._v("Runtime Environment File: The second file to be loaded is specific to the runtime environment. The environment name can be specified through the "),t("code",[e._v("$CADENCE_ENVIRONMENT")]),e._v(" environment variable or passed as a command-line argument. If neither option is specified, "),t("code",[e._v("development.yaml")]),e._v(" is used by default.")]),e._v(" "),t("li",[e._v("Availability Zone File: If an availability zone is specified (either through the "),t("code",[e._v("$CADENCE_AVAILABILITY_ZONE")]),e._v(' environment variable or as a command-line argument), a file named after the zone will be merged. For example, if you specify "az1" as the zone, '),t("code",[e._v("production_az1.yaml")]),e._v(" will be used as well.")])]),e._v(" "),t("p",[e._v("To merge "),t("code",[e._v("base.yaml")]),e._v(", "),t("code",[e._v("production.yaml")]),e._v(", and "),t("code",[e._v("production_az1.yaml")]),e._v(' files, you need to specify "production" as the runtime environment and "az1" as the zone.')]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("// base.yaml -> production.yaml -> production_az1.yaml = final configuration\n")])])]),t("h4",{attrs:{id:"using-environment-variables"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#using-environment-variables"}},[e._v("#")]),e._v(" Using Environment Variables")]),e._v(" "),t("p",[e._v("Configuration values can be provided using environment variables with a specific syntax.\n"),t("code",[e._v("$VAR")]),e._v(": This notation will be replaced with the value of the specified environment variable. If the environment variable is not set, the value will be left blank.\nYou can declare a default value using the syntax "),t("code",[e._v("{$VAR:default}")]),e._v(". 
This means that if the environment variable VAR is not set, the default value will be used instead.")]),e._v(" "),t("p",[e._v("Note: If you want to include the "),t("code",[e._v("$")]),e._v(" symbol literally in your configuration file (without interpreting it as an environment variable substitution), escape it by using $$. This will prevent it from being replaced by an environment variable value.")]),e._v(" "),t("h3",{attrs:{id:"understand-the-basic-static-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#understand-the-basic-static-configuration"}},[e._v("#")]),e._v(" Understand the basic static configuration")]),e._v(" "),t("p",[e._v("There are quite many configs in Cadence. Here are the most basic configuration that you should understand.")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Config name")]),e._v(" "),t("th",[e._v("Explanation")]),e._v(" "),t("th",[e._v("Recommended value")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("numHistoryShards")]),e._v(" "),t("td",[e._v("This is the most important one in Cadence config.It will be a fixed number in the cluster forever. The only way to change it is to migrate to another cluster. Refer to Migrate cluster section. "),t("br"),e._v(" "),t("br"),e._v(" Some facts about it: "),t("br"),e._v(" 1. Each workflow will be mapped to a single shard. Within a shard, all the workflow creation/updates are serialized. "),t("br"),e._v(" 2. Each shard will be assigned to only one History node to own the shard, using a Consistent Hashing Ring. Each shard will consume a small amount of memory/CPU to do background processing. Therefore, a single History node cannot own too many shards. You may need to figure out a good number range based on your instance size(memory/CPU). "),t("br"),e._v(" 3. Also, you can’t add an infinite number of nodes to a cluster because this config is fixed. When the number of History nodes is closed or equal to numHistoryShards, there will be some History nodes that have no shards assigned to it. This will be wasting resources. "),t("br"),e._v(" "),t("br"),e._v(" Based on above, you don’t want to have a small number of shards which will limit the maximum size of your cluster. You also don’t want to have a too big number, which will require you to have a quite big initial size of the cluster. "),t("br"),e._v(" Also, typically a production cluster will start with a smaller number and then we add more nodes/hosts to it. But to keep high availability, it’s recommended to use at least 4 nodes for each service(Frontend/History/Matching) at the beginning.")]),e._v(" "),t("td",[e._v("1K~16K depending on the size ranges of the cluster you expect to run, and the instance size. "),t("strong",[e._v("Typically 2K for SQL based persistence, and 8K for Cassandra based.")])])]),e._v(" "),t("tr",[t("td",[e._v("ringpop")]),e._v(" "),t("td",[e._v("This is the config to let all nodes of all services connected to each other. ALL the bootstrap nodes MUST be reachable by ringpop when a service is starting up, within a MaxJoinDuration. defaultMaxJoinDuration is 2 minutes. "),t("br"),t("br"),e._v(" It’s not required that bootstrap nodes need to be Frontend/History or Matching. In fact, it can be running none of them as long as it runs Ringpop protocol.")]),e._v(" "),t("td",[e._v("For dns mode: Recommended to put the DNS of Frontend service "),t("br"),t("br"),e._v(" For hosts or hostfile mode: A list of Frontend service node addresses if using hosts mode. 
Make sure all the bootstrap nodes are reachable at startup.")])]),e._v(" "),t("tr",[t("td",[e._v("publicClient")]),e._v(" "),t("td",[e._v("The Cadence Frontend service addresses that internal Cadence system(like system workflows) need to talk to. "),t("br"),t("br"),e._v(" After connected, all nodes in Ringpop will form a ring with identifiers of what service they serve. Ideally Cadence should be able to get Frontend address from there. But Ringpop doesn’t expose this API yet.")]),e._v(" "),t("td",[e._v("Recommended be DNS of Frontend service, so that requests will be distributed to all Frontend nodes. "),t("br"),t("br"),e._v("Using localhost+Port or local container IP address+Port will not work if the IP/container is not running frontend")])]),e._v(" "),t("tr",[t("td",[e._v("services.NAME.rpc")]),e._v(" "),t("td",[e._v("Configuration of how to listen to network ports and serve traffic. "),t("br"),t("br"),e._v(" bindOnLocalHost:true will bind on 127.0.0.1. It’s mostly for local development. In production usually you have to specify the IP that containers will use by using bindOnIP "),t("br"),t("br"),e._v(" NAME is the matter for the “--services” option in the server startup command.")]),e._v(" "),t("td",[e._v("Name: Use as recommended in development.yaml. bindOnIP : an IP address that the container will serve the traffic with")])]),e._v(" "),t("tr",[t("td",[e._v("services.NAME.pprof")]),e._v(" "),t("td",[e._v("Golang profiling service , will bind on the same IP as RPC")]),e._v(" "),t("td",[e._v("a port that you want to serve pprof request")])]),e._v(" "),t("tr",[t("td",[e._v("services.Name.metrics")]),e._v(" "),t("td",[e._v("See Metrics&Logging section")]),e._v(" "),t("td",[e._v("cc")])]),e._v(" "),t("tr",[t("td",[e._v("clusterMetadata")]),e._v(" "),t("td",[e._v("Cadence cluster configuration. "),t("br"),t("br"),e._v("enableGlobalDomain:true will enable Cadence Cross datacenter replication(aka XDC) feature."),t("br"),t("br"),e._v("failoverVersionIncrement: This decides the maximum clusters that you will have replicated to each other at the same time. For example 10 is sufficient for most cases."),t("br"),t("br"),e._v("masterClusterName: a master cluster must be one of the enabled clusters, usually the very first cluster to start. It is only meaningful for internal purposes."),t("br"),t("br"),e._v("currentClusterName: current cluster name using this config file. "),t("br"),t("br"),e._v("clusterInformation is a map from clusterName to the cluster configure "),t("br"),t("br"),e._v("initialFailoverVersion: each cluster must use a different value from 0 to failoverVersionIncrement-1. "),t("br"),t("br"),e._v("rpcName: must be “cadence-frontend”. Can be improved in this issue. "),t("br"),t("br"),e._v("rpcAddress: the address to talk to the Frontend of the cluster for inter-cluster replication. "),t("br"),t("br"),e._v("Note that even if you don’t need XDC replication right now, if you want to migrate data stores in the future, you should enable xdc from every beginning. You just need to use the same name of cluster for both masterClusterName and currentClusterName. 
"),t("br"),t("br"),e._v(" Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/cross-dc-replication/#running-in-production"}},[e._v("cross dc replication")]),e._v(" for how to configure replication in production")],1),e._v(" "),t("td",[e._v("As explanation.")])]),e._v(" "),t("tr",[t("td",[e._v("dcRedirectionPolicy")]),e._v(" "),t("td",[e._v("For allowing forwarding frontend requests from passive cluster to active clusters.")]),e._v(" "),t("td",[e._v("“selected-apis-forwarding”")])]),e._v(" "),t("tr",[t("td",[e._v("archival")]),e._v(" "),t("td",[e._v("This is for archival history feature, skip if you don’t need it. Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/archival/#running-in-production"}},[e._v("workflow archival")]),e._v(" for how to configure archival in production")],1),e._v(" "),t("td",[e._v("N/A")])]),e._v(" "),t("tr",[t("td",[e._v("blobstore")]),e._v(" "),t("td",[e._v("This is also for archival history feature Default cadence server is using file based blob store implementation.")]),e._v(" "),t("td",[e._v("N/A")])]),e._v(" "),t("tr",[t("td",[e._v("domainDefaults")]),e._v(" "),t("td",[e._v("default config for each domain. Right now only being used for Archival feature.")]),e._v(" "),t("td",[e._v("N/A")])]),e._v(" "),t("tr",[t("td",[e._v("dynamicconfig (previously known as dynamicConfigClient)")]),e._v(" "),t("td",[e._v("Dynamic config is a config manager that enables you to change configs without restarting servers. It’s a good way for Cadence to keep high availability and make things easy to configure. "),t("br"),t("br"),e._v("By default Cadence server uses "),t("code",[e._v("filebased")]),e._v(" client which allows you to override default configs using a YAML file. However, this approach can be cumbersome in production environment because it's the operator's responsibility to sync the YAML files across Cadence nodes. "),t("br"),t("br"),e._v("Therefore, we provide another option, "),t("code",[e._v("configstore")]),e._v(" client, that stores config changes in the persistent data store for Cadence (e.g., Cassandra database) rather than the YAML file. This approach shifts the responsibility of syncing config changes from the operator to Cadence service. You can use Cadence CLI commands to list/get/update/restore config changes. "),t("br"),t("br"),e._v("You can also implement the dynamic config interface if you have a better way to manage configs.")]),e._v(" "),t("td",[e._v("Same as the sample development config")])]),e._v(" "),t("tr",[t("td",[e._v("persistence")]),e._v(" "),t("td",[e._v("Configuration for data store / persistence layer. "),t("br"),t("br"),e._v("Values of DefaultStore VisibilityStore AdvancedVisibilityStore should be keys of map DataStores. "),t("br"),t("br"),e._v("DefaultStore is for core Cadence functionality. "),t("br"),t("br"),e._v("VisibilityStore is for basic visibility feature "),t("br"),t("br"),e._v("AdvancedVisibilityStore is for advanced visibility"),t("br"),t("br"),e._v(" Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/search-workflows/#running-in-production"}},[e._v("advanced visibility")]),e._v(" for detailed configuration of advanced visibility. 
See "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/master/docs/persistence.md",target:"_blank",rel:"noopener noreferrer"}},[e._v("persistence documentation"),t("OutboundLink")],1),e._v(" about using different database for Cadence")],1),e._v(" "),t("td",[e._v("As explanation")])])])]),e._v(" "),t("h3",{attrs:{id:"the-full-list-of-static-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#the-full-list-of-static-configuration"}},[e._v("#")]),e._v(" The full list of static configuration")]),e._v(" "),t("p",[e._v("Starting from v0.21.0, all the static configuration are defined by GoDocs in details.")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Version")]),e._v(" "),t("th",[e._v("GoDocs Link")]),e._v(" "),t("th",[e._v("Github Link")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("v0.21.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0/common/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/common/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("..."),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0?tab=versions",target:"_blank",rel:"noopener noreferrer"}},[e._v("other higher versions"),t("OutboundLink")],1)]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.21.0")]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.21.0")])])])]),e._v(" "),t("p",[e._v("For earlier versions, you can find all the configurations similarly:")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Version")]),e._v(" "),t("th",[e._v("GoDocs Link")]),e._v(" "),t("th",[e._v("Github Link")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("v0.20.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.20.0/common/service/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.20.0/common/service/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.19.2")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.19.2/common/service/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.19.2/common/service/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.18.2")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.18.2/common/service/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.18.2/common/service/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.17.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.17.0/common/service/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration 
Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.17.0/common/service/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("..."),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.20.0?tab=versions",target:"_blank",rel:"noopener noreferrer"}},[e._v("other lower versions"),t("OutboundLink")],1)]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.20.0")]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.20.0")])])])]),e._v(" "),t("h2",{attrs:{id:"dynamic-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#dynamic-configuration"}},[e._v("#")]),e._v(" Dynamic Configuration")]),e._v(" "),t("p",[e._v("Dynamic configuration is for fine tuning a Cadence cluster.")]),e._v(" "),t("p",[e._v("There are a lot more dynamic configurations than static configurations. Most of the default values are good for small clusters. As a cluster is scaled up, you may look for tuning it for the optimal performance.")]),e._v(" "),t("p",[e._v("Starting from v0.21.0 with this "),t("a",{attrs:{href:"https://github.com/uber/cadence/pull/4156/files",target:"_blank",rel:"noopener noreferrer"}},[e._v("change"),t("OutboundLink")],1),e._v(", all the dynamic configuration are well defined by GoDocs.")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Version")]),e._v(" "),t("th",[e._v("GoDocs Link")]),e._v(" "),t("th",[e._v("Github Link")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("v0.21.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0/common/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/common/dynamicconfig/constants.go#L58",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("..."),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0?tab=versions",target:"_blank",rel:"noopener noreferrer"}},[e._v("other higher versions"),t("OutboundLink")],1)]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.21.0")]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.21.0")])])])]),e._v(" "),t("p",[e._v("For earlier versions, you can find all the configurations similarly:")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Version")]),e._v(" "),t("th",[e._v("GoDocs Link")]),e._v(" "),t("th",[e._v("Github Link")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("v0.20.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.20.0/common/service/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.20.0/common/service/dynamicconfig/constants.go#L53",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.19.2")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.19.2/common/service/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" 
"),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.19.2/common/service/dynamicconfig/constants.go#L53",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.18.2")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.18.2/common/service/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.18.2/common/service/dynamicconfig/constants.go#L53",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.17.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.17.0/common/service/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.17.0/common/service/dynamicconfig/constants.go#L53",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("..."),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.20.0?tab=versions",target:"_blank",rel:"noopener noreferrer"}},[e._v("other lower versions"),t("OutboundLink")],1)]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.20.0")]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.20.0")])])])]),e._v(" "),t("p",[e._v("However, the GoDocs in earlier versions don't contain detailed information. You need to look it up the newer version of GoDocs."),t("br"),e._v('\nFor example, search for "EnableGlobalDomain" in Dynamic Configuration '),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/667b7c68e67682a8d23f4b8f93e91a791313d8d6/common/dynamicconfig/constants.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("Comments in v0.21.0"),t("OutboundLink")],1),e._v(" or "),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0/common/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Docs of v0.21.0"),t("OutboundLink")],1),e._v(", as the usage of DynamicConfiguration never changes.")]),e._v(" "),t("ul",[t("li",[t("strong",[e._v("KeyName")]),e._v(" is the key that you will use in the dynamicconfig yaml content")]),e._v(" "),t("li",[t("strong",[e._v("Default value")]),e._v(" is the default value")]),e._v(" "),t("li",[t("strong",[e._v("Value type")]),e._v(" indicates the type that you should change the yaml value of:\n"),t("ul",[t("li",[e._v("Int should be integer like 123")]),e._v(" "),t("li",[e._v("Float should be number like 123.4")]),e._v(" "),t("li",[e._v("Duration should be Golang duration like: 10s, 2m, 5h for 10 seconds, 2 minutes and 5 hours.")]),e._v(" "),t("li",[e._v("Bool should be true or false")]),e._v(" "),t("li",[e._v("Map should be map of yaml")])])]),e._v(" "),t("li",[t("strong",[e._v("Allowed filters")]),e._v(" indicates what kinds of filters you can set as constraints with the dynamic configuration.\n"),t("ul",[t("li",[t("code",[e._v("DomainName")]),e._v(" can be used with "),t("code",[e._v("domainName")])]),e._v(" "),t("li",[t("code",[e._v("N/A")]),e._v(" means no filters can be set. 
The config will be global.")])])])]),e._v(" "),t("p",[e._v("For example, if you want to change the ratelimiting for List API, below is the config:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("// FrontendVisibilityListMaxQPS is max qps frontend can list open/close workflows\n// KeyName: frontend.visibilityListMaxQPS\n// Value type: Int\n// Default value: 10\n// Allowed filters: DomainName\nFrontendVisibilityListMaxQPS\n")])])]),t("p",[e._v("Then you can add the config like:")]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("frontend.visibilityListMaxQPS")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("value")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("1000")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("constraints")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("domainName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"domainA"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("value")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("2000")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("constraints")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("domainName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"domainB"')]),e._v(" \n")])])]),t("p",[e._v("You will expect to see "),t("code",[e._v("domainA")]),e._v(" will be able to perform 1K List operation per second, while "),t("code",[e._v("domainB")]),e._v(" can perform 2K per second.")]),e._v(" "),t("p",[e._v("NOTE 1: the size related configuration numbers are based on byte.")]),e._v(" "),t("p",[e._v("NOTE 2: for .persistenceMaxQPS versus .persistenceGlobalMaxQPS --- persistenceMaxQPS is local for single node while persistenceGlobalMaxQPS is global for all node. persistenceGlobalMaxQPS is preferred if set as greater than zero. But by default it is zero so persistenceMaxQPS is being used.")]),e._v(" "),t("h3",{attrs:{id:"how-to-update-dynamic-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#how-to-update-dynamic-configuration"}},[e._v("#")]),e._v(" How to update Dynamic Configuration")]),e._v(" "),t("h4",{attrs:{id:"file-based-client"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#file-based-client"}},[e._v("#")]),e._v(" File-based client")]),e._v(" "),t("p",[e._v("By default, Cadence uses file-based client to manage dynamic configurations. Following are the approaches to changing dynamic configs using a yaml file.")]),e._v(" "),t("ul",[t("li",[e._v("Local docker-compose by mounting volume: 1. 
Change the dynamic configs in "),t("code",[e._v("cadence/config/dynamicconfig/development.yaml")]),e._v(". 2. Update the "),t("code",[e._v("cadence")]),e._v(" section in the docker compose file and mount "),t("code",[e._v("dynamicconfig")]),e._v(" folder to host machine like the following:")])]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("cadence")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("image")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" ubercadence/server"),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("master"),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v("auto"),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v("setup\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("ports")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("...")]),e._v("(don't change anything here)\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("environment")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("...")]),e._v("(don't change anything here)\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"DYNAMIC_CONFIG_FILE_PATH=/etc/custom-dynamicconfig/development.yaml"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("volumes")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"/Users//cadence/config/dynamicconfig:/etc/custom-dynamicconfig"')]),e._v("\n")])])]),t("ul",[t("li",[t("p",[e._v("Local docker-compose by logging into the container: run "),t("code",[e._v("docker exec -it docker_cadence_1 /bin/bash")]),e._v(" to login your container. Then "),t("code",[e._v("vi config/dynamicconfig/development.yaml")]),e._v(" to make any change. After you changed the config, use "),t("code",[e._v("docker restart docker_cadence_1")]),e._v(" to restart the cadence instance. Note that you can also use this approach to change static config, but it must be changed through "),t("code",[e._v("config/config_template.yaml")]),e._v(" instead of "),t("code",[e._v("config/docker.yaml")]),e._v(" because "),t("code",[e._v("config/docker.yaml")]),e._v(" is generated on startup.")])]),e._v(" "),t("li",[t("p",[e._v("In production cluster: Follow this example of Helm Chart to deploy Cadence, update dynamic config "),t("a",{attrs:{href:"https://github.com/banzaicloud/banzai-charts/blob/be57e81c107fd2ccdfc6cf95dccf6cbab226920c/cadence/templates/server-configmap.yaml#L170",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1),e._v(" and restart the cluster.")])]),e._v(" "),t("li",[t("p",[e._v("DEBUG: How to make sure your updates on dynamicconfig is loaded? 
for example, if you added the following to "),t("code",[e._v("development.yaml")])])])]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("frontend.visibilityListMaxQPS")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("value")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("10000")]),e._v("\n")])])]),t("p",[e._v("After restarting Cadence instances, execute a command like this to let Cadence load the config(it's lazy loading when using it).\n"),t("code",[e._v("cadence --domain <> workflow list")])]),e._v(" "),t("p",[e._v("Then you should see the logs like below")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v('cadence_1 | {"level":"info","ts":"2021-05-07T18:43:07.869Z","msg":"First loading dynamic config","service":"cadence-frontend","key":"frontend.visibilityListMaxQPS,domainName:sample,clusterName:primary","value":"10000","default-value":"10","logging-call-at":"config.go:93"}\n')])])]),t("h4",{attrs:{id:"config-store-client"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#config-store-client"}},[e._v("#")]),e._v(" Config store client")]),e._v(" "),t("p",[e._v("You can set the "),t("code",[e._v("dynamicconfig")]),e._v(" client in the static configuration to "),t("code",[e._v("configstore")]),e._v(" in order to store config changes in a database, as shown below.")]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("dynamicconfig")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("client")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" configstore\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("configstore")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("pollInterval")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"10s"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("updateRetryAttempts")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("2")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("FetchTimeout")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"2s"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("UpdateTimeout")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"2s"')]),e._v("\n")])])]),t("p",[e._v("If you are still using the deprecated config "),t("code",[e._v("dynamicConfigClient")]),e._v(" like below, you need to replace it with the new "),t("code",[e._v("dynamicconfig")]),e._v(" as shown above to use "),t("code",[e._v("configstore")]),e._v(" client.")]),e._v(" "),t("div",{staticClass:"language-yaml 
extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("dynamicConfigClient")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("filepath")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"/etc/cadence/config/dynamicconfig/config.yaml"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("pollInterval")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"10s"')]),e._v("\n")])])]),t("p",[e._v("After changing the client to "),t("code",[e._v("configstore")]),e._v(" and restarting Cadence, you can manage dynamic configs using "),t("code",[e._v("cadence admin config")]),e._v(" CLI commands. You may need to set your custom dynamic configs again as the previous configs are not automatically migrated from the YAML file to the database.")]),e._v(" "),t("ul",[t("li",[t("code",[e._v("cadence admin config listdc")]),e._v(" lists all dynamic config overrides")]),e._v(" "),t("li",[t("code",[e._v("cadence admin config getdc --dynamic_config_name ")]),e._v(" gets the value of a specific dynamic config")]),e._v(" "),t("li",[t("code",[e._v("cadence admin config updc --dynamic_config_name --dynamic_config_value '{\"Value\": }'")]),e._v(" updates the value of a specific dynamic config")]),e._v(" "),t("li",[t("code",[e._v("cadence admin config resdc --dynamic_config_name ")]),e._v(" restores a specific dynamic config to its default value")])]),e._v(" "),t("h2",{attrs:{id:"other-advanced-features"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#other-advanced-features"}},[e._v("#")]),e._v(" Other Advanced Features")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/search-workflows/#running-in-production"}},[e._v("advanced visibility")]),e._v(" for how to configure advanced visibility in production.")],1)]),e._v(" "),t("li",[t("p",[e._v("Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/archival/#running-in-production"}},[e._v("workflow archival")]),e._v(" for how to configure archival in production.")],1)]),e._v(" "),t("li",[t("p",[e._v("Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/cross-dc-replication/#running-in-production"}},[e._v("cross dc replication")]),e._v(" for how to configure replication in production.")],1)])]),e._v(" "),t("h2",{attrs:{id:"deployment-release"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#deployment-release"}},[e._v("#")]),e._v(" Deployment & Release")]),e._v(" "),t("p",[e._v("Kubernetes is the most popular way to deploy Cadence cluster. And easiest way is to use "),t("a",{attrs:{href:"https://github.com/banzaicloud/banzai-charts/tree/master/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Helm Charts"),t("OutboundLink")],1),e._v(" that maintained by a community project.")]),e._v(" "),t("p",[e._v("If you are looking for deploying Cadence using other technologies, then it's reccomended to use Cadence docker images. You can use offical ones, or you may customize it based on what you need. 
See "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/docker#using-docker-image-for-production",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence docker package"),t("OutboundLink")],1),e._v(" for how to run the images.")]),e._v(" "),t("p",[e._v("It's always recommended to use the latest release. See "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence release pages"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("Please subscribe the release of project by :")]),e._v(" "),t("p",[e._v('Go to https://github.com/uber/cadence -> Click the right top "Watch" button -> Custom -> "Release".')]),e._v(" "),t("p",[e._v("And see "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#upgrading-server"}},[e._v("how to upgrade a Cadence cluster")])],1),e._v(" "),t("h2",{attrs:{id:"stress-bench-test-a-cluster"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#stress-bench-test-a-cluster"}},[e._v("#")]),e._v(" Stress/Bench Test a cluster")]),e._v(" "),t("p",[e._v("It's recommended to run bench test on your cluster following this "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/bench",target:"_blank",rel:"noopener noreferrer"}},[e._v("package"),t("OutboundLink")],1),e._v(" to see the maximum throughput that it can take, whenever you change some setup.")])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[95],{401:function(e,t,n){"use strict";n.r(t);var a=n(0),o=Object(a.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"cluster-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cluster-configuration"}},[e._v("#")]),e._v(" Cluster Configuration")]),e._v(" "),t("p",[e._v("This section will help to understand what you need for setting up a Cadence cluster.")]),e._v(" "),t("p",[e._v("You should understand some basic static configuration of Cadence cluster.")]),e._v(" "),t("p",[e._v('There are also many other configuration called "Dynamic Configuration" for fine tuning the cluster. The default values are good to go for small clusters.')]),e._v(" "),t("p",[e._v("Cadence’s minimum dependency is a database(Cassandra or SQL based like MySQL/Postgres). Cadence uses it for persistence. All instances of Cadence clusters are stateless.")]),e._v(" "),t("p",[e._v("For production you also need a metric server(Prometheus/Statsd/M3/etc).")]),e._v(" "),t("p",[e._v("For "),t("RouterLink",{attrs:{to:"/docs/operation-guide/setup/#other-advanced-features"}},[e._v("advanced features")]),e._v(" Cadence depends on others like Elastisearch/OpenSearch+Kafka if you need "),t("RouterLink",{attrs:{to:"/docs/concepts/search-workflows/"}},[e._v("Advanced visibility feature to search workflows")]),e._v(". 
Cadence will depends on a blob store like S3 if you need to enable "),t("RouterLink",{attrs:{to:"/docs/concepts/archival/"}},[e._v("archival feature")]),e._v(".")],1),e._v(" "),t("h2",{attrs:{id:"static-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#static-configuration"}},[e._v("#")]),e._v(" Static configuration")]),e._v(" "),t("h3",{attrs:{id:"configuration-directory-and-files"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#configuration-directory-and-files"}},[e._v("#")]),e._v(" Configuration Directory and Files")]),e._v(" "),t("p",[e._v("The default directory for configuration files is named "),t("strong",[e._v("config/")]),e._v(". This directory contains various configuration files, but not all files will necessarily be used in every scenario.")]),e._v(" "),t("h4",{attrs:{id:"combining-configuration-files"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#combining-configuration-files"}},[e._v("#")]),e._v(" Combining Configuration Files")]),e._v(" "),t("ul",[t("li",[e._v("Base Configuration: The "),t("code",[e._v("base.yaml")]),e._v(" file is always loaded first, providing a common configuration that applies to all environments.")]),e._v(" "),t("li",[e._v("Runtime Environment File: The second file to be loaded is specific to the runtime environment. The environment name can be specified through the "),t("code",[e._v("$CADENCE_ENVIRONMENT")]),e._v(" environment variable or passed as a command-line argument. If neither option is specified, "),t("code",[e._v("development.yaml")]),e._v(" is used by default.")]),e._v(" "),t("li",[e._v("Availability Zone File: If an availability zone is specified (either through the "),t("code",[e._v("$CADENCE_AVAILABILITY_ZONE")]),e._v(' environment variable or as a command-line argument), a file named after the zone will be merged. For example, if you specify "az1" as the zone, '),t("code",[e._v("production_az1.yaml")]),e._v(" will be used as well.")])]),e._v(" "),t("p",[e._v("To merge "),t("code",[e._v("base.yaml")]),e._v(", "),t("code",[e._v("production.yaml")]),e._v(", and "),t("code",[e._v("production_az1.yaml")]),e._v(' files, you need to specify "production" as the runtime environment and "az1" as the zone.')]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("// base.yaml -> production.yaml -> production_az1.yaml = final configuration\n")])])]),t("h4",{attrs:{id:"using-environment-variables"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#using-environment-variables"}},[e._v("#")]),e._v(" Using Environment Variables")]),e._v(" "),t("p",[e._v("Configuration values can be provided using environment variables with a specific syntax.\n"),t("code",[e._v("$VAR")]),e._v(": This notation will be replaced with the value of the specified environment variable. If the environment variable is not set, the value will be left blank.\nYou can declare a default value using the syntax "),t("code",[e._v("{$VAR:default}")]),e._v(". This means that if the environment variable VAR is not set, the default value will be used instead.")]),e._v(" "),t("p",[e._v("Note: If you want to include the "),t("code",[e._v("$")]),e._v(" symbol literally in your configuration file (without interpreting it as an environment variable substitution), escape it by using $$. 
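t("p",[e._v("As a concrete illustration (the datastore name and default below are hypothetical, not required by Cadence), a snippet could combine all three notations:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cassandra:\n  hosts: $CASSANDRA_SEEDS          # replaced by the env var value; blank if unset\n  keyspace: {$KEYSPACE:cadence}    # falls back to cadence if KEYSPACE is unset\n  password: pa$$word               # $$ yields a literal $, so the value is pa$word\n")])])]),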
t("h3",{attrs:{id:"understand-the-basic-static-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#understand-the-basic-static-configuration"}},[e._v("#")]),e._v(" Understand the basic static configuration")]),e._v(" "),t("p",[e._v("There are quite a few configs in Cadence. Here are the most basic configurations that you should understand.")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Config name")]),e._v(" "),t("th",[e._v("Explanation")]),e._v(" "),t("th",[e._v("Recommended value")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("numHistoryShards")]),e._v(" "),t("td",[e._v("This is the most important one in Cadence config. It will be a fixed number in the cluster forever. The only way to change it is to migrate to another cluster. Refer to the Migrate cluster section. "),t("br"),e._v(" "),t("br"),e._v(" Some facts about it: "),t("br"),e._v(" 1. Each workflow will be mapped to a single shard. Within a shard, all the workflow creation/updates are serialized. "),t("br"),e._v(" 2. Each shard will be assigned to only one History node to own the shard, using a Consistent Hashing Ring. Each shard will consume a small amount of memory/CPU to do background processing. Therefore, a single History node cannot own too many shards. You may need to figure out a good number range based on your instance size (memory/CPU). "),t("br"),e._v(" 3. Also, you can’t add an infinite number of nodes to a cluster because this config is fixed. When the number of History nodes is close to or equal to numHistoryShards, there will be some History nodes that have no shards assigned to them. This wastes resources. "),t("br"),e._v(" "),t("br"),e._v(" Based on the above, you don’t want a small number of shards, which would limit the maximum size of your cluster. You also don’t want too big a number, which would require quite a big initial cluster size. "),t("br"),e._v(" Also, typically a production cluster will start with a smaller number and then more nodes/hosts are added to it. But to keep high availability, it’s recommended to use at least 4 nodes for each service (Frontend/History/Matching) at the beginning.")]),e._v(" "),t("td",[e._v("1K~16K depending on the size ranges of the cluster you expect to run, and the instance size. "),t("strong",[e._v("Typically 2K for SQL-based persistence, and 8K for Cassandra-based.")])])]),e._v(" "),t("tr",[t("td",[e._v("ringpop")]),e._v(" "),t("td",[e._v("This is the config that lets the nodes of all services connect to each other. ALL the bootstrap nodes MUST be reachable by ringpop when a service is starting up, within MaxJoinDuration. The defaultMaxJoinDuration is 2 minutes. "),t("br"),t("br"),e._v(" Bootstrap nodes are not required to be Frontend, History, or Matching nodes. In fact, they can run none of these services as long as they run the Ringpop protocol.")]),e._v(" "),t("td",[e._v("For dns mode: recommended to put the DNS name of the Frontend service "),t("br"),t("br"),e._v(" For hosts or hostfile mode: a list of Frontend service node addresses. Make sure all the bootstrap nodes are reachable at startup.")])]),e._v(" "),t("tr",[t("td",[e._v("publicClient")]),e._v(" "),t("td",[e._v("The Cadence Frontend service addresses that internal Cadence systems (like system workflows) need to talk to. "),t("br"),t("br"),e._v(" After connecting, all nodes in Ringpop will form a ring with identifiers of which service they serve. Ideally Cadence should be able to get the Frontend address from there, but Ringpop doesn’t expose this API yet.")]),e._v(" "),t("td",[e._v("Recommended to be the DNS name of the Frontend service, so that requests will be distributed to all Frontend nodes. "),t("br"),t("br"),e._v("Using localhost+Port or local container IP address+Port will not work if the IP/container is not running the Frontend service")])]),e._v(" "),t("tr",[t("td",[e._v("services.NAME.rpc")]),e._v(" "),t("td",[e._v("Configuration of how to listen to network ports and serve traffic. "),t("br"),t("br"),e._v(" bindOnLocalHost:true will bind on 127.0.0.1. It’s mostly for local development. In production you usually have to specify the IP that containers will use via bindOnIP "),t("br"),t("br"),e._v(" NAME must match the “--services” option in the server startup command.")]),e._v(" "),t("td",[e._v("Name: use as recommended in development.yaml. bindOnIP: an IP address that the container will serve the traffic with")])]),e._v(" "),t("tr",[t("td",[e._v("services.NAME.pprof")]),e._v(" "),t("td",[e._v("Golang profiling service; it will bind on the same IP as RPC")]),e._v(" "),t("td",[e._v("a port that you want to serve pprof requests on")])]),e._v(" "),t("tr",[t("td",[e._v("services.Name.metrics")]),e._v(" "),t("td",[e._v("See the Metrics & Logging section")]),e._v(" "),t("td",[e._v("cc")])]),e._v(" "),t("tr",[t("td",[e._v("clusterMetadata")]),e._v(" "),t("td",[e._v("Cadence cluster configuration. "),t("br"),t("br"),e._v("enableGlobalDomain:true will enable the Cadence cross-datacenter replication (aka XDC) feature."),t("br"),t("br"),e._v("failoverVersionIncrement: this decides the maximum number of clusters that you will have replicating to each other at the same time. For example, 10 is sufficient for most cases."),t("br"),t("br"),e._v("masterClusterName: a master cluster must be one of the enabled clusters, usually the very first cluster to start. It is only meaningful for internal purposes."),t("br"),t("br"),e._v("currentClusterName: the name of the current cluster using this config file. "),t("br"),t("br"),e._v("clusterInformation is a map from clusterName to the cluster configuration "),t("br"),t("br"),e._v("initialFailoverVersion: each cluster must use a different value from 0 to failoverVersionIncrement-1. "),t("br"),t("br"),e._v("rpcName: must be “cadence-frontend”. Can be improved in this issue. "),t("br"),t("br"),e._v("rpcAddress: the address to talk to the Frontend of the cluster for inter-cluster replication. "),t("br"),t("br"),e._v("Note that even if you don’t need XDC replication right now, if you want to migrate data stores in the future, you should enable XDC from the very beginning. You just need to use the same cluster name for both masterClusterName and currentClusterName. "),t("br"),t("br"),e._v(" Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/cross-dc-replication/#running-in-production"}},[e._v("cross dc replication")]),e._v(" for how to configure replication in production")],1),e._v(" "),t("td",[e._v("As explanation.")])]),e._v(" "),t("tr",[t("td",[e._v("dcRedirectionPolicy")]),e._v(" "),t("td",[e._v("For allowing the forwarding of frontend requests from passive clusters to the active cluster.")]),e._v(" "),t("td",[e._v("“selected-apis-forwarding”")])]),e._v(" "),t("tr",[t("td",[e._v("archival")]),e._v(" "),t("td",[e._v("This is for the history archival feature; skip it if you don’t need it. Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/archival/#running-in-production"}},[e._v("workflow archival")]),e._v(" for how to configure archival in production")],1),e._v(" "),t("td",[e._v("N/A")])]),e._v(" "),t("tr",[t("td",[e._v("blobstore")]),e._v(" "),t("td",[e._v("This is also for the history archival feature. By default the Cadence server uses a file-based blob store implementation.")]),e._v(" "),t("td",[e._v("N/A")])]),e._v(" "),t("tr",[t("td",[e._v("domainDefaults")]),e._v(" "),t("td",[e._v("Default config for each domain. Right now it is only used for the archival feature.")]),e._v(" "),t("td",[e._v("N/A")])]),e._v(" "),t("tr",[t("td",[e._v("dynamicconfig (previously known as dynamicConfigClient)")]),e._v(" "),t("td",[e._v("Dynamic config is a config manager that enables you to change configs without restarting servers. It’s a good way for Cadence to keep high availability and make things easy to configure. "),t("br"),t("br"),e._v("By default the Cadence server uses the "),t("code",[e._v("filebased")]),e._v(" client, which allows you to override default configs using a YAML file. However, this approach can be cumbersome in a production environment because it's the operator's responsibility to sync the YAML files across Cadence nodes. "),t("br"),t("br"),e._v("Therefore, we provide another option, the "),t("code",[e._v("configstore")]),e._v(" client, that stores config changes in the persistent data store for Cadence (e.g., a Cassandra database) rather than the YAML file. This approach shifts the responsibility of syncing config changes from the operator to the Cadence service. You can use Cadence CLI commands to list/get/update/restore config changes. "),t("br"),t("br"),e._v("You can also implement the dynamic config interface if you have a better way to manage configs.")]),e._v(" "),t("td",[e._v("Same as the sample development config")])]),e._v(" "),t("tr",[t("td",[e._v("persistence")]),e._v(" "),t("td",[e._v("Configuration for the data store / persistence layer. "),t("br"),t("br"),e._v("The values of DefaultStore, VisibilityStore, and AdvancedVisibilityStore should be keys of the DataStores map. "),t("br"),t("br"),e._v("DefaultStore is for core Cadence functionality. "),t("br"),t("br"),e._v("VisibilityStore is for the basic visibility feature "),t("br"),t("br"),e._v("AdvancedVisibilityStore is for advanced visibility"),t("br"),t("br"),e._v(" Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/search-workflows/#running-in-production"}},[e._v("advanced visibility")]),e._v(" for detailed configuration of advanced visibility. See "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/master/docs/persistence.md",target:"_blank",rel:"noopener noreferrer"}},[e._v("persistence documentation"),t("OutboundLink")],1),e._v(" about using a different database for Cadence")],1),e._v(" "),t("td",[e._v("As explanation")])])])]),e._v(" "),
See "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/master/docs/persistence.md",target:"_blank",rel:"noopener noreferrer"}},[e._v("persistence documentation"),t("OutboundLink")],1),e._v(" about using different database for Cadence")],1),e._v(" "),t("td",[e._v("As explanation")])])])]),e._v(" "),t("h3",{attrs:{id:"the-full-list-of-static-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#the-full-list-of-static-configuration"}},[e._v("#")]),e._v(" The full list of static configuration")]),e._v(" "),t("p",[e._v("Starting from v0.21.0, all the static configuration are defined by GoDocs in details.")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Version")]),e._v(" "),t("th",[e._v("GoDocs Link")]),e._v(" "),t("th",[e._v("Github Link")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("v0.21.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0/common/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/common/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("..."),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0?tab=versions",target:"_blank",rel:"noopener noreferrer"}},[e._v("other higher versions"),t("OutboundLink")],1)]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.21.0")]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.21.0")])])])]),e._v(" "),t("p",[e._v("For earlier versions, you can find all the configurations similarly:")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Version")]),e._v(" "),t("th",[e._v("GoDocs Link")]),e._v(" "),t("th",[e._v("Github Link")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("v0.20.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.20.0/common/service/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.20.0/common/service/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.19.2")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.19.2/common/service/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.19.2/common/service/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.18.2")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.18.2/common/service/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.18.2/common/service/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.17.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.17.0/common/service/config#Config",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration 
Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.17.0/common/service/config/config.go#L37",target:"_blank",rel:"noopener noreferrer"}},[e._v("Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("..."),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.20.0?tab=versions",target:"_blank",rel:"noopener noreferrer"}},[e._v("other lower versions"),t("OutboundLink")],1)]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.20.0")]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.20.0")])])])]),e._v(" "),t("h2",{attrs:{id:"dynamic-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#dynamic-configuration"}},[e._v("#")]),e._v(" Dynamic Configuration")]),e._v(" "),t("p",[e._v("Dynamic configuration is for fine tuning a Cadence cluster.")]),e._v(" "),t("p",[e._v("There are a lot more dynamic configurations than static configurations. Most of the default values are good for small clusters. As a cluster is scaled up, you may look for tuning it for the optimal performance.")]),e._v(" "),t("p",[e._v("Starting from v0.21.0 with this "),t("a",{attrs:{href:"https://github.com/uber/cadence/pull/4156/files",target:"_blank",rel:"noopener noreferrer"}},[e._v("change"),t("OutboundLink")],1),e._v(", all the dynamic configuration are well defined by GoDocs.")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Version")]),e._v(" "),t("th",[e._v("GoDocs Link")]),e._v(" "),t("th",[e._v("Github Link")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("v0.21.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0/common/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.21.0/common/dynamicconfig/constants.go#L58",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("..."),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0?tab=versions",target:"_blank",rel:"noopener noreferrer"}},[e._v("other higher versions"),t("OutboundLink")],1)]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.21.0")]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.21.0")])])])]),e._v(" "),t("p",[e._v("For earlier versions, you can find all the configurations similarly:")]),e._v(" "),t("table",[t("thead",[t("tr",[t("th",[e._v("Version")]),e._v(" "),t("th",[e._v("GoDocs Link")]),e._v(" "),t("th",[e._v("Github Link")])])]),e._v(" "),t("tbody",[t("tr",[t("td",[e._v("v0.20.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.20.0/common/service/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.20.0/common/service/dynamicconfig/constants.go#L53",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.19.2")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.19.2/common/service/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" 
"),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.19.2/common/service/dynamicconfig/constants.go#L53",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.18.2")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.18.2/common/service/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.18.2/common/service/dynamicconfig/constants.go#L53",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("v0.17.0")]),e._v(" "),t("td",[t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.17.0/common/service/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration Docs"),t("OutboundLink")],1)]),e._v(" "),t("td",[t("a",{attrs:{href:"https://github.com/uber/cadence/blob/v0.17.0/common/service/dynamicconfig/constants.go#L53",target:"_blank",rel:"noopener noreferrer"}},[e._v("Dynamic Configuration"),t("OutboundLink")],1)])]),e._v(" "),t("tr",[t("td",[e._v("..."),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.20.0?tab=versions",target:"_blank",rel:"noopener noreferrer"}},[e._v("other lower versions"),t("OutboundLink")],1)]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.20.0")]),e._v(" "),t("td",[e._v("...Replace the version in the URL of v0.20.0")])])])]),e._v(" "),t("p",[e._v("However, the GoDocs in earlier versions don't contain detailed information. You need to look it up the newer version of GoDocs."),t("br"),e._v('\nFor example, search for "EnableGlobalDomain" in Dynamic Configuration '),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/667b7c68e67682a8d23f4b8f93e91a791313d8d6/common/dynamicconfig/constants.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("Comments in v0.21.0"),t("OutboundLink")],1),e._v(" or "),t("a",{attrs:{href:"https://pkg.go.dev/github.com/uber/cadence@v0.21.0/common/dynamicconfig#Key",target:"_blank",rel:"noopener noreferrer"}},[e._v("Docs of v0.21.0"),t("OutboundLink")],1),e._v(", as the usage of DynamicConfiguration never changes.")]),e._v(" "),t("ul",[t("li",[t("strong",[e._v("KeyName")]),e._v(" is the key that you will use in the dynamicconfig yaml content")]),e._v(" "),t("li",[t("strong",[e._v("Default value")]),e._v(" is the default value")]),e._v(" "),t("li",[t("strong",[e._v("Value type")]),e._v(" indicates the type that you should change the yaml value of:\n"),t("ul",[t("li",[e._v("Int should be integer like 123")]),e._v(" "),t("li",[e._v("Float should be number like 123.4")]),e._v(" "),t("li",[e._v("Duration should be Golang duration like: 10s, 2m, 5h for 10 seconds, 2 minutes and 5 hours.")]),e._v(" "),t("li",[e._v("Bool should be true or false")]),e._v(" "),t("li",[e._v("Map should be map of yaml")])])]),e._v(" "),t("li",[t("strong",[e._v("Allowed filters")]),e._v(" indicates what kinds of filters you can set as constraints with the dynamic configuration.\n"),t("ul",[t("li",[t("code",[e._v("DomainName")]),e._v(" can be used with "),t("code",[e._v("domainName")])]),e._v(" "),t("li",[t("code",[e._v("N/A")]),e._v(" means no filters can be set. 
The config will be global.")])])])]),e._v(" "),t("p",[e._v("For example, if you want to change the rate limiting for the List API, below is the config:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("// FrontendVisibilityListMaxQPS is max qps frontend can list open/close workflows\n// KeyName: frontend.visibilityListMaxQPS\n// Value type: Int\n// Default value: 10\n// Allowed filters: DomainName\nFrontendVisibilityListMaxQPS\n")])])]),t("p",[e._v("Then you can add the config like:")]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("frontend.visibilityListMaxQPS")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("value")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("1000")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("constraints")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("domainName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"domainA"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("value")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("2000")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("constraints")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("domainName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"domainB"')]),e._v(" \n")])])]),t("p",[e._v("You should then see that "),t("code",[e._v("domainA")]),e._v(" can perform 1K List operations per second, while "),t("code",[e._v("domainB")]),e._v(" can perform 2K per second.")]),e._v(" "),t("p",[e._v("NOTE 1: the size-related configuration values are in bytes.")]),e._v(" "),t("p",[e._v("NOTE 2: regarding .persistenceMaxQPS versus .persistenceGlobalMaxQPS --- persistenceMaxQPS is local to a single node while persistenceGlobalMaxQPS is global across all nodes. persistenceGlobalMaxQPS takes precedence if set to greater than zero; by default it is zero, so persistenceMaxQPS is used. See the sketch below.")]),e._v(" "),
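/* Editor's note: a minimal sketch of NOTE 2 above, assuming you want a cluster-wide cap on
   persistence calls from the frontend service. The exact key names (service prefix included)
   are assumptions to be verified against the Dynamic Configuration GoDocs listed earlier:

   frontend.persistenceGlobalMaxQPS:   # global: shared across all frontend nodes
     - value: 3000
   frontend.persistenceMaxQPS:         # local: per single frontend node; only used while the global knob is 0
     - value: 500

   With both set, the non-zero global value takes precedence, per NOTE 2. */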
t("h3",{attrs:{id:"how-to-update-dynamic-configuration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#how-to-update-dynamic-configuration"}},[e._v("#")]),e._v(" How to update Dynamic Configuration")]),e._v(" "),t("h4",{attrs:{id:"file-based-client"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#file-based-client"}},[e._v("#")]),e._v(" File-based client")]),e._v(" "),t("p",[e._v("By default, Cadence uses a file-based client to manage dynamic configurations. The following are the approaches to changing dynamic configs using a yaml file.")]),e._v(" "),t("ul",[t("li",[e._v("Local docker-compose by mounting a volume: 1. Change the dynamic configs in "),t("code",[e._v("cadence/config/dynamicconfig/development.yaml")]),e._v(". 2. Update the "),t("code",[e._v("cadence")]),e._v(" section in the docker compose file and mount the "),t("code",[e._v("dynamicconfig")]),e._v(" folder to the host machine like the following:")])]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("cadence")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("image")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" ubercadence/server"),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("master"),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v("auto"),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v("setup\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("ports")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("...")]),e._v("(don't change anything here)\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("environment")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("...")]),e._v("(don't change anything here)\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"DYNAMIC_CONFIG_FILE_PATH=/etc/custom-dynamicconfig/development.yaml"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("volumes")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"/Users//cadence/config/dynamicconfig:/etc/custom-dynamicconfig"')]),e._v("\n")])])]),t("ul",[t("li",[t("p",[e._v("Local docker-compose by logging into the container: run "),t("code",[e._v("docker exec -it docker_cadence_1 /bin/bash")]),e._v(" to log in to your container. Then "),t("code",[e._v("vi config/dynamicconfig/development.yaml")]),e._v(" to make any change. After you change the config, use "),t("code",[e._v("docker restart docker_cadence_1")]),e._v(" to restart the cadence instance. Note that you can also use this approach to change static config, but it must be changed through "),t("code",[e._v("config/config_template.yaml")]),e._v(" instead of "),t("code",[e._v("config/docker.yaml")]),e._v(" because "),t("code",[e._v("config/docker.yaml")]),e._v(" is generated on startup.")])]),e._v(" "),t("li",[t("p",[e._v("In a production cluster: Follow this example of a Helm Chart to deploy Cadence, update the dynamic config "),t("a",{attrs:{href:"https://github.com/banzaicloud/banzai-charts/blob/be57e81c107fd2ccdfc6cf95dccf6cbab226920c/cadence/templates/server-configmap.yaml#L170",target:"_blank",rel:"noopener noreferrer"}},[e._v("here"),t("OutboundLink")],1),e._v(" and restart the cluster.")])]),e._v(" "),t("li",[t("p",[e._v("DEBUG: How do you make sure your updates to dynamicconfig are loaded? 
For example, if you added the following to "),t("code",[e._v("development.yaml")])])])]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("frontend.visibilityListMaxQPS")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("value")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("10000")]),e._v("\n")])])]),t("p",[e._v("After restarting the Cadence instances, execute a command like this to let Cadence load the config (it is lazily loaded on first use).\n"),t("code",[e._v("cadence --domain <> workflow list")])]),e._v(" "),t("p",[e._v("Then you should see a log line like the one below")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v('cadence_1 | {"level":"info","ts":"2021-05-07T18:43:07.869Z","msg":"First loading dynamic config","service":"cadence-frontend","key":"frontend.visibilityListMaxQPS,domainName:sample,clusterName:primary","value":"10000","default-value":"10","logging-call-at":"config.go:93"}\n')])])]),t("h4",{attrs:{id:"config-store-client"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#config-store-client"}},[e._v("#")]),e._v(" Config store client")]),e._v(" "),t("p",[e._v("You can set the "),t("code",[e._v("dynamicconfig")]),e._v(" client in the static configuration to "),t("code",[e._v("configstore")]),e._v(" in order to store config changes in a database, as shown below.")]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("dynamicconfig")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("client")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" configstore\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("configstore")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("pollInterval")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"10s"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("updateRetryAttempts")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("2")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("FetchTimeout")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"2s"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("UpdateTimeout")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"2s"')]),e._v("\n")])])]),t("p",[e._v("If you are still using the deprecated config "),t("code",[e._v("dynamicConfigClient")]),e._v(" like below, you need to replace it with the new "),t("code",[e._v("dynamicconfig")]),e._v(" as shown above to use the "),t("code",[e._v("configstore")]),e._v(" client.")]),e._v(" "),t("div",{staticClass:"language-yaml 
extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("dynamicConfigClient")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("filepath")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"/etc/cadence/config/dynamicconfig/config.yaml"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("pollInterval")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"10s"')]),e._v("\n")])])]),t("p",[e._v("After changing the client to "),t("code",[e._v("configstore")]),e._v(" and restarting Cadence, you can manage dynamic configs using "),t("code",[e._v("cadence admin config")]),e._v(" CLI commands. You may need to set your custom dynamic configs again as the previous configs are not automatically migrated from the YAML file to the database.")]),e._v(" "),t("ul",[t("li",[t("code",[e._v("cadence admin config listdc")]),e._v(" lists all dynamic config overrides")]),e._v(" "),t("li",[t("code",[e._v("cadence admin config getdc --dynamic_config_name ")]),e._v(" gets the value of a specific dynamic config")]),e._v(" "),t("li",[t("code",[e._v("cadence admin config updc --dynamic_config_name --dynamic_config_value '{\"Value\": }'")]),e._v(" updates the value of a specific dynamic config")]),e._v(" "),t("li",[t("code",[e._v("cadence admin config resdc --dynamic_config_name ")]),e._v(" restores a specific dynamic config to its default value")])]),e._v(" "),t("h2",{attrs:{id:"other-advanced-features"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#other-advanced-features"}},[e._v("#")]),e._v(" Other Advanced Features")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/search-workflows/#running-in-production"}},[e._v("advanced visibility")]),e._v(" for how to configure advanced visibility in production.")],1)]),e._v(" "),t("li",[t("p",[e._v("Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/archival/#running-in-production"}},[e._v("workflow archival")]),e._v(" for how to configure archival in production.")],1)]),e._v(" "),t("li",[t("p",[e._v("Go to "),t("RouterLink",{attrs:{to:"/docs/concepts/cross-dc-replication/#running-in-production"}},[e._v("cross dc replication")]),e._v(" for how to configure replication in production.")],1)])]),e._v(" "),t("h2",{attrs:{id:"deployment-release"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#deployment-release"}},[e._v("#")]),e._v(" Deployment & Release")]),e._v(" "),t("p",[e._v("Kubernetes is the most popular way to deploy Cadence cluster. And easiest way is to use "),t("a",{attrs:{href:"https://github.com/banzaicloud/banzai-charts/tree/master/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence Helm Charts"),t("OutboundLink")],1),e._v(" that maintained by a community project.")]),e._v(" "),t("p",[e._v("If you are looking for deploying Cadence using other technologies, then it's reccomended to use Cadence docker images. You can use offical ones, or you may customize it based on what you need. 
See "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/docker#using-docker-image-for-production",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence docker package"),t("OutboundLink")],1),e._v(" for how to run the images.")]),e._v(" "),t("p",[e._v("It's always recommended to use the latest release. See "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cadence release pages"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("Please subscribe the release of project by :")]),e._v(" "),t("p",[e._v('Go to https://github.com/uber/cadence -> Click the right top "Watch" button -> Custom -> "Release".')]),e._v(" "),t("p",[e._v("And see "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#upgrading-server"}},[e._v("how to upgrade a Cadence cluster")])],1),e._v(" "),t("h2",{attrs:{id:"stress-bench-test-a-cluster"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#stress-bench-test-a-cluster"}},[e._v("#")]),e._v(" Stress/Bench Test a cluster")]),e._v(" "),t("p",[e._v("It's recommended to run bench test on your cluster following this "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/bench",target:"_blank",rel:"noopener noreferrer"}},[e._v("package"),t("OutboundLink")],1),e._v(" to see the maximum throughput that it can take, whenever you change some setup.")])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file diff --git a/assets/js/96.5f08cd95.js b/assets/js/96.dc8dce30.js similarity index 99% rename from assets/js/96.5f08cd95.js rename to assets/js/96.dc8dce30.js index ac0983324..340e4ebb7 100644 --- a/assets/js/96.5f08cd95.js +++ b/assets/js/96.dc8dce30.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[96],{403:function(e,t,a){"use strict";a.r(t);var s=a(0),o=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"cluster-maintenance"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cluster-maintenance"}},[e._v("#")]),e._v(" Cluster Maintenance")]),e._v(" "),t("p",[e._v("This includes how to use and maintain a Cadence cluster for both clients and server clusters.")]),e._v(" "),t("h2",{attrs:{id:"scale-up-down-cluster"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#scale-up-down-cluster"}},[e._v("#")]),e._v(" Scale up & down Cluster")]),e._v(" "),t("ul",[t("li",[e._v("When CPU/Memory is getting bottleneck on Cadence instances, you may scale up or add more instances.")]),e._v(" "),t("li",[e._v("Watch "),t("RouterLink",{attrs:{to:"/docs/operation-guide/monitor/"}},[e._v("Cadence metrics")]),e._v(" "),t("ul",[t("li",[e._v("See if the external traffic to frontend is normal")]),e._v(" "),t("li",[e._v("If the slowness is due to too many tasks on a tasklist, you may need to "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#scale-up-a-tasklist-using-scalable-tasklist-feature"}},[e._v("scale up the tasklist")])],1),e._v(" "),t("li",[e._v("If persistence latency is getting too high, try scale up your DB instance")])])],1),e._v(" "),t("li",[e._v("Never change the "),t("RouterLink",{attrs:{to:"/docs/operation-guide/setup/#static-configuration"}},[t("code",[e._v("numOfShards")]),e._v(" of a cluster")]),e._v(". 
If you need that because the current one is too small, follow the instructions to "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#migrate-cadence-cluster"}},[e._v("migrate your cluster to a new one")]),e._v(".")],1)]),e._v(" "),t("h2",{attrs:{id:"scale-up-a-tasklist-using-scalable-tasklist-feature"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#scale-up-a-tasklist-using-scalable-tasklist-feature"}},[e._v("#")]),e._v(" Scale up a tasklist using "),t("code",[e._v("Scalable tasklist")]),e._v(" feature")]),e._v(" "),t("p",[e._v("By default a tasklist is not scalable enough to support hundreds of tasks per second. That’s mainly because each tasklist is assigned to a Matching service node, and dispatching tasks in a tasklist is in sequence.")]),e._v(" "),t("p",[e._v("In the past, Cadence recommended using multiple tasklists to start workflow/activity. You need to make a list of tasklists and randomly pick one when starting workflows. And then when starting workers, let them listen to all the tasklists.")]),e._v(" "),t("p",[e._v("Nowadays, Cadence has a feature called “Scalable tasklist”. It will divide a tasklist into multiple logical partitions, which can distribute tasks to multiple Matching service nodes. By default this feature is not enabled because there is some performance penalty on the server side, plus it’s not common that a tasklist needs to support more than hundreds tasks per second.")]),e._v(" "),t("p",[e._v("You must make a dynamic configuration change in Cadence server to use this feature:")]),e._v(" "),t("p",[t("strong",[e._v("matching.numTasklistWritePartitions")])]),e._v(" "),t("p",[e._v("and")]),e._v(" "),t("p",[t("strong",[e._v("matching.numTasklistReadPartitions")])]),e._v(" "),t("p",[e._v("matching.numTasklistWritePartitions is the number of partitions when a Cadence server sends a task to the tasklist.\nmatching.numTasklistReadPartitions is the number of partitions when your worker accepts a task from the tasklist.")]),e._v(" "),t("p",[e._v("There are a few things to know when using this feature:")]),e._v(" "),t("ul",[t("li",[e._v("Always make sure "),t("code",[e._v("matching.numTasklistWritePartitions <= matching.numTasklistReadPartitions")]),e._v(" . Otherwise there may be some tasks that are sent to a tasklist partition but no poller(worker) will be able to pick up.")]),e._v(" "),t("li",[e._v("Because of above, when scaling down the number of partitions, you must decrease the WritePartitions first, to wait for a certain time to ensure that tasks are drained, and then decrease ReadPartitions.")]),e._v(" "),t("li",[e._v("Both domain names and taskListName should be specified in the dynamic config. An example of using this feature. 
See more details about dynamic config format using file based "),t("RouterLink",{attrs:{to:"/docs/operation-guide/setup/#static-configs"}},[e._v("dynamic config")]),e._v(".")],1)]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v('matching.numTasklistWritePartitions:\n - value: 10\n constraints:\n domainName: "samples-domain"\n taskListName: "aScalableTasklistName"\nmatching.numTasklistReadPartitions:\n - value: 10\n constraints:\n domainName: "samples-domain"\n taskListName: "aScalableTasklistName"\n')])])]),t("p",[e._v("NOTE: the value must be integer without double quotes.")]),e._v(" "),t("h2",{attrs:{id:"restarting-cluster"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#restarting-cluster"}},[e._v("#")]),e._v(" Restarting Cluster")]),e._v(" "),t("p",[e._v("Make sure rolling restart to keep high availability.")]),e._v(" "),t("h2",{attrs:{id:"optimize-sql-persistence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#optimize-sql-persistence"}},[e._v("#")]),e._v(" Optimize SQL Persistence")]),e._v(" "),t("ul",[t("li",[e._v("Connection is shared within a Cadence server host")]),e._v(" "),t("li",[e._v("For each host, The max number of connections it will consume is maxConn of defaultStore + maxConn of visibilityStore.")]),e._v(" "),t("li",[e._v("The total max number of connections your Cadence cluster will consume is the summary from all hosts(from Frontend/Matching/History/SysWorker services)")]),e._v(" "),t("li",[e._v("Frontend and history nodes need both default and visibility Stores, but matching and sys workers only need default Stores, they don't need to talk to visibility DBs.")]),e._v(" "),t("li",[e._v("For default Stores, history service will take the most connection, then Frontend/Matching. SysWorker will use much less than others")]),e._v(" "),t("li",[e._v("Default Stores is for Cadence’ core data model, which requires strong consistency. So it cannot use replicas. VisibilityStore is not for core data models. It’s recommended to use a separate DB for visibility store if using DB based visibility.")]),e._v(" "),t("li",[e._v("Visibility Stores usually take much less connection as the workload is much lightweight(less QPS and no explicit transactions).")]),e._v(" "),t("li",[e._v("Visibility Stores require eventual consistency for read. So it can use replicas.")]),e._v(" "),t("li",[e._v("MaxIdelConns should be less than MaxConns, so that the connections can be distributed better across hosts.")])]),e._v(" "),t("h2",{attrs:{id:"upgrading-server"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upgrading-server"}},[e._v("#")]),e._v(" Upgrading Server")]),e._v(" "),t("p",[e._v('To get notified about release, please subscribe the release of project by : Go to https://github.com/uber/cadence -> Click the right top "Watch" button -> Custom -> "Release".')]),e._v(" "),t("p",[e._v("It's recommended to upgrade one minor version at a time. E.g, if you are at 0.10, you should upgrade to 0.11, stabilize it with running some normal workload to make sure that the upgraded server is happy with the schema changes. After ~1 hour, then upgrade to 0.12. then 0.13. etc.")]),e._v(" "),t("p",[e._v("The reason is that for each minor upgrade, you should be able to follow the release notes about what you should do for upgrading. The release notes may require you to run some commands. 
This will also help to narrow down the cause when something goes wrong.")]),e._v(" "),t("h3",{attrs:{id:"how-to-upgrade"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#how-to-upgrade"}},[e._v("#")]),e._v(" How to upgrade:")]),e._v(" "),t("p",[e._v("Things that you may need to do for upgrading a minor version(patch version upgrades should not need it):")]),e._v(" "),t("ul",[t("li",[e._v("Schema(DB/ElasticSearch) changes")]),e._v(" "),t("li",[e._v("Configuration format/layout changes")]),e._v(" "),t("li",[e._v("Data migration -- this is very rare. For example, "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.16.0",target:"_blank",rel:"noopener noreferrer"}},[e._v("upgrading from 0.15.x to 0.16.0 requires a data migration"),t("OutboundLink")],1),e._v(".")])]),e._v(" "),t("p",[e._v("You should read through the release instruction for each minor release to understand what needs to be done.")]),e._v(" "),t("ul",[t("li",[e._v("Schema changes need to be applied before upgrading server\n"),t("ul",[t("li",[e._v("Upgrade MySQL/Postgres schema if applicable")]),e._v(" "),t("li",[e._v("Upgrade Cassandra schema if applicable")]),e._v(" "),t("li",[e._v("Upgrade ElasticSearch schema if applicable")])])]),e._v(" "),t("li",[e._v("Usually schema change is backward compatible. So rolling back usually is not a problem. It also means that Cadence allows running a mixed version of schema, as long as they are all greater than or equal to the required version of the server.\nOther requirements for upgrading should be found in the release notes. It may contain information about config changes, or special rollback instructions if normal rollback may cause problems.")]),e._v(" "),t("li",[e._v("Similarly, data migration should be done before upgrading the server binary.")])]),e._v(" "),t("p",[e._v("NOTE: Do not use “auto-setup” images to upgrade your schema. It's mainly for development. At most for initial setup only.")]),e._v(" "),t("h3",{attrs:{id:"how-to-apply-db-schema-changes"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#how-to-apply-db-schema-changes"}},[e._v("#")]),e._v(" How to apply DB schema changes")]),e._v(" "),t("p",[e._v("For how to apply database schema, refer to this doc: "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/tools/sql",target:"_blank",rel:"noopener noreferrer"}},[e._v("SQL tool README"),t("OutboundLink")],1),e._v(" "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/tools/cassandra",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cassandra tool README"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("The tool makes use of a table called “schema_versions” to keep track of upgrading History. But there is no transaction guarantee for cross table operations. So in case of some error, you may need to fix or apply schema change manually.\nAlso, the schema tool by default will upgrade schema to the latest, so no manual is required. 
( you can also specify to let it upgrade to any place, like 0.14).")]),e._v(" "),t("p",[e._v("Database schema changes are versioned in the folders: "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/schema/mysql/v57/cadence/versioned",target:"_blank",rel:"noopener noreferrer"}},[e._v("Versioned Schema Changes"),t("OutboundLink")],1),e._v(" for Default Store\nand "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/schema/mysql/v57/visibility/versioned",target:"_blank",rel:"noopener noreferrer"}},[e._v("Versioned Schema Changes"),t("OutboundLink")],1),e._v(" for Visibility Store if you use database for basic visibility instead of ElasticSearch.")]),e._v(" "),t("p",[e._v("If you use homebrew, the schema files are located at "),t("code",[e._v("/usr/local/etc/cadence/schema/")]),e._v(".")]),e._v(" "),t("p",[e._v("Alternatively, you can checkout the "),t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("repo"),t("OutboundLink")],1),e._v(" and the release tag. E.g. "),t("code",[e._v("git checkout v0.21.0")]),e._v(" and then the schema files is at "),t("code",[e._v("./schema/")])])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[96],{404:function(e,t,a){"use strict";a.r(t);var s=a(0),o=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"cluster-maintenance"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cluster-maintenance"}},[e._v("#")]),e._v(" Cluster Maintenance")]),e._v(" "),t("p",[e._v("This includes how to use and maintain a Cadence cluster for both clients and server clusters.")]),e._v(" "),t("h2",{attrs:{id:"scale-up-down-cluster"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#scale-up-down-cluster"}},[e._v("#")]),e._v(" Scale up & down Cluster")]),e._v(" "),t("ul",[t("li",[e._v("When CPU/Memory is getting bottleneck on Cadence instances, you may scale up or add more instances.")]),e._v(" "),t("li",[e._v("Watch "),t("RouterLink",{attrs:{to:"/docs/operation-guide/monitor/"}},[e._v("Cadence metrics")]),e._v(" "),t("ul",[t("li",[e._v("See if the external traffic to frontend is normal")]),e._v(" "),t("li",[e._v("If the slowness is due to too many tasks on a tasklist, you may need to "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#scale-up-a-tasklist-using-scalable-tasklist-feature"}},[e._v("scale up the tasklist")])],1),e._v(" "),t("li",[e._v("If persistence latency is getting too high, try scale up your DB instance")])])],1),e._v(" "),t("li",[e._v("Never change the "),t("RouterLink",{attrs:{to:"/docs/operation-guide/setup/#static-configuration"}},[t("code",[e._v("numOfShards")]),e._v(" of a cluster")]),e._v(". If you need that because the current one is too small, follow the instructions to "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#migrate-cadence-cluster"}},[e._v("migrate your cluster to a new one")]),e._v(".")],1)]),e._v(" "),t("h2",{attrs:{id:"scale-up-a-tasklist-using-scalable-tasklist-feature"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#scale-up-a-tasklist-using-scalable-tasklist-feature"}},[e._v("#")]),e._v(" Scale up a tasklist using "),t("code",[e._v("Scalable tasklist")]),e._v(" feature")]),e._v(" "),t("p",[e._v("By default a tasklist is not scalable enough to support hundreds of tasks per second. 
That’s mainly because each tasklist is assigned to a Matching service node, and tasks in a tasklist are dispatched in sequence.")]),e._v(" "),t("p",[e._v("In the past, Cadence recommended using multiple tasklists to start workflows/activities. You needed to make a list of tasklists and randomly pick one when starting workflows. Then, when starting workers, you let them listen to all the tasklists.")]),e._v(" "),t("p",[e._v("Nowadays, Cadence has a feature called “Scalable tasklist”. It divides a tasklist into multiple logical partitions, which can distribute tasks to multiple Matching service nodes. By default this feature is not enabled, because there is some performance penalty on the server side, plus it’s not common that a tasklist needs to support more than hundreds of tasks per second.")]),e._v(" "),t("p",[e._v("You must make a dynamic configuration change in the Cadence server to use this feature:")]),e._v(" "),t("p",[t("strong",[e._v("matching.numTasklistWritePartitions")])]),e._v(" "),t("p",[e._v("and")]),e._v(" "),t("p",[t("strong",[e._v("matching.numTasklistReadPartitions")])]),e._v(" "),t("p",[e._v("matching.numTasklistWritePartitions is the number of partitions used when a Cadence server sends a task to the tasklist.\nmatching.numTasklistReadPartitions is the number of partitions used when your worker accepts a task from the tasklist.")]),e._v(" "),t("p",[e._v("There are a few things to know when using this feature:")]),e._v(" "),t("ul",[t("li",[e._v("Always make sure "),t("code",[e._v("matching.numTasklistWritePartitions <= matching.numTasklistReadPartitions")]),e._v(" . Otherwise there may be some tasks that are sent to a tasklist partition that no poller (worker) will be able to pick up.")]),e._v(" "),t("li",[e._v("Because of the above, when scaling down the number of partitions, you must decrease WritePartitions first, wait for a certain time to ensure that tasks are drained, and then decrease ReadPartitions.")]),e._v(" "),t("li",[e._v("Both domainName and taskListName should be specified in the dynamic config. An example of using this feature is shown below. 
See more details about the dynamic config format in the file-based "),t("RouterLink",{attrs:{to:"/docs/operation-guide/setup/#static-configs"}},[e._v("dynamic config")]),e._v(" section.")],1)]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v('matching.numTasklistWritePartitions:\n - value: 10\n constraints:\n domainName: "samples-domain"\n taskListName: "aScalableTasklistName"\nmatching.numTasklistReadPartitions:\n - value: 10\n constraints:\n domainName: "samples-domain"\n taskListName: "aScalableTasklistName"\n')])])]),t("p",[e._v("NOTE: the value must be an integer without double quotes.")]),e._v(" "),t("h2",{attrs:{id:"restarting-cluster"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#restarting-cluster"}},[e._v("#")]),e._v(" Restarting Cluster")]),e._v(" "),t("p",[e._v("Make sure to use rolling restarts to keep high availability.")]),e._v(" "),t("h2",{attrs:{id:"optimize-sql-persistence"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#optimize-sql-persistence"}},[e._v("#")]),e._v(" Optimize SQL Persistence")]),e._v(" "),t("ul",[t("li",[e._v("Connections are shared within a Cadence server host")]),e._v(" "),t("li",[e._v("For each host, the max number of connections it will consume is the maxConn of the defaultStore + the maxConn of the visibilityStore.")]),e._v(" "),t("li",[e._v("The total max number of connections your Cadence cluster will consume is the sum across all hosts (from Frontend/Matching/History/SysWorker services)")]),e._v(" "),t("li",[e._v("Frontend and history nodes need both default and visibility stores, but matching and sys workers only need default stores; they don't need to talk to visibility DBs.")]),e._v(" "),t("li",[e._v("For default stores, the history service will take the most connections, then Frontend/Matching. SysWorker will use much fewer than the others")]),e._v(" "),t("li",[e._v("The default store is for Cadence's core data model, which requires strong consistency, so it cannot use replicas. The visibility store is not for core data models. It's recommended to use a separate DB for the visibility store if using DB-based visibility.")]),e._v(" "),t("li",[e._v("Visibility stores usually take far fewer connections as the workload is much lighter (less QPS and no explicit transactions).")]),e._v(" "),t("li",[e._v("Visibility stores only require eventual consistency for reads, so they can use replicas.")]),e._v(" "),t("li",[e._v("MaxIdleConns should be less than MaxConns, so that the connections can be distributed better across hosts. See the sketch below for how these knobs might appear in the static configuration.")])]),e._v(" "),
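/* Editor's note: a hedged sketch of the connection knobs discussed above as they might appear
   in the static configuration's sql section. Field names like maxConns/maxIdleConns are
   assumptions and should be verified against the Configuration GoDocs for your version:

   persistence:
     defaultStore: default
     datastores:
       default:
         sql:
           pluginName: "mysql"
           databaseName: "cadence"
           connectAddr: "127.0.0.1:3306"
           maxConns: 20
           maxIdleConns: 10   # keep below maxConns so connections spread better across hosts
*/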
t("h2",{attrs:{id:"upgrading-server"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#upgrading-server"}},[e._v("#")]),e._v(" Upgrading Server")]),e._v(" "),t("p",[e._v('To get notified about releases, please subscribe to releases of the project: Go to https://github.com/uber/cadence -> Click the top right "Watch" button -> Custom -> "Release".')]),e._v(" "),t("p",[e._v("It's recommended to upgrade one minor version at a time. E.g., if you are at 0.10, you should upgrade to 0.11 and stabilize it by running some normal workload to make sure that the upgraded server is happy with the schema changes. After ~1 hour, upgrade to 0.12, then 0.13, etc.")]),e._v(" "),t("p",[e._v("The reason is that for each minor upgrade, you should be able to follow the release notes about what you should do for upgrading. The release notes may require you to run some commands. This will also help to narrow down the cause when something goes wrong.")]),e._v(" "),t("h3",{attrs:{id:"how-to-upgrade"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#how-to-upgrade"}},[e._v("#")]),e._v(" How to upgrade:")]),e._v(" "),t("p",[e._v("Things that you may need to do when upgrading a minor version (patch version upgrades should not need them):")]),e._v(" "),t("ul",[t("li",[e._v("Schema (DB/ElasticSearch) changes")]),e._v(" "),t("li",[e._v("Configuration format/layout changes")]),e._v(" "),t("li",[e._v("Data migration -- this is very rare. For example, "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.16.0",target:"_blank",rel:"noopener noreferrer"}},[e._v("upgrading from 0.15.x to 0.16.0 requires a data migration"),t("OutboundLink")],1),e._v(".")])]),e._v(" "),t("p",[e._v("You should read through the release instructions for each minor release to understand what needs to be done.")]),e._v(" "),t("ul",[t("li",[e._v("Schema changes need to be applied before upgrading the server\n"),t("ul",[t("li",[e._v("Upgrade MySQL/Postgres schema if applicable")]),e._v(" "),t("li",[e._v("Upgrade Cassandra schema if applicable")]),e._v(" "),t("li",[e._v("Upgrade ElasticSearch schema if applicable")])])]),e._v(" "),t("li",[e._v("Usually schema changes are backward compatible, so rolling back is usually not a problem. It also means that Cadence allows running mixed schema versions, as long as they are all greater than or equal to the required version of the server.\nOther requirements for upgrading should be found in the release notes. They may contain information about config changes, or special rollback instructions if a normal rollback may cause problems.")]),e._v(" "),t("li",[e._v("Similarly, data migration should be done before upgrading the server binary.")])]),e._v(" "),t("p",[e._v("NOTE: Do not use “auto-setup” images to upgrade your schema. They are mainly for development, and at most for initial setup only.")]),e._v(" "),t("h3",{attrs:{id:"how-to-apply-db-schema-changes"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#how-to-apply-db-schema-changes"}},[e._v("#")]),e._v(" How to apply DB schema changes")]),e._v(" "),t("p",[e._v("For how to apply database schema changes, refer to these docs: "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/tools/sql",target:"_blank",rel:"noopener noreferrer"}},[e._v("SQL tool README"),t("OutboundLink")],1),e._v(" "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/tools/cassandra",target:"_blank",rel:"noopener noreferrer"}},[e._v("Cassandra tool README"),t("OutboundLink")],1)]),e._v(" "),t("p",[e._v("The tool makes use of a table called “schema_versions” to keep track of the upgrade history. But there is no transaction guarantee for cross-table operations, so in case of an error you may need to fix or apply the schema change manually.\nAlso, the schema tool by default will upgrade the schema to the latest version, so no manual version selection is required (you can also tell it to upgrade to a specific version, like 0.14). A hypothetical invocation is sketched below.")]),e._v(" "),
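/* Editor's note: a hypothetical invocation of the schema tools referenced above -- the exact
   command names and flags are assumptions and should be checked against the SQL/Cassandra
   tool READMEs linked in this section:

   cadence-cassandra-tool --ep 127.0.0.1 -k cadence update-schema -d ./schema/cassandra/cadence/versioned
   cadence-sql-tool --ep 127.0.0.1 -u root --pw cadence update-schema -d ./schema/mysql/v57/cadence/versioned */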
t("p",[e._v("Database schema changes are versioned in these folders: "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/schema/mysql/v57/cadence/versioned",target:"_blank",rel:"noopener noreferrer"}},[e._v("Versioned Schema Changes"),t("OutboundLink")],1),e._v(" for the Default Store\nand "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/schema/mysql/v57/visibility/versioned",target:"_blank",rel:"noopener noreferrer"}},[e._v("Versioned Schema Changes"),t("OutboundLink")],1),e._v(" for the Visibility Store if you use a database for basic visibility instead of ElasticSearch.")]),e._v(" "),t("p",[e._v("If you use homebrew, the schema files are located at "),t("code",[e._v("/usr/local/etc/cadence/schema/")]),e._v(".")]),e._v(" "),t("p",[e._v("Alternatively, you can check out the "),t("a",{attrs:{href:"https://github.com/uber/cadence",target:"_blank",rel:"noopener noreferrer"}},[e._v("repo"),t("OutboundLink")],1),e._v(" at the release tag, e.g. "),t("code",[e._v("git checkout v0.21.0")]),e._v(", and then the schema files are at "),t("code",[e._v("./schema/")])])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file diff --git a/assets/js/97.fad9a783.js b/assets/js/97.8afc95f5.js similarity index 99% rename from assets/js/97.fad9a783.js rename to assets/js/97.8afc95f5.js index fc74bfc63..b3ddf9c37 100644 --- a/assets/js/97.fad9a783.js +++ b/assets/js/97.8afc95f5.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[97],{406:function(e,t,a){"use strict";a.r(t);var r=a(0),s=Object(r.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"cluster-monitoring"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cluster-monitoring"}},[e._v("#")]),e._v(" Cluster Monitoring")]),e._v(" "),t("h2",{attrs:{id:"instructions"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#instructions"}},[e._v("#")]),e._v(" Instructions")]),e._v(" "),t("p",[e._v("Cadence emits metrics for both the Server and the client libraries:")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("Follow this example to emit "),t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/pull/36",target:"_blank",rel:"noopener noreferrer"}},[e._v("client side metrics for the Golang client"),t("OutboundLink")],1)]),e._v(" "),t("ul",[t("li",[e._v("You can use other metrics emitters like "),t("a",{attrs:{href:"https://github.com/uber-go/tally/tree/master/m3",target:"_blank",rel:"noopener noreferrer"}},[e._v("M3"),t("OutboundLink")],1)]),e._v(" "),t("li",[e._v("Alternatively, you can implement the tally "),t("a",{attrs:{href:"https://github.com/uber-go/tally/blob/master/reporter.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("Reporter interface"),t("OutboundLink")],1)])])]),e._v(" "),t("li",[t("p",[e._v("Follow this example to emit "),t("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/hello/HelloMetric.java",target:"_blank",rel:"noopener noreferrer"}},[e._v("client side metrics for the Java client"),t("OutboundLink")],1),e._v(" if using the 3.x client, or "),t("a",{attrs:{href:"https://github.com/longquanzheng/cadence-java-samples-1/pull/1",target:"_blank",rel:"noopener noreferrer"}},[e._v("this example"),t("OutboundLink")],1),e._v(" if using the 2.x client.")]),e._v(" "),t("ul",[t("li",[e._v("You can use other metrics emitters like 
"),t("a",{attrs:{href:"https://github.com/uber-java/tally/tree/master/m3",target:"_blank",rel:"noopener noreferrer"}},[e._v("M3"),t("OutboundLink")],1)]),e._v(" "),t("li",[e._v("Alternatively, you can implement the tally "),t("a",{attrs:{href:"https://github.com/uber-java/tally/blob/master/core/src/main/java/com/uber/m3/tally/Scope.java",target:"_blank",rel:"noopener noreferrer"}},[e._v("Reporter interface"),t("OutboundLink")],1)])])]),e._v(" "),t("li",[t("p",[e._v("For running Cadence services in production, please follow this "),t("a",{attrs:{href:"https://github.com/banzaicloud/banzai-charts/blob/master/cadence/templates/server-service-monitor.yaml",target:"_blank",rel:"noopener noreferrer"}},[e._v("example of hemlchart"),t("OutboundLink")],1),e._v(" to emit server side metrics. Or you can follow "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/master/config/development_prometheus.yaml#L40",target:"_blank",rel:"noopener noreferrer"}},[e._v("the example of local environment"),t("OutboundLink")],1),e._v(" to Prometheus. All services need to expose a HTTP port to provide metircs like below")])])]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("metrics")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("prometheus")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("timerType")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"histogram"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("listenAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"0.0.0.0:8001"')]),e._v("\n")])])]),t("p",[e._v("The rest of the instruction uses local environment as an example.")]),e._v(" "),t("p",[e._v("For testing local server emitting metrics to Promethues, the easiest way is to use "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/master/docker/",target:"_blank",rel:"noopener noreferrer"}},[e._v("docker-compose"),t("OutboundLink")],1),e._v(" to start a local Cadence instance.")]),e._v(" "),t("p",[e._v("Make sure to update the "),t("code",[e._v("prometheus_config.yml")]),e._v(' to add "host.docker.internal:9098" to the scrape list before starting the docker-compose:')]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("global")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("scrape_interval")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" 5s\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("external_labels")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("monitor")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence-monitor'")]),e._v("\n"),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("scrape_configs")]),t("span",{pre:!0,attrs:{class:"token 
punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("job_name")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'prometheus'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("static_configs")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("targets")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# addresses to scrape")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:9090'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:8000'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:8001'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:8002'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:8003'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'host.docker.internal:9098'")]),e._v("\n")])])]),t("p",[e._v("Note: "),t("code",[e._v("host.docker.internal")]),e._v(" "),t("a",{attrs:{href:"https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds",target:"_blank",rel:"noopener noreferrer"}},[e._v("may not work for some docker versions"),t("OutboundLink")],1)]),e._v(" "),t("ul",[t("li",[t("p",[e._v("After updating the prometheus_config.yaml as above, run "),t("code",[e._v("docker-compose up")]),e._v(" to start the local Cadence instance")])]),e._v(" "),t("li",[t("p",[e._v("Go the the sample repo, build the helloworld sample "),t("code",[e._v("make helloworld")]),e._v(" and run the worker "),t("code",[e._v("./bin/helloworld -m worker")]),e._v(", and then in another Shell start a workflow "),t("code",[e._v("./bin/helloworld")])])]),e._v(" "),t("li",[t("p",[e._v("Go to your "),t("a",{attrs:{href:"http://localhost:9090/",target:"_blank",rel:"noopener noreferrer"}},[e._v("local Prometheus dashboard"),t("OutboundLink")],1),e._v(", you should be able to check the metrics emitted by handler from client/frontend/matching/history/sysWorker and confirm your services are healthy through "),t("a",{attrs:{href:"http://localhost:9090/targets",target:"_blank",rel:"noopener noreferrer"}},[e._v("targets"),t("OutboundLink")],1),e._v(" "),t("img",{attrs:{width:"1192",alt:"Screen Shot 2021-02-20 at 11 31 11 AM",src:"https://user-images.githubusercontent.com/4523955/108606555-8d0dfb80-736f-11eb-968d-7678df37455c.png"}})])]),e._v(" "),t("li",[t("p",[e._v("Go to "),t("a",{attrs:{href:"http://localhost:3000",target:"_blank",rel:"noopener noreferrer"}},[e._v("local Grafana"),t("OutboundLink")],1),e._v(" , login as "),t("code",[e._v("admin/admin")]),e._v(".")])]),e._v(" "),t("li",[t("p",[e._v("Configure Prometheus as datasource: use 
"),t("code",[e._v("http://host.docker.internal:9090")]),e._v(" as URL of prometheus.")])]),e._v(" "),t("li",[t("p",[e._v("Import the "),t("RouterLink",{attrs:{to:"/docs/operation-guide/monitor/#grafana-prometheus-dashboard-templates"}},[e._v("Grafana dashboard tempalte")]),e._v(" as JSON files.")],1)])]),e._v(" "),t("p",[e._v("Client side dashboard looks like this:\n"),t("img",{attrs:{width:"1513",alt:"Screen Shot 2021-02-20 at 12 32 23 PM",src:"https://user-images.githubusercontent.com/4523955/108607838-b7fc4d80-7377-11eb-8fd9-edc0e58afaad.png"}})]),e._v(" "),t("p",[e._v("And server basic dashboard:\n"),t("img",{attrs:{width:"1514",alt:"Screen Shot 2021-02-20 at 12 31 54 PM",src:"https://user-images.githubusercontent.com/4523955/108607843-baf73e00-7377-11eb-9759-e67a1a00f442.png"}})]),e._v(" "),t("img",{attrs:{width:"1519",alt:"Screen Shot 2021-02-20 at 11 06 54 AM",src:"https://user-images.githubusercontent.com/4523955/108606577-b169d800-736f-11eb-8fcb-88801f23b656.png"}}),e._v(" "),t("h2",{attrs:{id:"datadog-dashboard-templates"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#datadog-dashboard-templates"}},[e._v("#")]),e._v(" DataDog dashboard templates")]),e._v(" "),t("p",[e._v("This "),t("a",{attrs:{href:"https://github.com/uber/cadence-docs/tree/master/src/datadog",target:"_blank",rel:"noopener noreferrer"}},[e._v("package"),t("OutboundLink")],1),e._v(" contains examples of Cadence dashboards with DataDog.")]),e._v(" "),t("ul",[t("li",[t("p",[t("code",[e._v("Cadence-Client")]),e._v(" is the dashboard that includes all the metrics to help you understand Cadence client behavior. Most of these metrics are emitted by the client SDKs, with a few exceptions from server side (for example, workflow timeout).")])]),e._v(" "),t("li",[t("p",[t("code",[e._v("Cadence-Server")]),e._v(" is the the server dashboard that you can use to monitor and undertand the health and status of your Cadence cluster.")])])]),e._v(" "),t("p",[e._v("To use DataDog with Cadence, follow "),t("a",{attrs:{href:"https://docs.datadoghq.com/integrations/guide/prometheus-metrics/",target:"_blank",rel:"noopener noreferrer"}},[e._v("this instruction"),t("OutboundLink")],1),e._v(" to collect Prometheus metrics using DataDog agent.")]),e._v(" "),t("p",[e._v("NOTE1: don't forget to adjust "),t("code",[e._v("max_returned_metrics")]),e._v(" to a higher number(e.g. 100000). Otherwise DataDog agent won't be able to "),t("a",{attrs:{href:"https://docs.datadoghq.com/integrations/guide/prometheus-host-collection/",target:"_blank",rel:"noopener noreferrer"}},[e._v("collect all metrics(default is 2000)"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("NOTE2: the template contains templating variables "),t("code",[e._v("$App")]),e._v(" and "),t("code",[e._v("$Availability_Zone")]),e._v(". 
t("h2",{attrs:{id:"grafana-prometheus-dashboard-templates"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#grafana-prometheus-dashboard-templates"}},[e._v("#")]),e._v(" Grafana+Prometheus dashboard templates")]),e._v(" "),t("p",[e._v("This "),t("a",{attrs:{href:"https://github.com/uber/cadence-docs/tree/master/src/grafana/prometheus",target:"_blank",rel:"noopener noreferrer"}},[e._v("package"),t("OutboundLink")],1),e._v(" contains examples of Cadence dashboards with Prometheus.")]),e._v(" "),t("ul",[t("li",[t("p",[t("code",[e._v("Cadence-Client")]),e._v(" is the dashboard of client metrics, and a few server side metrics that belong to the client side but have to be emitted by the server (for example, workflow timeout).")])]),e._v(" "),t("li",[t("p",[t("code",[e._v("Cadence-Server-Basic")]),e._v(" is the basic server dashboard for monitoring the health/status of a Cadence cluster.")])]),e._v(" "),t("li",[t("p",[e._v("Apart from the basic server dashboard, it's recommended to set up dashboards for the different components of the Cadence server: Frontend, History, Matching, Worker, Persistence, Archival, etc. Any "),t("a",{attrs:{href:"https://github.com/uber/cadence-docs",target:"_blank",rel:"noopener noreferrer"}},[e._v("contribution"),t("OutboundLink")],1),e._v(" to enrich the existing templates or add new ones is always welcome!")])])]),e._v(" "),t("h2",{attrs:{id:"periodic-tests-canary-for-health-check"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#periodic-tests-canary-for-health-check"}},[e._v("#")]),e._v(" Periodic tests (Canary) for health check")]),e._v(" "),t("p",[e._v("It's recommended that you run periodic tests to get signals on the health of your cluster. Please follow the instructions in "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/canary",target:"_blank",rel:"noopener noreferrer"}},[e._v("our canary package"),t("OutboundLink")],1),e._v(" to set these tests up.")]),e._v(" "),t("h2",{attrs:{id:"cadence-frontend-monitoring"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-frontend-monitoring"}},[e._v("#")]),e._v(" Cadence Frontend Monitoring")]),e._v(" "),t("p",[e._v("This section describes recommended dashboards for monitoring Cadence services in your cluster. The structure mostly follows the DataDog dashboard template listed above.")]),e._v(" "),t("h3",{attrs:{id:"service-availability-server-metrics"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#service-availability-server-metrics"}},[e._v("#")]),e._v(" Service Availability (server metrics)")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: the availability of the Cadence server using server metrics.")]),e._v(" "),t("li",[e._v("Suggested monitor: below 95% for > 5 min triggers an alert; below 99% for > 5 min triggers a warning")]),e._v(" "),t("li",[e._v("Monitor action: When fired, check if there are any persistence errors. If so, check the health of the database (it may need to be restarted or scaled up). 
### StartWorkflow Per Second

- Meaning: how many workflows are started per second. This helps determine if your server is overloaded.
- Suggested monitor: this is a business metric; no monitoring required.
- Datadog query example

```
sum:cadence_frontend.cadence_requests{(operation IN (startworkflowexecution,signalwithstartworkflowexecution))} by {operation}.as_rate()
```

### Activities Started Per Second

- Meaning: how many activities are started per second. Helps determine if the server is overloaded.
- Suggested monitor: this is a business metric; no monitoring required.
- Datadog query example

```
sum:cadence_frontend.cadence_requests{operation:pollforactivitytask} by {operation}.as_rate()
```

### Decisions Started Per Second

- Meaning: how many workflow decisions are started per second. Helps determine if the server is overloaded.
- Suggested monitor: this is a business metric; no monitoring required.
- Datadog query example

```
sum:cadence_frontend.cadence_requests{operation:pollfordecisiontask} by {operation}.as_rate()
```

### Periodic Test Suite Success (aka Canary)

- Meaning: the success counter of the canary test suite.
- Suggested monitor: monitor needed. If fired, look at the failed canary test case and investigate the reason for the failure.
- Datadog query example

```
sum:cadence_history.workflow_success{workflowtype:workflow_sanity} by {workflowtype}.as_count()
```
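If your canary runs on a fixed schedule, one possible monitor shape is to alert when no successful sanity run is recorded within a window. The 30-minute window below is an assumption to adjust to your canary's cadence:

```
sum(last_30m):sum:cadence_history.workflow_success{workflowtype:workflow_sanity}.as_count() < 1
```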
### Frontend all API per second

- Meaning: all frontend API calls per second. Information only.
- Suggested monitor: this is a business metric; no monitoring required.
- Datadog query example

```
sum:cadence_frontend.cadence_requests{*}.as_rate()
```

### Frontend API per second (breakdown per operation)

- Meaning: frontend API calls per second, per operation. Information only.
- Suggested monitor: this is a business metric; no monitoring required.
- Datadog query example

```
sum:cadence_frontend.cadence_requests{*} by {operation}.as_rate()
```

### Frontend API errors per second (breakdown per operation)

- Meaning: frontend API errors per second, per operation. Information only.
- Suggested monitor: this is to facilitate investigation; no monitoring required.
- Datadog query example

```
sum:cadence_frontend.cadence_errors{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_bad_request{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_domain_not_active{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_service_busy{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_entity_not_exists{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_execution_already_started{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_domain_already_exists{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_query_failed{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_limit_exceeded{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_context_timeout{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_retry_task{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_bad_binary{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_client_version_not_supported{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_incomplete_history{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_nondeterministic{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_unauthorized{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_authorize_failed{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate()
```

- `cadence_errors` counts internal service errors.
- Any `cadence_errors_*` metric counts a type of client-side error.
### Frontend Regular API Latency

- Meaning: the latency of the regular core APIs -- excluding the long-poll/QueryWorkflow/GetHistory/ListWorkflow/CountWorkflow APIs.
- Suggested monitor: p95 latency over 1.5 seconds for any operation triggers a warning; over 2 seconds triggers an alert.
- Monitor action: if fired, investigate the database read/write latency. You may need to throttle some spiky traffic from certain domains, or scale up the database.
- Datadog query example

```
avg:cadence_frontend.cadence_latency.quantile{(operation NOT IN (pollfordecisiontask,pollforactivitytask,getworkflowexecutionhistory,queryworkflow,listworkflowexecutions,listclosedworkflowexecutions,listopenworkflowexecutions)) AND $pXXLatency} by {operation}
```

### Frontend ListWorkflow API Latency

- Meaning: the latency of the ListWorkflow APIs.
- Suggested monitor: p95 latency over 2 seconds for any operation triggers a warning; over 3 seconds triggers an alert.
- Monitor action: if fired, investigate the ElasticSearch read latency. You may need to throttle some spiky traffic from certain domains, or scale up the ElasticSearch cluster.
- Datadog query example

```
avg:cadence_frontend.cadence_latency.quantile{(operation IN (listclosedworkflowexecutions,listopenworkflowexecutions,listworkflowexecutions,countworkflowexecutions)) AND $pXXLatency} by {operation}
```

### Frontend Long Poll API Latency

- Meaning: a long poll means that the worker is waiting for a task, so this latency is an indicator of how busy the workers are. PollForActivityTask and PollForDecisionTask are the long poll requests. The API call times out at 50 seconds if no task can be picked up. A very low latency could mean that more workers need to be added.
- Suggested monitor: no monitor needed, as long latency is expected.
- Datadog query example

```
avg:cadence_frontend.cadence_latency.quantile{$pXXLatency,operation:pollforactivitytask} by {operation}
avg:cadence_frontend.cadence_latency.quantile{$pXXLatency,operation:pollfordecisiontask} by {operation}
```
### Frontend Get History/Query Workflow API Latency

- Meaning: the GetHistory API acts like a long poll API, but there's no explicit timeout. Long-polling GetHistory is used when a WorkflowClient is waiting for the result of a workflow (essentially, the WorkflowExecutionCompletedEvent), so this latency depends on the time it takes for the workflow to complete. QueryWorkflow API latency is also unpredictable, as it depends on the availability and performance of the workflow workers, which are owned by the application, and on the workflow implementation (it may require replaying history).
- Suggested monitor: no monitor needed.
- Datadog query example

```
avg:cadence_frontend.cadence_latency.quantile{(operation IN (getworkflowexecutionhistory,queryworkflow)) AND $pXXLatency} by {operation}
```

### Frontend WorkflowClient API per second by domain

- Meaning: shows which domains are making the most requests using WorkflowClient (excluding worker APIs like PollForDecisionTask and RespondDecisionTaskCompleted). Used for troubleshooting. In the future it can be used to set some rate limiting per domain.
- Suggested monitor: no monitor needed.
- Datadog query example

```
sum:cadence_frontend.cadence_requests{(operation IN (signalwithstartworkflowexecution,signalworkflowexecution,startworkflowexecution,terminateworkflowexecution,resetworkflowexecution,requestcancelworkflowexecution,listworkflowexecutions))} by {domain,operation}.as_rate()
```

## Cadence Application Monitoring

This section describes the recommended dashboards for monitoring a Cadence application using metrics emitted by the SDKs. See the `setup` section for how to collect those metrics.
See the "),t("code",[e._v("setup")]),e._v(" section about how to collect those metrics.")]),e._v(" "),t("h3",{attrs:{id:"workflow-start-and-successful-completion"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#workflow-start-and-successful-completion"}},[e._v("#")]),e._v(" Workflow Start and Successful completion")]),e._v(" "),t("ul",[t("li",[e._v("Workflow successfully started/signalWithStart and completed/canceled/continuedAsNew")]),e._v(" "),t("li",[e._v("Monitor: not recommended")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_workflow_start{$Domain,$Tasklist,$WorkflowType} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_completed{$Domain,$Tasklist,$WorkflowType} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_canceled{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_continue_as_new{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_signal_with_start{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()\n")])])]),t("h3",{attrs:{id:"workflow-failure"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#workflow-failure"}},[e._v("#")]),e._v(" Workflow Failure")]),e._v(" "),t("ul",[t("li",[e._v("Metrics for all types of failures, including workflow failures(throw uncaught exceptions), workflow timeout and termination.")]),e._v(" "),t("li",[e._v("For timeout and termination, workflow worker doesn’t have a chance to emit metrics when it’s terminate, so the metric comes from the history service")]),e._v(" "),t("li",[e._v("Monitor: application should set monitor on timeout and failure to make sure workflow are not failing. Cancel/terminate are usually triggered by human intentionally.")]),e._v(" "),t("li",[e._v("When the metrics fire, go to Cadence UI to find the failed workflows and investigate the workflow history to understand the type of failure")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_workflow_failed{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env}.as_count()\nsum:cadence_history.workflow_failed{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_terminate{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_timeout{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()\n")])])]),t("h3",{attrs:{id:"decision-poll-counters"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#decision-poll-counters"}},[e._v("#")]),e._v(" Decision Poll Counters")]),e._v(" "),t("ul",[t("li",[e._v("Indicates if the workflow worker is available and is polling tasks. If the worker is not available no counters will show.\nCan also check if the worker is using the right task list.\n“No task” poll type means that the worker exists and is idle.\nThe timeout for this long poll api is 50 seconds. 
### Decision Poll Counters

- Meaning: indicates whether the workflow worker is available and is polling for tasks. If the worker is not available, no counters will show. This can also be used to check that the worker is using the right task list. A "no task" poll means that the worker exists and is idle. The timeout for this long poll API is 50 seconds; if no task is received within 50 seconds, an empty response is returned and another long poll request is sent.
- Monitor: the application should set a monitor on this to make sure workers are available (see the sketch below).
- When it fires, investigate the worker deployment to see why the workers are not available, and check whether they are using the right domain/tasklist.
- Datadog query example

```
sum:cadence_client.cadence_decision_poll_total{$Domain,$Tasklist}.as_count()
sum:cadence_client.cadence_decision_poll_failed{$Domain,$Tasklist}.as_count()
sum:cadence_client.cadence_decision_poll_no_task{$Domain,$Tasklist}.as_count()
sum:cadence_client.cadence_decision_poll_succeed{$Domain,$Tasklist}.as_count()
```
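A worker-availability monitor can alert when polls stop arriving entirely. In this sketch, `your-domain` and `your-tasklist` are hypothetical placeholders and the window is an assumption:

```
sum(last_10m):sum:cadence_client.cadence_decision_poll_total{domain:your-domain,tasklist:your-tasklist}.as_count() < 1
```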
### DecisionTasks Scheduled per second

- Meaning: indicates how many decision tasks are scheduled.
- Monitor: not recommended -- information only, to know whether or not a tasklist is overloaded.
- Datadog query example

```
sum:cadence_matching.cadence_requests_per_tl{*,operation:adddecisiontask,$Tasklist,$Domain} by {tasklist,domain}.as_rate()
```

### Decision Scheduled To Start Latency

- Meaning: if this latency is too high, then either the worker is not available or is too busy after the task has been scheduled, or the task list is overloaded (confirm with the "DecisionTasks Scheduled per second" widget). By default a task list has only one partition, and a partition can only be owned by one host, so the throughput of a task list is limited. More task lists can be added to scale out, or a scalable task list can be used to add more partitions.
- Monitor: the application can set a monitor on this to make sure the latency is tolerable (see the sketch below).
- When fired, check whether the worker capacity is sufficient, then check whether the tasklist is overloaded. If needed, contact the Cadence cluster admin to enable the scalable tasklist feature to add more partitions to the tasklist.
- Datadog query example

```
avg:cadence_client.cadence_decision_scheduled_to_start_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist}
max:cadence_client.cadence_decision_scheduled_to_start_latency.max{$Domain,$Tasklist} by {env,domain,tasklist}
max:cadence_client.cadence_decision_scheduled_to_start_latency.95percentile{$Domain,$Tasklist} by {env,domain,tasklist}
```
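A possible latency monitor sketch; `your-domain`/`your-tasklist` are placeholders, and the threshold must be expressed in whatever unit your pipeline reports for this timer, so calibrate it against the dashboard first:

```
avg(last_10m):max:cadence_client.cadence_decision_scheduled_to_start_latency.95percentile{domain:your-domain,tasklist:your-tasklist} > 10
```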
### Decision Execution Failure

- Meaning: this usually indicates a critical bug in the workflow code that causes decision task executions to fail.
- Monitor: the application should set a monitor on this to make sure there are no consistent failures.
- When fired, you may need to terminate the problematic workflows to mitigate the issue. After you identify the bug, you can fix the code and then reset the workflows to recover.
- Datadog query example

```
sum:cadence_client.cadence_decision_execution_failed{$Domain,$Tasklist} by {tasklist,workflowtype}.as_count()
```

### Decision Execution Timeout

- Meaning: this usually indicates a critical bug in the workflow code that causes decision task executions to time out.
- Monitor: the application should set a monitor on this to make sure there are no consistent timeouts.
- When fired, you may need to terminate the problematic workflows to mitigate the issue. After you identify the bug, you can fix the code and then reset the workflows to recover.
- Datadog query example

```
sum:cadence_history.start_to_close_timeout{operation:timeractivetaskdecision*,$Domain}.as_count()
```

### Workflow End to End Latency

- Meaning: this is for the client application to track its SLOs. For example, if you expect a workflow to complete within duration d, you can use this latency to set a monitor.
- Monitor: the application can monitor this metric if it expects workflows to complete within a certain duration.
- When fired, investigate the workflow history to see why the workflow takes longer than expected to complete.
- Datadog query example

```
avg:cadence_client.cadence_workflow_endtoend_latency.median{$Domain,$Tasklist,$WorkflowType} by {env,domain,tasklist,workflowtype}
avg:cadence_client.cadence_workflow_endtoend_latency.95percentile{$Domain,$Tasklist,$WorkflowType} by {env,domain,tasklist,workflowtype}
```

### Workflow Panic and NonDeterministicError

- Meaning: these errors mean that there is a bug in the code and the deployment should be rolled back.
- A monitor should be set on this metric (see the sketch below).
- When fired, you may roll back the deployment to mitigate the issue. Usually this is caused by a bad (non-backward-compatible) code change. After the rollback, look at your worker error logs to find the bug.
- Datadog query example

```
sum:cadence_client.cadence_worker_panic{$Domain} by {env,domain}.as_rate()
sum:cadence_client.cadence_non_deterministic_error{$Domain} by {env,domain}.as_rate()
```
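Since any occurrence of these errors is actionable, a zero-tolerance monitor sketch can work here (again with a hypothetical `your-domain` placeholder):

```
sum(last_5m):sum:cadence_client.cadence_worker_panic{domain:your-domain}.as_count() + sum:cadence_client.cadence_non_deterministic_error{domain:your-domain}.as_count() > 0
```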
### Workflow Sticky Cache Hit Rate and Miss Count

- Meaning: this metric can be used for performance optimization. It can be improved by adding more worker instances, or by adjusting the workerOptions (Go SDK) or WorkerFactoryOptions (Java SDK). A cache hit rate that is too low means workers have to replay history to rebuild the workflow stack when executing a decision task. Depending on the history size:
  - If less than 1MB, a hit rate lower than 50% is okay.
  - If greater than 1MB, the hit rate should be greater than 50%.
  - If greater than 5MB, the hit rate should be greater than 60%.
  - If greater than 10MB, the hit rate should be greater than 70%.
  - If greater than 20MB, the hit rate should be greater than 80%.
  - If greater than 30MB, the hit rate should be greater than 90%.
  - Workflow history size should never be greater than 50MB.
- A monitor can be set on this metric if performance is important.
- When fired, adjust the stickyCacheSize in the WorkerFactoryOptions, or add more workers.
- Datadog query example

```
sum:cadence_client.cadence_sticky_cache_miss{$Domain} by {env,domain}.as_count()
sum:cadence_client.cadence_sticky_cache_hit{$Domain} by {env,domain}.as_count()
(b / (a+b)) * 100
```

### Activity Task Operations

- Meaning: activity started/completed counters.
- Monitor: not recommended.
- Datadog query example

```
sum:cadence_client.cadence_activity_task_failed{$Domain,$Tasklist} by {activitytype}.as_rate()
sum:cadence_client.cadence_activity_task_completed{$Domain,$Tasklist} by {activitytype}.as_rate()
sum:cadence_client.cadence_activity_task_timeouted{$Domain,$Tasklist} by {activitytype}.as_rate()
```

### Local Activity Task Operations

- Meaning: local activity execution counters.
- Monitor: not recommended.
- Datadog query example

```
sum:cadence_client.cadence_local_activity_total{$Domain,$Tasklist} by {activitytype}.as_count()
```

### Activity Execution Latency

- Meaning: if an activity is expected to take a certain amount of time to complete, a monitor on this metric can help enforce that expectation.
- Monitor: the application can set a monitor on this if it expects activities to start/complete within a certain latency.
- When fired, investigate the activity code and its dependencies.
- Datadog query example

```
avg:cadence_client.cadence_activity_execution_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}
max:cadence_client.cadence_activity_execution_latency.max{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}
```
### Activity Poll Counters

- Meaning: indicates whether the activity worker is available and is polling for tasks. If the worker is not available, no counters will show. This can also be used to check that the worker is using the right task list. A "no task" poll means that the worker exists and is idle. The timeout for this long poll API is 50 seconds; if no task is received within 50 seconds, an empty response is returned and another long poll request is sent.
- Monitor: the application can set a monitor on this to make sure activity workers are available.
- When it fires, investigate the worker deployment to see why the workers are not available, and check whether they are using the right domain/tasklist.
- Datadog query example

```
sum:cadence_client.cadence_activity_poll_total{$Domain,$Tasklist} by {activitytype}.as_count()
sum:cadence_client.cadence_activity_poll_failed{$Domain,$Tasklist} by {activitytype}.as_count()
sum:cadence_client.cadence_activity_poll_succeed{$Domain,$Tasklist} by {activitytype}.as_count()
sum:cadence_client.cadence_activity_poll_no_task{$Domain,$Tasklist} by {activitytype}.as_count()
```

### ActivityTasks Scheduled per second

- Meaning: indicates how many activity tasks are scheduled.
- Monitor: not recommended -- information only, to know whether or not a tasklist is overloaded.
- Datadog query example

```
sum:cadence_matching.cadence_requests_per_tl{*,operation:addactivitytask,$Tasklist,$Domain} by {tasklist,domain}.as_rate()
```

### Activity Scheduled To Start Latency

- Meaning: if the latency is too high, then either the worker is not available or is too busy, or too many activities are scheduled into the same tasklist and the tasklist is not scalable. Same as Decision Scheduled To Start Latency.
- Monitor: the application should set a monitor on this.
- When fired, check whether there are enough workers, then check whether the tasklist is overloaded. If needed, contact the Cadence cluster admin to enable the scalable tasklist feature to add more partitions to the tasklist.
- Datadog query example

```
avg:cadence_client.cadence_activity_scheduled_to_start_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}
max:cadence_client.cadence_activity_scheduled_to_start_latency.max{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}
max:cadence_client.cadence_activity_scheduled_to_start_latency.95percentile{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}
```
### Activity Failure

- Meaning: a monitor on this metric alerts the team that activities are failing. The activity timeout metrics are emitted by the history service, because a timeout causes a hard stop and the client doesn't have a chance to emit metrics.
- Monitor: the application can set a monitor on this (see the sketch below).
- When fired, investigate the activity code and its dependencies.
- `cadence_activity_execution_failed` vs `cadence_activity_task_failed`: they only differ when a RetryPolicy is used. The `cadence_activity_task_failed` counter increases per activity attempt, while the `cadence_activity_execution_failed` counter increases only when the activity fails after all attempts.
- You should only monitor `cadence_activity_execution_failed`.
- Datadog query example

```
sum:cadence_client.cadence_activity_execution_failed{$Domain} by {domain,env}.as_rate()
sum:cadence_client.cadence_activity_task_panic{$Domain} by {domain,env}.as_count()
sum:cadence_client.cadence_activity_task_failed{$Domain} by {domain,env}.as_rate()
sum:cadence_client.cadence_activity_task_canceled{$Domain} by {domain,env}.as_count()
sum:cadence_history.heartbeat_timeout{$Domain} by {domain,env}.as_count()
sum:cadence_history.schedule_to_start_timeout{$Domain} by {domain,env}.as_rate()
sum:cadence_history.start_to_close_timeout{$Domain} by {domain,env}.as_rate()
sum:cadence_history.schedule_to_close_timeout{$Domain} by {domain,env}.as_count()
```
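Following the advice above to monitor only final failures, a monitor sketch might look like this; `your-domain` is a placeholder and the 0.1 failures/second threshold is an arbitrary example to tune for your traffic:

```
avg(last_10m):sum:cadence_client.cadence_activity_execution_failed{domain:your-domain}.as_rate() > 0.1
```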
### Service API success rate

- Meaning: the client's experience of the service's availability. It encompasses many APIs. Things that can affect the service's API success rate:
  - Service availability
  - Network issues
  - A required API being unavailable
  - Client-side errors like EntityNotExists, WorkflowAlreadyStarted, etc., which indicate potential bugs in how the application calls the Cadence service
- Monitor: the application can set a monitor on this.
- When fired, check the application logs to see whether the errors are Cadence server errors or client-side errors. Errors like EntityNotExists/ExecutionAlreadyStarted/QueryWorkflowFailed/etc. are client-side errors, meaning that the application is misusing the APIs. If most errors are server-side errors (internalServiceError), you can contact the Cadence admin.
- Datadog query example

```
sum:cadence_client.cadence_error{*} by {domain}.as_count()
sum:cadence_client.cadence_request{*} by {domain}.as_count()
(1 - a / b) * 100
```

### Service API Latency

- Meaning: the latency of the APIs, excluding the long poll APIs.
- Monitor: the application can set a monitor on certain APIs, if necessary.
- Datadog query example

```
avg:cadence_client.cadence_latency.95percentile{$Domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}
```

### Service API Breakdown

- Meaning: a counter breakdown by API, to help investigate availability.
- Monitor: no monitor needed.
- Datadog query example

```
sum:cadence_client.cadence_request{$Domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}.as_count()
```

### Service API Error Breakdown

- Meaning: a counter breakdown by API error, to help investigate availability.
- Monitor: no monitor needed.
- Datadog query example

```
sum:cadence_client.cadence_error{$Domain} by {cadence_metric_scope}.as_count()
```

### Max Event Blob Size

- Meaning: the size of a single history event. This applies to any event input, like a start workflow event, start activity event, or signal event. By default the max size is 2MB; if an input is greater than the max size, the server will reject the request. It should never be greater than 2MB.
- A monitor should be set on this metric (see the sketch below).
- When fired, please review the design/code ASAP to reduce the blob size. Reducing the input/output of workflow/activity/signal will help.
- Datadog query example

```
max:cadence_history.event_blob_size.quantile{!domain:all,$Domain} by {domain}
```
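A blob-size monitor sketch that warns well before the 2MB limit. It assumes the metric is reported in bytes and uses roughly 1MB as the warning threshold; verify both assumptions against your data before relying on it:

```
avg(last_15m):max:cadence_history.event_blob_size.quantile{!domain:all} by {domain} > 1000000
```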
### Max History Size

- Meaning: workflow history cannot grow indefinitely, as that causes replay issues. If a workflow exceeds the maximum history size, it will be terminated automatically. The max size by default is 200 megabytes. As a workflow design guideline, workflow history should never grow greater than 50MB; use continueAsNew to break long workflows into multiple runs.
- A monitor should be set on this metric.
- When fired, please review the design/code ASAP to reduce the history size. Reducing the input/output of workflow/activity/signal will help. You may also need to use ContinueAsNew to break a single execution into smaller pieces.
- Datadog query example

```
max:cadence_history.history_size.quantile{!domain:all,$Domain} by {domain}
```

### Max History Length

- Meaning: the number of events in a workflow history. It should never be greater than 50K (a workflow exceeding 200K events will be terminated by the server). Use continueAsNew to break long workflows into multiple runs.
- A monitor should be set on this metric.
- When fired, please review the design/code ASAP to reduce the history length. You may need to use ContinueAsNew to break a single execution into smaller pieces.
- Datadog query example

```
max:cadence_history.history_count.quantile{!domain:all,$Domain} by {domain}
```

## Cadence History Service Monitoring

History is the most critical/core service of Cadence; it implements the workflow logic.

### History shard movements

- Meaning: shard movements should only happen during deployments or when a node restarts. Shard movement without a deployment is unexpected and probably indicates a performance issue. Shard ownership is assigned to a particular history host, so while a shard is moving, the frontend service has a hard time routing requests to that history shard.
- A monitor can be set to alert on shard movements without a deployment.
- Datadog query example

```
sum:cadence_history.membership_changed_count{operation:shardcontroller}
sum:cadence_history.shard_closed_count{operation:shardcontroller}
sum:cadence_history.sharditem_created_count{operation:shardcontroller}
sum:cadence_history.sharditem_removed_count{operation:shardcontroller}
```
### Transfer Tasks Per Second

- Meaning: a TransferTask is an internal background task that moves workflow state and transfers an action task from the history engine to another service (e.g. the matching service, ElasticSearch, etc.).
- Monitor: no monitor needed.
- Datadog query example

```
sum:cadence_history.task_requests{operation:transferactivetask*} by {operation}.as_rate()
```

### Timer Tasks Per Second

- Meaning: timer tasks are tasks scheduled to be triggered at a given time in the future. For example, workflow.sleep() waits a given amount of time, after which the task is pushed out for a worker to pick up.
- Datadog query example

```
sum:cadence_history.task_requests{operation:timeractivetask*} by {operation}.as_rate()
```

### Transfer Tasks Per Domain

- Meaning: count breakdown by domain.
- Datadog query example

```
sum:cadence_history.task_requests_per_domain{operation:transferactive*} by {domain}.as_count()
```

### Timer Tasks Per Domain

- Meaning: count breakdown by domain.
- Datadog query example

```
sum:cadence_history.task_requests_per_domain{operation:timeractive*} by {domain}.as_count()
```

### Transfer Task Latency by Type

- Meaning: if the latency is too high, it is an issue for workflows. For example, if transfer task latency is 5 seconds, it takes 5 seconds for an activity/decision worker to actually receive the task.
- Monitor: monitors should be set on the different types of latency. Note that `queue_latency` can go very high during deployment and that's expected; see the NOTE below for an explanation.
- When fired, check whether it's due to a persistence issue. If so, investigate the database (it may need to be scaled up). If not, see whether the Cadence deployment (K8s instances) needs to be scaled up.
- Datadog query example

```
avg:cadence_history.task_latency.quantile{$pXXLatency,operation:transfer*} by {operation}
avg:cadence_history.task_latency_processing.quantile{$pXXLatency,operation:transfer*} by {operation}
avg:cadence_history.task_latency_queue.quantile{$pXXLatency,operation:transfer*} by {operation}
```
### Timer Task Latency by Type

- Meaning: if the latency is too high, it is an issue for workflows. For example, if you set workflow.sleep() for 10 seconds and the timer latency is 5 seconds, the workflow will sleep for 15 seconds.
- Monitor: monitors should be set on the different types of latency.
- When fired, check whether it's due to a persistence issue. If so, investigate the database (it may need to be scaled up) [most likely]. If not, see whether the Cadence deployment (K8s instances) needs to be scaled up.
- Datadog query example

```
avg:cadence_history.task_latency.quantile{$pXXLatency,operation:timer*} by {operation}
avg:cadence_history.task_latency_processing.quantile{$pXXLatency,operation:timer*} by {operation}
avg:cadence_history.task_latency_queue.quantile{$pXXLatency,operation:timer*} by {operation}
```

### NOTE: Task Queue Latency vs Executing Latency vs Processing Latency In Transfer & Timer Task Latency Metrics

- `task_latency_queue`: "queue latency" is the end-to-end latency for users. It can go up to several minutes during deployment because of metrics being re-emitted (the actual latency is not that high).
- `task_latency`: "executing latency" is the time from submission to the executing pool until completion. It includes the scheduling, retry, and processing time of the task.
- `task_latency_processing`: "processing latency" is the processing time of a single attempt of the task (without retry).
### Transfer Task Latency Per Domain

- Meaning: latency breakdown by domain.
- Monitor: no monitor needed.
- Datadog query example: modify the queries above to group by the domain tag (see the example below).

### Timer Task Latency Per Domain

- Meaning: latency breakdown by domain.
- Monitor: no monitor needed.
- Datadog query example: modify the queries above to group by the domain tag (see the example below).
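For instance, the per-domain variants of the task latency queries could look like the following, simply regrouping the documented metrics by the `domain` tag:

```
avg:cadence_history.task_latency.quantile{$pXXLatency,operation:transfer*} by {domain,operation}
avg:cadence_history.task_latency.quantile{$pXXLatency,operation:timer*} by {domain,operation}
```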
### History API per Second

- Meaning: information about the history APIs.
- Datadog query example

```
sum:cadence_history.cadence_requests{*} by {operation}.as_rate()
```

### History API Errors per Second

- Meaning: information about history API errors.
- Monitor: no monitor needed.
- Datadog query example

```
sum:cadence_history.cadence_errors{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_bad_request{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_domain_not_active{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_service_busy{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_entity_not_exists{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_execution_already_started{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_domain_already_exists{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_query_failed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_limit_exceeded{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_context_timeout{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_retry_task{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_bad_binary{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_client_version_not_supported{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_incomplete_history{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_nondeterministic{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_unauthorized{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_authorize_failed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate()
```

- `cadence_errors` counts internal service errors.
- Any `cadence_errors_*` metric counts a type of client-side error.

### Max History Size

The history size of a workflow cannot be too large, otherwise it will cause performance issues during replay. The soft limit is 200MB; a workflow exceeding it will be terminated by the server.

- No monitor needed.
- Datadog query: same as in the client section.

### Max History Length

Similarly, the history length of a workflow cannot be too large, otherwise it will cause performance issues during replay. The soft limit is 200K events; a workflow exceeding it will be terminated by the server.

- No monitor needed.
- Datadog query: same as in the client section.

### Max Event Blob Size

- Meaning: the size of each event (e.g. determined by the input/output of workflow/activity/signal/childWorkflow/etc.) cannot be too large, otherwise it will also cause performance issues. The soft limit is 2MB; requests exceeding it will be rejected by the server, meaning that the workflow won't be able to make any progress.
- No monitor needed.
- Datadog query: same as in the client section.
## Cadence Matching Service Monitoring

The matching service matches/assigns tasks from the Cadence service to workers; it receives the tasks from the history service. If workers are actively polling, a task is matched immediately; this is called "sync match". If no workers are available, matching persists the task into the database and then reloads it when workers are back (called "async match").

### Matching APIs per Second

- Meaning: APIs processed by the matching service per second.
- Monitor: no monitor needed.
- Datadog query example

```
sum:cadence_matching.cadence_requests{*} by {operation}.as_rate()
```

### Matching API Errors per Second

- Meaning: API errors from the matching service per second.
- Monitor: no monitor needed.
- Datadog query example

```
sum:cadence_matching.cadence_errors_per_tl{*} by {operation,domain,tasklist}.as_rate()
sum:cadence_matching.cadence_errors_bad_request_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_request{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_not_active_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_not_active{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_service_busy_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_service_busy{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_entity_not_exists_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_entity_not_exists{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_execution_already_started_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_execution_already_started{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_already_exists_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_already_exists{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_cancellation_already_requested_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_cancellation_already_requested{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_query_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_query_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_limit_exceeded_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_limit_exceeded{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_context_timeout_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_context_timeout{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_retry_task_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_retry_task{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_binary_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_binary{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_client_version_not_supported_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_client_version_not_supported{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_incomplete_history_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_incomplete_history{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_nondeterministic_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_nondeterministic{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_unauthorized_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_unauthorized{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_authorize_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_authorize_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_remote_syncmatch_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_remote_syncmatch_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_shard_ownership_lost{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_event_already_started{*} by {operation,domain,tasklist}
```
{operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_client_version_not_supported{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_incomplete_history_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_incomplete_history{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_nondeterministic_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_nondeterministic{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_unauthorized_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_unauthorized{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_authorize_failed_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_authorize_failed{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_remote_syncmatch_failed_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_remote_syncmatch_failed{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_shard_ownership_lost{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_event_already_started{*} by {operation,domain,tasklist}\n")])])]),t("ul",[t("li",[t("code",[e._v("cadence_errors")]),e._v(" is for internal service errors.")]),e._v(" "),t("li",[e._v("any "),t("code",[e._v("cadence_errors_*")]),e._v(" is for client-side errors")])]),e._v(" "),t("h3",{attrs:{id:"matching-regular-api-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#matching-regular-api-latency"}},[e._v("#")]),e._v(" Matching Regular API Latency")]),e._v(" "),t("ul",[t("li",[e._v("Regular APIs are the APIs excluding long polls")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_matching.cadence_latency_per_tl.quantile{$pXXLatency,!operation:pollfor*,!operation:queryworkflow} by {operation,tasklist}\n")])])]),t("h3",{attrs:{id:"sync-match-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#sync-match-latency"}},[e._v("#")]),e._v(" Sync Match Latency")]),e._v(" "),t("ul",[t("li",[e._v("If the latency is too high, the tasklist is probably overloaded. Consider using multiple tasklists, or enable the scalable tasklist feature by adding more partitions to the tasklist (the default is one).\nTo confirm whether too many tasks are being added to the tasklist, use “AddTasks per second - domain, tasklist breakdown”")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_matching.syncmatch_latency_per_tl.quantile{$pXXLatency} by {operation,tasklist,domain}\n")])])]),t("h3",{attrs:{id:"async-match-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#async-match-latency"}},[e._v("#")]),e._v(" Async match Latency")]),e._v(" "),t("ul",[t("li",[e._v("If a match is done asynchronously, the task is written to the database to be used later. This measures the time when the worker is not actively looking for tasks. 
If this is high, more workers are needed.")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_matching.asyncmatch_latency_per_tl.quantile{$pXXLatency} by {operation,tasklist,domain}\n")])])]),t("h2",{attrs:{id:"cadence-default-persistence-monitoring"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-default-persistence-monitoring"}},[e._v("#")]),e._v(" Cadence Default Persistence Monitoring")]),e._v(" "),t("p",[e._v("The following monitors should be set up for Cadence persistence.")]),e._v(" "),t("h3",{attrs:{id:"persistence-availability"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-availability"}},[e._v("#")]),e._v(" Persistence Availability")]),e._v(" "),t("ul",[t("li",[e._v("The availability of the primary database for your Cadence server")]),e._v(" "),t("li",[e._v("Monitor required: below 95% for more than 5 minutes triggers an alert; below 99% triggers a Slack warning")]),e._v(" "),t("li",[e._v("When fired, check whether it’s due to a persistence issue.\nIf so, investigate the database (it may need to be scaled up) [most likely].\nIf not, check whether the Cadence deployment (e.g. K8s instances) needs to be scaled up")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_requests{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors{*} by {operation}.as_count()\nsum:cadence_matching.persistence_requests{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors{*} by {operation}.as_count()\nsum:cadence_history.persistence_requests{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors{*} by {operation}.as_count()\nsum:cadence_worker.persistence_requests{*} by {operation}.as_count()\n(1 - a / b) * 100\n(1 - c / d) * 100\n(1 - e / f) * 100\n(1 - g / h) * 100\n")])])]),
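t("p",[e._v("A minimal sketch of the suggested availability monitor as a Datadog monitor query, shown for the frontend service only (the same pattern applies to matching, history and worker); the max() time aggregation makes it fire only when availability stays below 95% for the whole 5-minute window, and the exact syntax may need adjusting to your Datadog setup:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("max(last_5m):(1 - sum:cadence_frontend.persistence_errors{*}.as_count() / sum:cadence_frontend.persistence_requests{*}.as_count()) * 100 < 95\n")])])]),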
{operation}.as_rate()\n\n")])])]),t("h3",{attrs:{id:"persistence-by-operation-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-by-operation-latency"}},[e._v("#")]),e._v(" Persistence By Operation Latency")]),e._v(" "),t("ul",[t("li",[e._v("Monitor required, alert if 95% of all operation latency is greater than 1 second for 5mins, warning if greater than 0.5 seconds")]),e._v(" "),t("li",[e._v("When fired, investigate the database(may need to scale up) [Mostly]\nIf there’s a high latency, then there could be errors or something wrong with the db")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_matching.persistence_latency.quantile{$pXXLatency} by {operation}\navg:cadence_worker.persistence_latency.quantile{$pXXLatency} by {operation}\navg:cadence_frontend.persistence_latency.quantile{$pXXLatency} by {operation}\navg:cadence_history.persistence_latency.quantile{$pXXLatency} by {operation}\n")])])]),t("h3",{attrs:{id:"persistence-error-by-operation-count"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-error-by-operation-count"}},[e._v("#")]),e._v(" Persistence Error By Operation Count")]),e._v(" "),t("ul",[t("li",[e._v("It's to help investigate availability issue")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors{*} by {operation}.as_count()\n\nsum:cadence_frontend.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_history.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_domain_already_exists{*} by 
t("h3",{attrs:{id:"persistence-error-by-operation-count"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-error-by-operation-count"}},[e._v("#")]),e._v(" Persistence Error By Operation Count")]),e._v(" "),t("ul",[t("li",[e._v("This helps investigate availability issues")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors{*} by {operation}.as_count()\n\nsum:cadence_frontend.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_history.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_matching.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_worker.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_bad_request{*} by {operation}.as_count()\n\n")])])]),t("ul",[t("li",[t("code",[e._v("cadence_errors")]),e._v(" is for internal service errors.")]),e._v(" "),t("li",[e._v("any "),t("code",[e._v("cadence_errors_*")]),e._v(" is for client-side errors")])]),e._v(" "),t("h2",{attrs:{id:"cadence-advanced-visibility-persistence-monitoring-if-applicable"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-advanced-visibility-persistence-monitoring-if-applicable"}},[e._v("#")]),e._v(" Cadence Advanced Visibility Persistence Monitoring (if applicable)")]),e._v(" "),t("p",[e._v("Kafka & ElasticSearch are only for visibility. 
Only applicable if using advanced visibility.\nFor writing visibility records, the Cadence history service writes the records to Kafka, and the Cadence worker service then reads from Kafka and writes them to ElasticSearch (in batches, for performance optimization).\nFor reading visibility records, the frontend service queries ElasticSearch directly.")]),e._v(" "),t("h3",{attrs:{id:"persistence-availability-2"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-availability-2"}},[e._v("#")]),e._v(" Persistence Availability")]),e._v(" "),t("ul",[t("li",[e._v("The availability of the visibility store (ElasticSearch/Kafka) for your Cadence server")]),e._v(" "),t("li",[e._v("Monitor can be set")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()\nsum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_count()\nsum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()\nsum:cadence_history.elasticsearch_requests{*} by {operation}.as_count()\n(1 - a / b) * 100\n(1 - c / d) * 100\n")])])]),
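t("p",[e._v("If you do set a monitor here, a minimal sketch following the same availability pattern as the default persistence monitor above, using the frontend ElasticSearch metrics; the window and threshold are assumptions to adjust:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("max(last_5m):(1 - sum:cadence_frontend.elasticsearch_errors{*}.as_count() / sum:cadence_frontend.elasticsearch_requests{*}.as_count()) * 100 < 95\n")])])]),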
"),t("ul",[t("li",[e._v("The error of persistence API call")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()\nsum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()\n")])])]),t("h3",{attrs:{id:"kafka-es-processor-counter"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#kafka-es-processor-counter"}},[e._v("#")]),e._v(" Kafka->ES processor counter")]),e._v(" "),t("ul",[t("li",[e._v("This is the metrics of a background processing: consuming Kafka messages and then populate to ElasticSearch in batch")]),e._v(" "),t("li",[e._v("Monitor on the running of the background processing(counter metrics is > 0)")]),e._v(" "),t("li",[e._v("When fired, restart Cadence service first to mitigate. Then look at logs to see why the process is stopped(process panic/error/etc).\nMay consider add more pods (replicaCount) to sys-worker service for higher availability")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_worker.es_processor_requests{*} by {operation}.as_count()\nsum:cadence_worker.es_processor_retries{*} by {operation}.as_count()\n")])])]),t("h3",{attrs:{id:"kafka-es-processor-error"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#kafka-es-processor-error"}},[e._v("#")]),e._v(" Kafka->ES processor error")]),e._v(" "),t("ul",[t("li",[e._v("This is the error metrics of the above processing logic\nAlmost all errors are retryable errors so it’s not a problem.")]),e._v(" "),t("li",[e._v("Need to monitor error")]),e._v(" "),t("li",[e._v("When fired, Go to Kibana to find logs about the error details.\nThe most common error is missing the ElasticSearch index field -- an index field is added in dynamicconfig but not in ElasticSearch, or vice versa . 
t("h3",{attrs:{id:"kafka-es-processor-error"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#kafka-es-processor-error"}},[e._v("#")]),e._v(" Kafka->ES processor error")]),e._v(" "),t("ul",[t("li",[e._v("These are the error metrics of the above processing logic.\nAlmost all errors are retryable, so occasional errors are not a problem.")]),e._v(" "),t("li",[e._v("A monitor on errors is needed")]),e._v(" "),t("li",[e._v("When fired, go to Kibana to find logs with the error details.\nThe most common error is a missing ElasticSearch index field -- an index field is added in dynamicconfig but not in ElasticSearch, or vice versa.\nIf so, follow the runbook to add the field to ElasticSearch or the dynamic config.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_worker.es_processor_error{*} by {operation}.as_count()\nsum:cadence_worker.es_processor_corrupted_data{*} by {operation}.as_count()\n")])])]),t("h3",{attrs:{id:"kafka-es-processor-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#kafka-es-processor-latency"}},[e._v("#")]),e._v(" Kafka->ES processor latency")]),e._v(" "),t("ul",[t("li",[e._v("The latency of the processing logic")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_worker.es_processor_process_msg_latency.quantile{$pXXLatency} by {operation}.as_count()\n")])])]),t("h2",{attrs:{id:"cadence-dependency-metrics-monitor-suggestion"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-dependency-metrics-monitor-suggestion"}},[e._v("#")]),e._v(" Cadence Dependency Metrics Monitor suggestion")]),e._v(" "),t("h3",{attrs:{id:"computing-platform-metrics-for-cadence-deployment"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#computing-platform-metrics-for-cadence-deployment"}},[e._v("#")]),e._v(" Computing platform metrics for Cadence deployment")]),e._v(" "),t("p",[e._v("A Cadence server deployed on any computing platform (e.g. Kubernetes) should be monitored on the below metrics:")]),e._v(" "),t("ul",[t("li",[e._v("CPU")]),e._v(" "),t("li",[e._v("Memory")])]),e._v(" "),t("h3",{attrs:{id:"database"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#database"}},[e._v("#")]),e._v(" Database")]),e._v(" "),t("p",[e._v("Depending on which database you use, you should at least monitor the below metrics")]),e._v(" "),t("ul",[t("li",[e._v("Disk Usage")]),e._v(" "),t("li",[e._v("CPU")]),e._v(" "),t("li",[e._v("Memory")]),e._v(" "),t("li",[e._v("Read API latency")]),e._v(" "),t("li",[e._v("Write API Latency")])]),e._v(" "),t("h3",{attrs:{id:"kafka-if-applicable"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#kafka-if-applicable"}},[e._v("#")]),e._v(" Kafka (if applicable)")]),e._v(" "),t("ul",[t("li",[e._v("Disk Usage")]),e._v(" "),t("li",[e._v("CPU")]),e._v(" "),t("li",[e._v("Memory")])]),e._v(" "),t("h3",{attrs:{id:"elasticsearch-if-applicable"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#elasticsearch-if-applicable"}},[e._v("#")]),e._v(" ElasticSearch (if applicable)")]),e._v(" "),t("ul",[t("li",[e._v("Disk Usage")]),e._v(" "),t("li",[e._v("CPU")]),e._v(" "),t("li",[e._v("Memory")])]),e._v(" "),t("h2",{attrs:{id:"cadence-service-slo-recommendation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-service-slo-recommendation"}},[e._v("#")]),e._v(" Cadence Service SLO Recommendation")]),e._v(" "),t("ul",[t("li",[e._v("Core API availability: 99.9%")]),e._v(" "),t("li",[e._v("Core API latency: <1s")]),e._v(" "),t("li",[e._v("Overall task dispatch latency: <2s (queue_latency for transfer task and timer task)")])])])}),[],!1,null,null,null);t.default=s.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[97],{403:function(e,t,a){"use strict";a.r(t);var r=a(0),s=Object(r.a)({},(function(){var e=this,t=e._self._c;return 
t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"cluster-monitoring"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cluster-monitoring"}},[e._v("#")]),e._v(" Cluster Monitoring")]),e._v(" "),t("h2",{attrs:{id:"instructions"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#instructions"}},[e._v("#")]),e._v(" Instructions")]),e._v(" "),t("p",[e._v("Cadence emits metrics for both Server and client libraries:")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("Follow this example to emit "),t("a",{attrs:{href:"https://github.com/uber-common/cadence-samples/pull/36",target:"_blank",rel:"noopener noreferrer"}},[e._v("client side metrics for Golang client"),t("OutboundLink")],1)]),e._v(" "),t("ul",[t("li",[e._v("You can use other metrics emitter like "),t("a",{attrs:{href:"https://github.com/uber-go/tally/tree/master/m3",target:"_blank",rel:"noopener noreferrer"}},[e._v("M3"),t("OutboundLink")],1)]),e._v(" "),t("li",[e._v("Alternatively, you can implement the tally "),t("a",{attrs:{href:"https://github.com/uber-go/tally/blob/master/reporter.go",target:"_blank",rel:"noopener noreferrer"}},[e._v("Reporter interface"),t("OutboundLink")],1)])])]),e._v(" "),t("li",[t("p",[e._v("Follow this example to emit "),t("a",{attrs:{href:"https://github.com/uber/cadence-java-samples/blob/master/src/main/java/com/uber/cadence/samples/hello/HelloMetric.java",target:"_blank",rel:"noopener noreferrer"}},[e._v("client side metrics for Java client"),t("OutboundLink")],1),e._v(" if using 3.x client, or "),t("a",{attrs:{href:"https://github.com/longquanzheng/cadence-java-samples-1/pull/1",target:"_blank",rel:"noopener noreferrer"}},[e._v("this example"),t("OutboundLink")],1),e._v(" if using 2.x client.")]),e._v(" "),t("ul",[t("li",[e._v("You can use other metrics emitter like "),t("a",{attrs:{href:"https://github.com/uber-java/tally/tree/master/m3",target:"_blank",rel:"noopener noreferrer"}},[e._v("M3"),t("OutboundLink")],1)]),e._v(" "),t("li",[e._v("Alternatively, you can implement the tally "),t("a",{attrs:{href:"https://github.com/uber-java/tally/blob/master/core/src/main/java/com/uber/m3/tally/Scope.java",target:"_blank",rel:"noopener noreferrer"}},[e._v("Reporter interface"),t("OutboundLink")],1)])])]),e._v(" "),t("li",[t("p",[e._v("For running Cadence services in production, please follow this "),t("a",{attrs:{href:"https://github.com/banzaicloud/banzai-charts/blob/master/cadence/templates/server-service-monitor.yaml",target:"_blank",rel:"noopener noreferrer"}},[e._v("example of hemlchart"),t("OutboundLink")],1),e._v(" to emit server side metrics. Or you can follow "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/master/config/development_prometheus.yaml#L40",target:"_blank",rel:"noopener noreferrer"}},[e._v("the example of local environment"),t("OutboundLink")],1),e._v(" to Prometheus. 
All services need to expose an HTTP port to provide metrics, as below")])])]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("metrics")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("prometheus")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("timerType")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"histogram"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("listenAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"0.0.0.0:8001"')]),e._v("\n")])])]),t("p",[e._v("The rest of the instructions use the local environment as an example.")]),e._v(" "),t("p",[e._v("For testing a local server emitting metrics to Prometheus, the easiest way is to use "),t("a",{attrs:{href:"https://github.com/uber/cadence/blob/master/docker/",target:"_blank",rel:"noopener noreferrer"}},[e._v("docker-compose"),t("OutboundLink")],1),e._v(" to start a local Cadence instance.")]),e._v(" "),t("p",[e._v("Make sure to update the "),t("code",[e._v("prometheus_config.yml")]),e._v(' to add "host.docker.internal:9098" to the scrape list before starting the docker-compose:')]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("global")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("scrape_interval")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" 5s\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("external_labels")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("monitor")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence-monitor'")]),e._v("\n"),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("scrape_configs")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("job_name")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'prometheus'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("static_configs")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("targets")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# addresses to scrape")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:9090'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" 
"),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:8000'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:8001'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:8002'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'cadence:8003'")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v("-")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'host.docker.internal:9098'")]),e._v("\n")])])]),t("p",[e._v("Note: "),t("code",[e._v("host.docker.internal")]),e._v(" "),t("a",{attrs:{href:"https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds",target:"_blank",rel:"noopener noreferrer"}},[e._v("may not work for some docker versions"),t("OutboundLink")],1)]),e._v(" "),t("ul",[t("li",[t("p",[e._v("After updating the prometheus_config.yaml as above, run "),t("code",[e._v("docker-compose up")]),e._v(" to start the local Cadence instance")])]),e._v(" "),t("li",[t("p",[e._v("Go the the sample repo, build the helloworld sample "),t("code",[e._v("make helloworld")]),e._v(" and run the worker "),t("code",[e._v("./bin/helloworld -m worker")]),e._v(", and then in another Shell start a workflow "),t("code",[e._v("./bin/helloworld")])])]),e._v(" "),t("li",[t("p",[e._v("Go to your "),t("a",{attrs:{href:"http://localhost:9090/",target:"_blank",rel:"noopener noreferrer"}},[e._v("local Prometheus dashboard"),t("OutboundLink")],1),e._v(", you should be able to check the metrics emitted by handler from client/frontend/matching/history/sysWorker and confirm your services are healthy through "),t("a",{attrs:{href:"http://localhost:9090/targets",target:"_blank",rel:"noopener noreferrer"}},[e._v("targets"),t("OutboundLink")],1),e._v(" "),t("img",{attrs:{width:"1192",alt:"Screen Shot 2021-02-20 at 11 31 11 AM",src:"https://user-images.githubusercontent.com/4523955/108606555-8d0dfb80-736f-11eb-968d-7678df37455c.png"}})])]),e._v(" "),t("li",[t("p",[e._v("Go to "),t("a",{attrs:{href:"http://localhost:3000",target:"_blank",rel:"noopener noreferrer"}},[e._v("local Grafana"),t("OutboundLink")],1),e._v(" , login as "),t("code",[e._v("admin/admin")]),e._v(".")])]),e._v(" "),t("li",[t("p",[e._v("Configure Prometheus as datasource: use "),t("code",[e._v("http://host.docker.internal:9090")]),e._v(" as URL of prometheus.")])]),e._v(" "),t("li",[t("p",[e._v("Import the "),t("RouterLink",{attrs:{to:"/docs/operation-guide/monitor/#grafana-prometheus-dashboard-templates"}},[e._v("Grafana dashboard tempalte")]),e._v(" as JSON files.")],1)])]),e._v(" "),t("p",[e._v("Client side dashboard looks like this:\n"),t("img",{attrs:{width:"1513",alt:"Screen Shot 2021-02-20 at 12 32 23 PM",src:"https://user-images.githubusercontent.com/4523955/108607838-b7fc4d80-7377-11eb-8fd9-edc0e58afaad.png"}})]),e._v(" "),t("p",[e._v("And server basic dashboard:\n"),t("img",{attrs:{width:"1514",alt:"Screen Shot 2021-02-20 at 12 31 54 PM",src:"https://user-images.githubusercontent.com/4523955/108607843-baf73e00-7377-11eb-9759-e67a1a00f442.png"}})]),e._v(" "),t("img",{attrs:{width:"1519",alt:"Screen Shot 2021-02-20 at 11 06 54 AM",src:"https://user-images.githubusercontent.com/4523955/108606577-b169d800-736f-11eb-8fcb-88801f23b656.png"}}),e._v(" 
"),t("h2",{attrs:{id:"datadog-dashboard-templates"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#datadog-dashboard-templates"}},[e._v("#")]),e._v(" DataDog dashboard templates")]),e._v(" "),t("p",[e._v("This "),t("a",{attrs:{href:"https://github.com/uber/cadence-docs/tree/master/src/datadog",target:"_blank",rel:"noopener noreferrer"}},[e._v("package"),t("OutboundLink")],1),e._v(" contains examples of Cadence dashboards with DataDog.")]),e._v(" "),t("ul",[t("li",[t("p",[t("code",[e._v("Cadence-Client")]),e._v(" is the dashboard that includes all the metrics to help you understand Cadence client behavior. Most of these metrics are emitted by the client SDKs, with a few exceptions from server side (for example, workflow timeout).")])]),e._v(" "),t("li",[t("p",[t("code",[e._v("Cadence-Server")]),e._v(" is the the server dashboard that you can use to monitor and undertand the health and status of your Cadence cluster.")])])]),e._v(" "),t("p",[e._v("To use DataDog with Cadence, follow "),t("a",{attrs:{href:"https://docs.datadoghq.com/integrations/guide/prometheus-metrics/",target:"_blank",rel:"noopener noreferrer"}},[e._v("this instruction"),t("OutboundLink")],1),e._v(" to collect Prometheus metrics using DataDog agent.")]),e._v(" "),t("p",[e._v("NOTE1: don't forget to adjust "),t("code",[e._v("max_returned_metrics")]),e._v(" to a higher number(e.g. 100000). Otherwise DataDog agent won't be able to "),t("a",{attrs:{href:"https://docs.datadoghq.com/integrations/guide/prometheus-host-collection/",target:"_blank",rel:"noopener noreferrer"}},[e._v("collect all metrics(default is 2000)"),t("OutboundLink")],1),e._v(".")]),e._v(" "),t("p",[e._v("NOTE2: the template contains templating variables "),t("code",[e._v("$App")]),e._v(" and "),t("code",[e._v("$Availability_Zone")]),e._v(". Feel free to remove them if you don't have them in your setup.")]),e._v(" "),t("h2",{attrs:{id:"grafana-prometheus-dashboard-templates"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#grafana-prometheus-dashboard-templates"}},[e._v("#")]),e._v(" Grafana+Prometheus dashboard templates")]),e._v(" "),t("p",[e._v("This "),t("a",{attrs:{href:"https://github.com/uber/cadence-docs/tree/master/src/grafana/prometheus",target:"_blank",rel:"noopener noreferrer"}},[e._v("package"),t("OutboundLink")],1),e._v(" contains examples of Cadence dashboards with Prometheus.")]),e._v(" "),t("ul",[t("li",[t("p",[t("code",[e._v("Cadence-Client")]),e._v(" is the dashboard of client metrics, and a few server side metrics that belong to client side but have to be emitted by server(for example, workflow timeout).")])]),e._v(" "),t("li",[t("p",[t("code",[e._v("Cadence-Server-Basic")]),e._v(" is the the basic server dashboard to monitor/navigate the health/status of a Cadence cluster.")])]),e._v(" "),t("li",[t("p",[e._v("Apart from the basic server dashboard, it's recommended to set up dashboards on different components for Cadence server: Frontend, History, Matching, Worker, Persistence, Archival, etc. 
Any "),t("a",{attrs:{href:"https://github.com/uber/cadence-docs",target:"_blank",rel:"noopener noreferrer"}},[e._v("contribution"),t("OutboundLink")],1),e._v(" is always welcome to enrich the existing templates or new templates!")])])]),e._v(" "),t("h2",{attrs:{id:"periodic-tests-canary-for-health-check"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#periodic-tests-canary-for-health-check"}},[e._v("#")]),e._v(" Periodic tests(Canary) for health check")]),e._v(" "),t("p",[e._v("It's recommended that you run periodical test to get signals on the healthness of your cluster. Please following instructions in "),t("a",{attrs:{href:"https://github.com/uber/cadence/tree/master/canary",target:"_blank",rel:"noopener noreferrer"}},[e._v("our canary package"),t("OutboundLink")],1),e._v(" to set these tests up.")]),e._v(" "),t("h2",{attrs:{id:"cadence-frontend-monitoring"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-frontend-monitoring"}},[e._v("#")]),e._v(" Cadence Frontend Monitoring")]),e._v(" "),t("p",[e._v("This section describes recommended dashboards for monitoring Cadence services in your cluster. The structure mostly follows the DataDog dashboard template listed above.")]),e._v(" "),t("h3",{attrs:{id:"service-availability-server-metrics"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#service-availability-server-metrics"}},[e._v("#")]),e._v(" Service Availability(server metrics)")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: the availability of Cadence server using server metrics.")]),e._v(" "),t("li",[e._v("Suggested monitor: below 95% > 5 min then alert, below 99% for > 5 min triggers a warning")]),e._v(" "),t("li",[e._v("Monitor action: When fired, check if there is any persistence errors. If so then check the healthness of the database(may need to restart or scale up). If not then check the error logs.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.cadence_errors{*}\nsum:cadence_frontend.cadence_requests{*}\n(1 - a / b) * 100\n")])])]),t("h3",{attrs:{id:"startworkflow-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#startworkflow-per-second"}},[e._v("#")]),e._v(" StartWorkflow Per Second")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: how many workflows are started per second. This helps determine if your server is overloaded.")]),e._v(" "),t("li",[e._v("Suggested monitor: This is a business metrics. No monitoring required.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.cadence_requests{(operation IN (startworkflowexecution,signalwithstartworkflowexecution))} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"activities-started-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activities-started-per-second"}},[e._v("#")]),e._v(" Activities Started Per Second")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: How many activities are started per second. Helps determine if the server is overloaded.")]),e._v(" "),t("li",[e._v("Suggested monitor: This is a business metrics. 
t("h3",{attrs:{id:"startworkflow-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#startworkflow-per-second"}},[e._v("#")]),e._v(" StartWorkflow Per Second")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: how many workflows are started per second. This helps determine if your server is overloaded.")]),e._v(" "),t("li",[e._v("Suggested monitor: This is a business metric. No monitoring required.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.cadence_requests{(operation IN (startworkflowexecution,signalwithstartworkflowexecution))} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"activities-started-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activities-started-per-second"}},[e._v("#")]),e._v(" Activities Started Per Second")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: How many activities are started per second. Helps determine if the server is overloaded.")]),e._v(" "),t("li",[e._v("Suggested monitor: This is a business metric. No monitoring required.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.cadence_requests{operation:pollforactivitytask} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"decisions-started-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#decisions-started-per-second"}},[e._v("#")]),e._v(" Decisions Started Per Second")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: How many workflow decisions are started per second. Helps determine if the server is overloaded.")]),e._v(" "),t("li",[e._v("Suggested monitor: This is a business metric. No monitoring required.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.cadence_requests{operation:pollfordecisiontask} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"periodical-test-suite-success-aka-canary"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#periodical-test-suite-success-aka-canary"}},[e._v("#")]),e._v(" Periodic Test Suite Success (aka Canary)")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: The success counter of the canary test suite")]),e._v(" "),t("li",[e._v("Suggested monitor: Monitor needed. When fired, look at the failed canary test case and investigate the reason for the failure.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_history.workflow_success{workflowtype:workflow_sanity} by {workflowtype}.as_count()\n")])])]),t("h3",{attrs:{id:"frontend-all-api-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frontend-all-api-per-second"}},[e._v("#")]),e._v(" Frontend all API per second")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: all APIs on the frontend per second. Information only.")]),e._v(" "),t("li",[e._v("Suggested monitor: This is a business metric. No monitoring required.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.cadence_requests{*}.as_rate()\n")])])]),t("h3",{attrs:{id:"frontend-api-per-second-breakdown-per-operation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frontend-api-per-second-breakdown-per-operation"}},[e._v("#")]),e._v(" Frontend API per second (breakdown per operation)")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: APIs on the frontend per second, broken down by operation. Information only.")]),e._v(" "),t("li",[e._v("Suggested monitor: This is a business metric. No monitoring required.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.cadence_requests{*} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"frontend-api-errors-per-second-breakdown-per-operation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frontend-api-errors-per-second-breakdown-per-operation"}},[e._v("#")]),e._v(" Frontend API errors per second (breakdown per operation)")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: API errors on the frontend per second. Information only.")]),e._v(" "),t("li",[e._v("Suggested monitor: This is to facilitate investigation. 
No monitoring required.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.cadence_errors{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_bad_request{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_not_active{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_service_busy{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_entity_not_exists{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_execution_already_started{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_already_exists{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_query_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_limit_exceeded{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_context_timeout{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_retry_task{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_bad_binary{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_client_version_not_supported{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_incomplete_history{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_nondeterministic{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_unauthorized{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_authorize_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate() \n")])])]),t("ul",[t("li",[t("code",[e._v("cadence_errors")]),e._v(" is for internal service errors.")]),e._v(" "),t("li",[e._v("any "),t("code",[e._v("cadence_errors_*")]),e._v(" is for client-side errors")])]),e._v(" "),t("h3",{attrs:{id:"frontend-regular-api-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frontend-regular-api-latency"}},[e._v("#")]),e._v(" Frontend Regular API Latency")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: The latency of the regular core APIs -- excluding long-poll/queryWorkflow/getHistory/ListWorkflow/CountWorkflow APIs.")]),e._v(" "),t("li",[e._v("Suggested 
monitor: p95 latency across all operations above 1.5 seconds triggers a warning; above 2 seconds triggers an alert")]),e._v(" "),t("li",[e._v("Monitor action: If fired, investigate the database read/write latency. You may need to throttle spiky traffic from certain domains, or scale up the database")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_frontend.cadence_latency.quantile{(operation NOT IN (pollfordecisiontask,pollforactivitytask,getworkflowexecutionhistory,queryworkflow,listworkflowexecutions,listclosedworkflowexecutions,listopenworkflowexecutions)) AND $pXXLatency} by {operation}\n")])])]),
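t("p",[e._v("A minimal sketch of the suggested monitor as a Datadog monitor query, rewriting the dashboard query’s exclusions as tag filters and assuming a quantile:0.95 tag (reporter-dependent); 2 seconds is the alert threshold suggested here, with 1.5 seconds as the warning:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("min(last_5m):avg:cadence_frontend.cadence_latency.quantile{quantile:0.95,!operation:pollfordecisiontask,!operation:pollforactivitytask,!operation:getworkflowexecutionhistory,!operation:queryworkflow,!operation:listworkflowexecutions,!operation:listclosedworkflowexecutions,!operation:listopenworkflowexecutions} by {operation} > 2\n")])])]),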
t("h3",{attrs:{id:"frontend-listworkflow-api-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frontend-listworkflow-api-latency"}},[e._v("#")]),e._v(" Frontend ListWorkflow API Latency")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: The latency of the ListWorkflow APIs.")]),e._v(" "),t("li",[e._v("Monitor: p95 latency across these operations above 2 seconds triggers a warning; above 3 seconds triggers an alert")]),e._v(" "),t("li",[e._v("Monitor action: If fired, investigate the ElasticSearch read latency. You may need to throttle spiky traffic from certain domains, or scale up the ElasticSearch cluster.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_frontend.cadence_latency.quantile{(operation IN (listclosedworkflowexecutions,listopenworkflowexecutions,listworkflowexecutions,countworkflowexecutions)) AND $pXXLatency} by {operation}\n")])])]),t("h3",{attrs:{id:"frontend-long-poll-api-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frontend-long-poll-api-latency"}},[e._v("#")]),e._v(" Frontend Long Poll API Latency")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: Long poll means that the worker is waiting for a task. The latency is an indicator of how busy the worker is. Poll-for-activity-task and poll-for-decision-task are the types of long-poll requests. The API call times out at 50 seconds if no task can be picked up. A very low latency could mean that more workers need to be added.")]),e._v(" "),t("li",[e._v("Suggested monitor: No monitor needed, as long latency is expected.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_frontend.cadence_latency.quantile{$pXXLatency,operation:pollforactivitytask} by {operation}\navg:cadence_frontend.cadence_latency.quantile{$pXXLatency,operation:pollfordecisiontask} by {operation}\n")])])]),t("h3",{attrs:{id:"frontend-get-history-query-workflow-api-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frontend-get-history-query-workflow-api-latency"}},[e._v("#")]),e._v(" Frontend Get History/Query Workflow API Latency")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: The GetHistory API acts like a long-poll API, but there’s no explicit timeout. The long poll of GetHistory is used when WorkflowClient is waiting for the result of the workflow (essentially, WorkflowExecutionCompletedEvent).\nThis latency depends on the time it takes for the workflow to complete. QueryWorkflow API latency is also unpredictable, as it depends on the availability and performance of the workflow workers, which are owned by the application and the workflow implementation (it may require replaying history).")]),e._v(" "),t("li",[e._v("Suggested monitor: No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_frontend.cadence_latency.quantile{(operation IN (getworkflowexecutionhistory,queryworkflow)) AND $pXXLatency} by {operation}\n")])])]),t("h3",{attrs:{id:"frontend-workflowclient-api-per-seconds-by-domain"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#frontend-workflowclient-api-per-seconds-by-domain"}},[e._v("#")]),e._v(" Frontend WorkflowClient API per second by domain")]),e._v(" "),t("ul",[t("li",[e._v("Meaning: Shows which domains are making the most requests using WorkflowClient (excluding worker APIs like PollForDecisionTask and RespondDecisionTaskCompleted). Used for troubleshooting.\nIn the future it can be used to set some rate limiting per domain.")]),e._v(" "),t("li",[e._v("Suggested monitor: No monitor needed.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.cadence_requests{(operation IN (signalwithstartworkflowexecution,signalworkflowexecution,startworkflowexecution,terminateworkflowexecution,resetworkflowexecution,requestcancelworkflowexecution,listworkflowexecutions))} by {domain,operation}.as_rate()\n")])])]),t("h2",{attrs:{id:"cadence-application-monitoring"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-application-monitoring"}},[e._v("#")]),e._v(" Cadence Application Monitoring")]),e._v(" "),t("p",[e._v("This section describes the recommended dashboards for monitoring a Cadence application using metrics emitted by the SDK. 
See the "),t("code",[e._v("setup")]),e._v(" section about how to collect those metrics.")]),e._v(" "),t("h3",{attrs:{id:"workflow-start-and-successful-completion"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#workflow-start-and-successful-completion"}},[e._v("#")]),e._v(" Workflow Start and Successful completion")]),e._v(" "),t("ul",[t("li",[e._v("Workflow successfully started/signalWithStart and completed/canceled/continuedAsNew")]),e._v(" "),t("li",[e._v("Monitor: not recommended")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_workflow_start{$Domain,$Tasklist,$WorkflowType} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_completed{$Domain,$Tasklist,$WorkflowType} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_canceled{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_continue_as_new{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_signal_with_start{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()\n")])])]),t("h3",{attrs:{id:"workflow-failure"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#workflow-failure"}},[e._v("#")]),e._v(" Workflow Failure")]),e._v(" "),t("ul",[t("li",[e._v("Metrics for all types of failures, including workflow failures(throw uncaught exceptions), workflow timeout and termination.")]),e._v(" "),t("li",[e._v("For timeout and termination, workflow worker doesn’t have a chance to emit metrics when it’s terminate, so the metric comes from the history service")]),e._v(" "),t("li",[e._v("Monitor: application should set monitor on timeout and failure to make sure workflow are not failing. Cancel/terminate are usually triggered by human intentionally.")]),e._v(" "),t("li",[e._v("When the metrics fire, go to Cadence UI to find the failed workflows and investigate the workflow history to understand the type of failure")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_workflow_failed{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env}.as_count()\nsum:cadence_history.workflow_failed{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_terminate{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_timeout{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()\n")])])]),t("h3",{attrs:{id:"decision-poll-counters"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#decision-poll-counters"}},[e._v("#")]),e._v(" Decision Poll Counters")]),e._v(" "),t("ul",[t("li",[e._v("Indicates if the workflow worker is available and is polling tasks. If the worker is not available no counters will show.\nCan also check if the worker is using the right task list.\n“No task” poll type means that the worker exists and is idle.\nThe timeout for this long poll api is 50 seconds. 
t("h3",{attrs:{id:"decision-poll-counters"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#decision-poll-counters"}},[e._v("#")]),e._v(" Decision Poll Counters")]),e._v(" "),t("ul",[t("li",[e._v("Indicates if the workflow worker is available and is polling tasks. If the worker is not available, no counters will show.\nYou can also check if the worker is using the right task list.\nA “no task” poll type means that the worker exists and is idle.\nThe timeout for this long-poll API is 50 seconds. If no task is received within 50 seconds, then an empty response will be returned and another long-poll request will be sent.")]),e._v(" "),t("li",[e._v("Monitor: applications should set a monitor on it to make sure workers are available")]),e._v(" "),t("li",[e._v("When fired, investigate the worker deployment to see why workers are not available, and check whether they are using the right domain/tasklist")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_decision_poll_total{$Domain,$Tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_failed{$Domain,$Tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_no_task{$Domain,$Tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_succeed{$Domain,$Tasklist}.as_count()\n")])])]),t("h3",{attrs:{id:"decisiontasks-scheduled-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#decisiontasks-scheduled-per-second"}},[e._v("#")]),e._v(" DecisionTasks Scheduled per second")]),e._v(" "),t("ul",[t("li",[e._v("Indicates how many decision tasks are scheduled")]),e._v(" "),t("li",[e._v("Monitor: not recommended -- information only, to know whether a tasklist is overloaded")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_matching.cadence_requests_per_tl{*,operation:adddecisiontask,$Tasklist,$Domain} by {tasklist,domain}.as_rate()\n")])])]),t("h3",{attrs:{id:"decision-scheduled-to-start-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#decision-scheduled-to-start-latency"}},[e._v("#")]),e._v(" Decision Scheduled To Start Latency")]),e._v(" "),t("ul",[t("li",[e._v("If this latency is too high, then either:\nthe worker is not available or is too busy after the task has been scheduled, or\nthe task list is overloaded (confirm with the “DecisionTasks Scheduled per second” widget). By default a task list has only one partition, and a partition can only be owned by one host, so the throughput of a task list is limited. More task lists can be added to scale out, or a scalable task list can be used to add more partitions.")]),e._v(" "),t("li",[e._v("Monitor: applications can set a monitor on it to make sure the latency is tolerable")]),e._v(" "),t("li",[e._v("When fired, check if worker capacity is enough, then check if the tasklist is overloaded. 
If needed, contact the Cadence cluster admin to enable the scalable tasklist feature to add more partitions to the tasklist")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_client.cadence_decision_scheduled_to_start_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist}\nmax:cadence_client.cadence_decision_scheduled_to_start_latency.max{$Domain,$Tasklist} by {env,domain,tasklist}\nmax:cadence_client.cadence_decision_scheduled_to_start_latency.95percentile{$Domain,$Tasklist} by {env,domain,tasklist}\n")])])]),
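t("p",[e._v("A minimal sketch of such a monitor as a Datadog monitor query on the p95 series; the 5-second threshold is purely an assumption (pick what is tolerable for your workload, and check the unit your reporter emits):")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("min(last_5m):max:cadence_client.cadence_decision_scheduled_to_start_latency.95percentile{*} by {domain,tasklist} > 5\n")])])]),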
After you identify the bugs, you can fix the code and then reset the workflow to recover")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_history.start_to_close_timeout{operation:timeractivetaskdecision*,$Domain}.as_count()\n")])])]),t("h3",{attrs:{id:"workflow-end-to-end-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#workflow-end-to-end-latency"}},[e._v("#")]),e._v(" Workflow End to End Latency")]),e._v(" "),t("ul",[t("li",[e._v("This is for the client application to track its SLOs\nFor example, if you expect a workflow to take duration d to complete, you can use this latency to set a monitor.")]),e._v(" "),t("li",[e._v("Monitor: application can monitor this metric if expecting workflows to complete within a certain duration.")]),e._v(" "),t("li",[e._v("When fired, investigate the workflow history to see why the workflow takes longer than expected to complete")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_client.cadence_workflow_endtoend_latency.median{$Domain,$Tasklist,$WorkflowType} by {env,domain,tasklist,workflowtype}\navg:cadence_client.cadence_workflow_endtoend_latency.95percentile{$Domain,$Tasklist,$WorkflowType} by {env,domain,tasklist,workflowtype}\n")])])]),t("h3",{attrs:{id:"workflow-panic-and-nondeterministicerror"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#workflow-panic-and-nondeterministicerror"}},[e._v("#")]),e._v(" Workflow Panic and NonDeterministicError")]),e._v(" "),t("ul",[t("li",[e._v("These errors mean that there is a bug in the code and the deploy should be rolled back.")]),e._v(" "),t("li",[e._v("A monitor should be set on this metric")]),e._v(" "),t("li",[e._v("When fired, you may rollback the deployment to mitigate your issue. Usually this is caused by a bad (non-backward-compatible) code change. After rollback, look at your worker error logs to see where the bug is.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_worker_panic{$Domain} by {env,domain}.as_rate()\nsum:cadence_client.cadence_non_deterministic_error{$Domain} by {env,domain}.as_rate()\n")])])]),t("h3",{attrs:{id:"workflow-sticky-cache-hit-rate-and-miss-count"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#workflow-sticky-cache-hit-rate-and-miss-count"}},[e._v("#")]),e._v(" Workflow Sticky Cache Hit Rate and Miss Count")]),e._v(" "),t("ul",[t("li",[e._v("This metric can be used for performance optimization.\nThis can be improved by adding more worker instances, or adjusting the workerOptions(Go SDK) or WorkerFactoryOptions(Java SDK).\nCacheHitRate too low means workers will have to replay history to rebuild the workflow stack when executing a decision task. 
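In the Go SDK the sticky cache is sized process-wide, e.g. worker.SetStickyWorkflowCacheSize(4096) -- a sketch; 4096 is an assumed value, not a recommendation -- while the Java SDK exposes it through the WorkerFactoryOptions. 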
Depending on the history size:\n"),t("ul",[t("li",[e._v("If less than 1MB, then a hit rate lower than 50% is okay")]),e._v(" "),t("li",[e._v("If greater than 1MB, then the hit rate should be greater than 50%")]),e._v(" "),t("li",[e._v("If greater than 5MB, then the hit rate should be greater than 60%")]),e._v(" "),t("li",[e._v("If greater than 10MB, then the hit rate should be greater than 70%")]),e._v(" "),t("li",[e._v("If greater than 20MB, then the hit rate should be greater than 80%")]),e._v(" "),t("li",[e._v("If greater than 30MB, then the hit rate should be greater than 90%")]),e._v(" "),t("li",[e._v("Workflow history size should never be greater than 50MB.")])])]),e._v(" "),t("li",[e._v("A monitor can be set on this metric, if performance is important.")]),e._v(" "),t("li",[e._v("When fired, adjust the stickyCacheSize in the WorkerFactoryOptions, or add more workers")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_sticky_cache_miss{$Domain} by {env,domain}.as_count()\nsum:cadence_client.cadence_sticky_cache_hit{$Domain} by {env,domain}.as_count()\n(b / (a+b)) * 100\n")])])]),t("h3",{attrs:{id:"activity-task-operations"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-task-operations"}},[e._v("#")]),e._v(" Activity Task Operations")]),e._v(" "),t("ul",[t("li",[e._v("Activity started/completed counters")]),e._v(" "),t("li",[e._v("Monitor: not recommended")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_activity_task_failed{$Domain,$Tasklist} by {activitytype}.as_rate()\nsum:cadence_client.cadence_activity_task_completed{$Domain,$Tasklist} by {activitytype}.as_rate()\nsum:cadence_client.cadence_activity_task_timeouted{$Domain,$Tasklist} by {activitytype}.as_rate()\n")])])]),t("h3",{attrs:{id:"local-activity-task-operations"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#local-activity-task-operations"}},[e._v("#")]),e._v(" Local Activity Task Operations")]),e._v(" "),t("ul",[t("li",[e._v("Local Activity execution counters")]),e._v(" "),t("li",[e._v("Monitor: not recommended")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_local_activity_total{$Domain,$Tasklist} by {activitytype}.as_count()\n")])])]),t("h3",{attrs:{id:"activity-execution-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-execution-latency"}},[e._v("#")]),e._v(" Activity Execution Latency")]),e._v(" "),t("ul",[t("li",[e._v("If it’s expected that an activity will take x amount of time to complete, a monitor on this metric could be helpful to enforce that expectation.")]),e._v(" "),t("li",[e._v("Monitor: application can set a monitor on it if expecting activities to start/complete within a certain latency")]),e._v(" "),t("li",[e._v("When fired, investigate the activity code and its dependencies")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_client.cadence_activity_execution_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_execution_latency.max{$Domain,$Tasklist} by 
{env,domain,tasklist,activitytype}\n")])])]),t("h3",{attrs:{id:"activity-poll-counters"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-poll-counters"}},[e._v("#")]),e._v(" Activity Poll Counters")]),e._v(" "),t("ul",[t("li",[e._v("Indicates the activity worker is available and is polling tasks. If the worker is not available, no counters will show.\nCan also check if the worker is using the right task list.\n“No task” poll type means that the worker exists and is idle.\nThe timeout for this long poll API is 50 seconds. If no task is received within those 50 seconds, then an empty response will be returned and another long poll request will be sent.")]),e._v(" "),t("li",[e._v("Monitor: application can set a monitor on it to make sure activity workers are available")]),e._v(" "),t("li",[e._v("When it fires, investigate the worker deployment to see why they are not available, also check if they are using the right domain/tasklist")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_activity_poll_total{$Domain,$Tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_failed{$Domain,$Tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_succeed{$Domain,$Tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_no_task{$Domain,$Tasklist} by {activitytype}.as_count()\n")])])]),t("h3",{attrs:{id:"activitytasks-scheduled-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activitytasks-scheduled-per-second"}},[e._v("#")]),e._v(" ActivityTasks Scheduled per second")]),e._v(" "),t("ul",[t("li",[e._v("Indicates how many activity tasks are scheduled")]),e._v(" "),t("li",[e._v("Monitor: not recommended -- Information only to know whether or not a tasklist is overloaded")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_matching.cadence_requests_per_tl{*,operation:addactivitytask,$Tasklist,$Domain} by {tasklist,domain}.as_rate()\n")])])]),t("h3",{attrs:{id:"activity-scheduled-to-start-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-scheduled-to-start-latency"}},[e._v("#")]),e._v(" Activity Scheduled To Start Latency")]),e._v(" "),t("ul",[t("li",[e._v("If the latency is too high, then either:\nThe worker is not available or too busy\nThere are too many activities scheduled into the same tasklist and the tasklist is not scalable. Same as Decision Scheduled To Start Latency")]),e._v(" "),t("li",[e._v("Monitor: application should set a monitor on it")]),e._v(" "),t("li",[e._v("When fired, check if there are enough workers, then check if the tasklist is overloaded. 
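To verify which pollers a tasklist has, the CLI can help (a sketch assuming the standard Cadence CLI): cadence --domain <domain> tasklist describe --tasklist <name> --tasklisttype activity lists the currently active pollers. 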
If needed, contact the Cadence cluster Admin to enable the scalable tasklist feature to add more partitions to the tasklist")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_client.cadence_activity_scheduled_to_start_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_scheduled_to_start_latency.max{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_scheduled_to_start_latency.95percentile{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}\n")])])]),t("h3",{attrs:{id:"activity-failure"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#activity-failure"}},[e._v("#")]),e._v(" Activity Failure")]),e._v(" "),t("ul",[t("li",[e._v("A monitor on this metric will alert the team that activities are failing\nThe activity timeout metrics are emitted by the history service, because a timeout causes a hard stop and the client doesn’t have time to emit metrics.")]),e._v(" "),t("li",[e._v("Monitor: application can set a monitor on it")]),e._v(" "),t("li",[e._v("When fired, investigate the activity code and its dependencies")]),e._v(" "),t("li",[t("code",[e._v("cadence_activity_execution_failed")]),e._v(" vs "),t("code",[e._v("cadence_activity_task_failed")]),e._v(":\nThey only differ when a RetryPolicy is used\nThe cadence_activity_task_failed counter increases per activity attempt\nThe cadence_activity_execution_failed counter increases when an activity fails after all attempts")]),e._v(" "),t("li",[e._v("You should only monitor cadence_activity_execution_failed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_activity_execution_failed{$Domain} by {domain,env}.as_rate()\nsum:cadence_client.cadence_activity_task_panic{$Domain} by {domain,env}.as_count()\nsum:cadence_client.cadence_activity_task_failed{$Domain} by {domain,env}.as_rate()\nsum:cadence_client.cadence_activity_task_canceled{$Domain} by {domain,env}.as_count()\nsum:cadence_history.heartbeat_timeout{$Domain} by {domain,env}.as_count()\nsum:cadence_history.schedule_to_start_timeout{$Domain} by {domain,env}.as_rate()\nsum:cadence_history.start_to_close_timeout{$Domain} by {domain,env}.as_rate()\nsum:cadence_history.schedule_to_close_timeout{$Domain} by {domain,env}.as_count()\n")])])]),t("h3",{attrs:{id:"service-api-success-rate"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#service-api-success-rate"}},[e._v("#")]),e._v(" Service API success rate")]),e._v(" "),t("ul",[t("li",[e._v("The client’s experience of the service availability. It encompasses many APIs. Things that could affect the service’s API success rate are:\n"),t("ul",[t("li",[e._v("Service availability")]),e._v(" "),t("li",[e._v("The network could have issues.")]),e._v(" "),t("li",[e._v("A required API is not available.")]),e._v(" "),t("li",[e._v("Client-side errors like EntityNotExists, WorkflowAlreadyStarted, etc. These mean that the application code has potential bugs in calling the Cadence service.")])])]),e._v(" "),t("li",[e._v("Monitor: application can set a monitor on it")]),e._v(" "),t("li",[e._v("When fired, check application logs to see if the error is a Cadence server error or a client-side error. 
Errors like EntityNotExists/ExecutionAlreadyStarted/QueryWorkflowFailed/etc are client-side errors, meaning that the application is misusing the APIs. If most errors are server-side errors(internalServiceError), you can contact the Cadence admin.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_error{*} by {domain}.as_count()\nsum:cadence_client.cadence_request{*} by {domain}.as_count()\n(1 - a / b) * 100\n")])])]),t("h3",{attrs:{id:"service-api-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#service-api-latency"}},[e._v("#")]),e._v(" Service API Latency")]),e._v(" "),t("ul",[t("li",[e._v("The latency of the API, excluding long poll APIs.")]),e._v(" "),t("li",[e._v("Application can set a monitor on certain APIs, if necessary.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_client.cadence_latency.95percentile{$Domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}\n")])])]),t("h3",{attrs:{id:"service-api-breakdown"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#service-api-breakdown"}},[e._v("#")]),e._v(" Service API Breakdown")]),e._v(" "),t("ul",[t("li",[e._v("A counter breakdown by API to help investigate availability")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_request{$Domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}.as_count()\n")])])]),t("h3",{attrs:{id:"service-api-error-breakdown"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#service-api-error-breakdown"}},[e._v("#")]),e._v(" Service API Error Breakdown")]),e._v(" "),t("ul",[t("li",[e._v("A counter breakdown by API error to help investigate availability")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_client.cadence_error{$Domain} by {cadence_metric_scope}.as_count()\n")])])]),t("h3",{attrs:{id:"max-event-blob-size"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#max-event-blob-size"}},[e._v("#")]),e._v(" Max Event Blob size")]),e._v(" "),t("ul",[t("li",[e._v("The size of a single history event. This applies to any event input, like a start workflow event, start activity event, or signal event.\nBy default the max size is 2MB. If the input is greater than the max size, the server will reject the request.\nIt should never be greater than 2MB.")]),e._v(" "),t("li",[e._v("A monitor should be set on this metric.")]),e._v(" "),t("li",[e._v("When fired, please review the design/code ASAP to reduce the blob size. 
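For example, a claim-check pattern (a general design suggestion, not a Cadence feature) stores the large payload in external blob storage and passes only a small reference such as s3://bucket/key through the workflow/activity input. 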
Reducing the input/output of workflow/activity/signal will help.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("max:cadence_history.event_blob_size.quantile{!domain:all,$Domain} by {domain}\n")])])]),t("h3",{attrs:{id:"max-history-size"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#max-history-size"}},[e._v("#")]),e._v(" Max History Size")]),e._v(" "),t("ul",[t("li",[e._v("Workflow history cannot grow indefinitely. It will cause replay issues.\nIf the workflow exceeds the history’s max size, the workflow will be terminated automatically. The max size by default is 200 megabytes.\nAs a suggestion for workflow design, workflow history should never grow greater than 50MB. Use continueAsNew to break long workflows into multiple runs.")]),e._v(" "),t("li",[e._v("A monitor should be set on this metric.")]),e._v(" "),t("li",[e._v("When fired, please review the design/code ASAP to reduce the history size. Reducing the input/output of workflow/activity/signal will help. Also you may need to use ContinueAsNew to break a single execution into smaller pieces.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("max:cadence_history.history_size.quantile{!domain:all,$Domain} by {domain}\n")])])]),t("h3",{attrs:{id:"max-history-length"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#max-history-length"}},[e._v("#")]),e._v(" Max History Length")]),e._v(" "),t("ul",[t("li",[e._v("The number of events of workflow history.\nIt should never be greater than 50K(workflows exceeding 200K events will be terminated by the server). Use continueAsNew to break long workflows into multiple runs.")]),e._v(" "),t("li",[e._v("A monitor should be set on this metric.")]),e._v(" "),t("li",[e._v("When fired, please review the design/code ASAP to reduce the history length. You may need to use ContinueAsNew to break a single execution into smaller pieces.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("max:cadence_history.history_count.quantile{!domain:all,$Domain} by {domain}\n")])])]),t("h2",{attrs:{id:"cadence-history-service-monitoring"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-history-service-monitoring"}},[e._v("#")]),e._v(" Cadence History Service Monitoring")]),e._v(" "),t("p",[e._v("History is the most critical/core service for Cadence; it implements the workflow logic.")]),e._v(" "),t("h3",{attrs:{id:"history-shard-movements"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#history-shard-movements"}},[e._v("#")]),e._v(" History shard movements")]),e._v(" "),t("ul",[t("li",[e._v("Shard movements should only happen during deployment or when a node restarts.\nIf there’s shard movement without a deployment, that’s unexpected and there’s probably a performance issue. 
Each shard is owned by a particular history host, so if shards keep moving it’ll be hard for the frontend service to locate a shard and route requests to it.")]),e._v(" "),t("li",[e._v("A monitor can be set to be alerted on shard movements without deployment.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_history.membership_changed_count{operation:shardcontroller}\nsum:cadence_history.shard_closed_count{operation:shardcontroller}\nsum:cadence_history.sharditem_created_count{operation:shardcontroller}\nsum:cadence_history.sharditem_removed_count{operation:shardcontroller}\n")])])]),t("h3",{attrs:{id:"transfer-tasks-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#transfer-tasks-per-second"}},[e._v("#")]),e._v(" Transfer Tasks Per Second")]),e._v(" "),t("ul",[t("li",[e._v("TransferTask is an internal background task that moves workflow state and transfers an action task from the history engine to another service(e.g. Matching service, ElasticSearch, etc)")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_history.task_requests{operation:transferactivetask*} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"timer-tasks-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#timer-tasks-per-second"}},[e._v("#")]),e._v(" Timer Tasks Per Second")]),e._v(" "),t("ul",[t("li",[e._v("Timer tasks are tasks that are scheduled to be triggered at a given time in the future. For example, workflow.sleep() will wait a given amount of time, then the task will be pushed somewhere for a worker to pick up.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_history.task_requests{operation:timeractivetask*} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"transfer-tasks-per-domain"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#transfer-tasks-per-domain"}},[e._v("#")]),e._v(" Transfer Tasks Per Domain")]),e._v(" "),t("ul",[t("li",[e._v("Count breakdown by domain")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_history.task_requests_per_domain{operation:transferactive*} by {domain}.as_count()\n")])])]),t("h3",{attrs:{id:"timer-tasks-per-domain"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#timer-tasks-per-domain"}},[e._v("#")]),e._v(" Timer Tasks Per Domain")]),e._v(" "),t("ul",[t("li",[e._v("Count breakdown by domain")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_history.task_requests_per_domain{operation:timeractive*} by {domain}.as_count()\n")])])]),t("h3",{attrs:{id:"transfer-latency-by-type"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#transfer-latency-by-type"}},[e._v("#")]),e._v(" Transfer Latency by Type")]),e._v(" "),t("ul",[t("li",[e._v("If latency is too high then it’s an issue for a workflow. 
For example, if transfer task latency is 5 seconds, then it takes 5 seconds for an activity/decision to actually receive the task.")]),e._v(" "),t("li",[e._v("Monitor should be set on different types of latency. Note that "),t("code",[e._v("queue_latency")]),e._v(" can go very high during deployment and it's expected. See the NOTE below for an explanation.")]),e._v(" "),t("li",[e._v("When fired, check if it’s due to some persistence issue.\nIf so, then investigate the database(may need to scale up)\nIf not, then see if the Cadence deployment needs to be scaled up(K8s instances)")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_history.task_latency.quantile{$pXXLatency,operation:transfer*} by {operation}\navg:cadence_history.task_latency_processing.quantile{$pXXLatency,operation:transfer*} by {operation}\navg:cadence_history.task_latency_queue.quantile{$pXXLatency,operation:transfer*} by {operation}\n")])])]),t("h3",{attrs:{id:"timer-task-latency-by-type"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#timer-task-latency-by-type"}},[e._v("#")]),e._v(" Timer Task Latency by type")]),e._v(" "),t("ul",[t("li",[e._v("If latency is too high then it’s an issue for a workflow. For example, if you set workflow.sleep() for 10 seconds and the timer latency is 5 secs, then the workflow will sleep for 15 seconds.")]),e._v(" "),t("li",[e._v("Monitor should be set on different types of latency.")]),e._v(" "),t("li",[e._v("When fired, check if it’s due to some persistence issue.\nIf so, then investigate the database(may need to scale up) [Mostly]\nIf not, then see if the Cadence deployment needs to be scaled up(K8s instances)")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_history.task_latency.quantile{$pXXLatency,operation:timer*} by {operation}\navg:cadence_history.task_latency_processing.quantile{$pXXLatency,operation:timer*} by {operation}\navg:cadence_history.task_latency_queue.quantile{$pXXLatency,operation:timer*} by {operation}\n")])])]),t("h3",{attrs:{id:"note-task-queue-latency-vs-executing-latency-vs-processing-latency-in-transfer-timer-task-latency-metrics"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#note-task-queue-latency-vs-executing-latency-vs-processing-latency-in-transfer-timer-task-latency-metrics"}},[e._v("#")]),e._v(" NOTE: Task Queue Latency vs Executing Latency vs Processing Latency In Transfer & Timer Task Latency Metrics")]),e._v(" "),t("ul",[t("li",[t("code",[e._v("task_latency_queue")]),e._v(": “Queue Latency” is “end to end” latency for users. The latency could go to several minutes during deployment because of metrics being re-emitted (but the actual latency is not that high)")]),e._v(" "),t("li",[t("code",[e._v("task_latency")]),e._v(": “Executing latency” is the time from submission to the executing pool until completion. 
It includes scheduling, retry and processing time of the task.")]),e._v(" "),t("li",[t("code",[e._v("task_latency_processing")]),e._v(": “Processing latency” is the processing time of a single attempt of the task(without retry)")])]),e._v(" "),t("h3",{attrs:{id:"transfer-task-latency-per-domain"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#transfer-task-latency-per-domain"}},[e._v("#")]),e._v(" Transfer Task Latency Per Domain")]),e._v(" "),t("ul",[t("li",[e._v("Latency breakdown by domain")]),e._v(" "),t("li",[e._v("No monitor needed.")]),e._v(" "),t("li",[e._v("Datadog query example: modify the above queries to use the domain tag.")])]),e._v(" "),t("h3",{attrs:{id:"timer-task-latency-per-domain"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#timer-task-latency-per-domain"}},[e._v("#")]),e._v(" Timer Task Latency Per Domain")]),e._v(" "),t("ul",[t("li",[e._v("Latency breakdown by domain")]),e._v(" "),t("li",[e._v("No monitor needed.")]),e._v(" "),t("li",[e._v("Datadog query example: modify the above queries to use the domain tag.")])]),e._v(" "),t("h3",{attrs:{id:"history-api-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#history-api-per-second"}},[e._v("#")]),e._v(" History API per Second")]),e._v(" "),t("p",[e._v("Information about the history API\nDatadog query example")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_history.cadence_requests{*} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"history-api-errors-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#history-api-errors-per-second"}},[e._v("#")]),e._v(" History API Errors per Second")]),e._v(" "),t("ul",[t("li",[e._v("Information about the history API")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_history.cadence_errors{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_bad_request{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_domain_not_active{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_service_busy{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_entity_not_exists{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_execution_already_started{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_domain_already_exists{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_query_failed{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_limit_exceeded{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_context_timeout{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_retry_task{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_bad_binary{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_client_version_not_supported{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_incomplete_history{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_nondeterministic{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_unauthorized{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_authorize_failed{*} by {operation}.as_rate() 
\nsum:cadence_history.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate() \n")])])]),t("ul",[t("li",[t("code",[e._v("cadence_errors")]),e._v(" counts internal service errors.")]),e._v(" "),t("li",[e._v("any "),t("code",[e._v("cadence_errors_*")]),e._v(" is a client-side error")])]),e._v(" "),t("h3",{attrs:{id:"max-history-size-2"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#max-history-size-2"}},[e._v("#")]),e._v(" Max History Size")]),e._v(" "),t("p",[e._v("The history size of the workflow cannot be too large, otherwise it will cause performance issues during replay. The soft limit is 200MB. If exceeded, the workflow will be terminated by the server.")]),e._v(" "),t("ul",[t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query is same as the client section")])]),e._v(" "),t("h3",{attrs:{id:"max-history-length-2"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#max-history-length-2"}},[e._v("#")]),e._v(" Max History Length")]),e._v(" "),t("p",[e._v("Similarly, the history length of the workflow cannot be too large, otherwise it will cause performance issues during replay. The soft limit is 200K events. If exceeded, the workflow will be terminated by the server.")]),e._v(" "),t("ul",[t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query is same as the client section")])]),e._v(" "),t("h3",{attrs:{id:"max-event-blob-size-2"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#max-event-blob-size-2"}},[e._v("#")]),e._v(" Max Event Blob Size")]),e._v(" "),t("ul",[t("li",[e._v("The size of each event(e.g. decided by input/output of workflow/activity/signal/childWorkflow/etc) cannot be too large, otherwise it will also cause performance issues. The soft limit is 2MB. If exceeded, the requests will be rejected by the server, meaning that the workflow won’t be able to make any progress.")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query is same as the client section")])]),e._v(" "),t("h2",{attrs:{id:"cadence-matching-service-monitoring"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-matching-service-monitoring"}},[e._v("#")]),e._v(" Cadence Matching Service Monitoring")]),e._v(" "),t("p",[e._v("The matching service matches/assigns tasks from the Cadence service to workers. Matching gets the tasks from the history service. If workers are active, the task will be matched immediately; it’s called “sync match”. 
If workers are not available, matching will persist the tasks into the database and then reload them when workers are back(called “async match”)")]),e._v(" "),t("h3",{attrs:{id:"matching-apis-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#matching-apis-per-second"}},[e._v("#")]),e._v(" Matching APIs per Second")]),e._v(" "),t("ul",[t("li",[e._v("APIs processed by the matching service per second")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_matching.cadence_requests{*} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"matching-api-errors-per-second"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#matching-api-errors-per-second"}},[e._v("#")]),e._v(" Matching API Errors per Second")]),e._v(" "),t("ul",[t("li",[e._v("API errors from the matching service per second")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_matching.cadence_errors_per_tl{*} by {operation,domain,tasklist}.as_rate()\nsum:cadence_matching.cadence_errors_bad_request_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_bad_request{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_domain_not_active_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_domain_not_active{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_service_busy_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_service_busy{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_entity_not_exists_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_entity_not_exists{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_execution_already_started_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_execution_already_started{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_domain_already_exists_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_domain_already_exists{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_cancellation_already_requested_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_cancellation_already_requested{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_query_failed_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_query_failed{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_limit_exceeded_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_limit_exceeded{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_context_timeout_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_context_timeout{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_retry_task_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_retry_task{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_bad_binary_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_bad_binary{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_client_version_not_supported_per_tl{*} by 
{operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_client_version_not_supported{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_incomplete_history_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_incomplete_history{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_nondeterministic_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_nondeterministic{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_unauthorized_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_unauthorized{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_authorize_failed_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_authorize_failed{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_remote_syncmatch_failed_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_remote_syncmatch_failed{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_shard_ownership_lost{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_event_already_started{*} by {operation,domain,tasklist}\n")])])]),t("ul",[t("li",[t("code",[e._v("cadence_errors")]),e._v(" counts internal service errors.")]),e._v(" "),t("li",[e._v("any "),t("code",[e._v("cadence_errors_*")]),e._v(" is a client-side error")])]),e._v(" "),t("h3",{attrs:{id:"matching-regular-api-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#matching-regular-api-latency"}},[e._v("#")]),e._v(" Matching Regular API Latency")]),e._v(" "),t("ul",[t("li",[e._v("Regular APIs are the APIs excluding long polls")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_matching.cadence_latency_per_tl.quantile{$pXXLatency,!operation:pollfor*,!operation:queryworkflow} by {operation,tasklist}\n")])])]),t("h3",{attrs:{id:"sync-match-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#sync-match-latency"}},[e._v("#")]),e._v(" Sync Match Latency")]),e._v(" "),t("ul",[t("li",[e._v("If the latency is too high, the tasklist is probably overloaded. Consider using multiple tasklists, or enable the scalable tasklist feature by adding more partitions to the tasklist(default is one)\nTo confirm whether too many tasks are being added to the tasklist, use “AddTasks per second - domain, tasklist breakdown”")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_matching.syncmatch_latency_per_tl.quantile{$pXXLatency} by {operation,tasklist,domain}\n")])])]),t("h3",{attrs:{id:"async-match-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#async-match-latency"}},[e._v("#")]),e._v(" Async match Latency")]),e._v(" "),t("ul",[t("li",[e._v("If a match is done asynchronously, the task is written to the db to be used later. This measures the time during which the worker is not actively looking for tasks. 
If this is high, more workers are needed.")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_matching.asyncmatch_latency_per_tl.quantile{$pXXLatency} by {operation,tasklist,domain}\n")])])]),t("h2",{attrs:{id:"cadence-default-persistence-monitoring"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-default-persistence-monitoring"}},[e._v("#")]),e._v(" Cadence Default Persistence Monitoring")]),e._v(" "),t("p",[e._v("The following monitors should be set up for Cadence persistence.")]),e._v(" "),t("h3",{attrs:{id:"persistence-availability"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-availability"}},[e._v("#")]),e._v(" Persistence Availability")]),e._v(" "),t("ul",[t("li",[e._v("The availability of the primary database for your Cadence server")]),e._v(" "),t("li",[e._v("Monitor required: below 95% for more than 5 minutes triggers an alert; below 99% triggers a Slack warning")]),e._v(" "),t("li",[e._v("When fired, check if it’s due to some persistence issue.\nIf so, then investigate the database(may need to scale up) [Mostly]\nIf not, then see if the Cadence deployment needs to be scaled up(K8s instances)")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_requests{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors{*} by {operation}.as_count()\nsum:cadence_matching.persistence_requests{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors{*} by {operation}.as_count()\nsum:cadence_history.persistence_requests{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors{*} by {operation}.as_count()\nsum:cadence_worker.persistence_requests{*} by {operation}.as_count()\n(1 - a / b) * 100\n(1 - c / d) * 100\n(1 - e / f) * 100\n(1 - g / h) * 100\n")])])]),t("h3",{attrs:{id:"persistence-by-service-tps"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-by-service-tps"}},[e._v("#")]),e._v(" Persistence By Service TPS")]),e._v(" "),t("ul",[t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.persistence_requests{*}.as_rate()\nsum:cadence_history.persistence_requests{*}.as_rate()\nsum:cadence_worker.persistence_requests{*}.as_rate()\nsum:cadence_matching.persistence_requests{*}.as_rate()\n\n")])])]),t("h3",{attrs:{id:"persistence-by-operation-tps"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-by-operation-tps"}},[e._v("#")]),e._v(" Persistence By Operation TPS")]),e._v(" "),t("ul",[t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.persistence_requests{*} by {operation}.as_rate()\nsum:cadence_history.persistence_requests{*} by {operation}.as_rate()\nsum:cadence_worker.persistence_requests{*} by {operation}.as_rate()\nsum:cadence_matching.persistence_requests{*} by 
{operation}.as_rate()\n\n")])])]),t("h3",{attrs:{id:"persistence-by-operation-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-by-operation-latency"}},[e._v("#")]),e._v(" Persistence By Operation Latency")]),e._v(" "),t("ul",[t("li",[e._v("Monitor required: alert if the p95 latency of all operations is greater than 1 second for 5 minutes; warn if it is greater than 0.5 seconds")]),e._v(" "),t("li",[e._v("When fired, investigate the database(may need to scale up) [Mostly]\nIf there’s a high latency, then there could be errors or something wrong with the db")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_matching.persistence_latency.quantile{$pXXLatency} by {operation}\navg:cadence_worker.persistence_latency.quantile{$pXXLatency} by {operation}\navg:cadence_frontend.persistence_latency.quantile{$pXXLatency} by {operation}\navg:cadence_history.persistence_latency.quantile{$pXXLatency} by {operation}\n")])])]),t("h3",{attrs:{id:"persistence-error-by-operation-count"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-error-by-operation-count"}},[e._v("#")]),e._v(" Persistence Error By Operation Count")]),e._v(" "),t("ul",[t("li",[e._v("It's to help investigate availability issues")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors{*} by {operation}.as_count()\n\nsum:cadence_frontend.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_history.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_domain_already_exists{*} by 
{operation}.as_count()\nsum:cadence_history.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_matching.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_worker.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_bad_request{*} by {operation}.as_count()\n\n")])])]),t("ul",[t("li",[t("code",[e._v("cadence_errors")]),e._v(" counts internal service errors.")]),e._v(" "),t("li",[e._v("any "),t("code",[e._v("cadence_errors_*")]),e._v(" is a client-side error")])]),e._v(" "),t("h2",{attrs:{id:"cadence-advanced-visibility-persistence-monitoring-if-applicable"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-advanced-visibility-persistence-monitoring-if-applicable"}},[e._v("#")]),e._v(" Cadence Advanced Visibility Persistence Monitoring(if applicable)")]),e._v(" "),t("p",[e._v("Kafka & ElasticSearch are only for visibility. 
Only applicable if using advanced visibility.\nFor writing visibility records, the Cadence history service writes the records into Kafka, and then the Cadence worker service reads from Kafka and writes into ElasticSearch(in batches, for performance optimization)\nFor reading visibility records, the frontend service queries ElasticSearch directly.")]),e._v(" "),t("h3",{attrs:{id:"persistence-availability-2"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-availability-2"}},[e._v("#")]),e._v(" Persistence Availability")]),e._v(" "),t("ul",[t("li",[e._v("The availability of the visibility store(Kafka & ElasticSearch) as seen by the Cadence server")]),e._v(" "),t("li",[e._v("Monitor can be set")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()\nsum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_count()\nsum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()\nsum:cadence_history.elasticsearch_requests{*} by {operation}.as_count()\n(1 - a / b) * 100\n(1 - c / d) * 100\n")])])]),t("h3",{attrs:{id:"persistence-by-service-tps-2"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-by-service-tps-2"}},[e._v("#")]),e._v(" Persistence By Service TPS")]),e._v(" "),t("ul",[t("li",[e._v("The rate of persistence API calls by service")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.elasticsearch_requests{*}.as_rate()\nsum:cadence_history.elasticsearch_requests{*}.as_rate()\n")])])]),t("h3",{attrs:{id:"persistence-by-operation-tps-read-es-write-kafka"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-by-operation-tps-read-es-write-kafka"}},[e._v("#")]),e._v(" Persistence By Operation TPS(read: ES, write: Kafka)")]),e._v(" "),t("ul",[t("li",[e._v("The rate of persistence API calls by API")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_rate()\nsum:cadence_history.elasticsearch_requests{*} by {operation}.as_rate()\n")])])]),t("h3",{attrs:{id:"persistence-by-operation-latency-in-seconds-read-es-write-kafka"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-by-operation-latency-in-seconds-read-es-write-kafka"}},[e._v("#")]),e._v(" Persistence By Operation Latency(in seconds) (read: ES, write: Kafka)")]),e._v(" "),t("ul",[t("li",[e._v("The latency of persistence API calls")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("avg:cadence_frontend.elasticsearch_latency.quantile{$pXXLatency} by {operation}\navg:cadence_history.elasticsearch_latency.quantile{$pXXLatency} by {operation}\n")])])]),t("h3",{attrs:{id:"persistence-error-by-operation-count-read-es-write-kafka"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#persistence-error-by-operation-count-read-es-write-kafka"}},[e._v("#")]),e._v(" Persistence Error By Operation Count (read: ES, write: Kafka)")]),e._v(" 
"),t("ul",[t("li",[e._v("The error of persistence API call")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()\nsum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()\n")])])]),t("h3",{attrs:{id:"kafka-es-processor-counter"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#kafka-es-processor-counter"}},[e._v("#")]),e._v(" Kafka->ES processor counter")]),e._v(" "),t("ul",[t("li",[e._v("This is the metrics of a background processing: consuming Kafka messages and then populate to ElasticSearch in batch")]),e._v(" "),t("li",[e._v("Monitor on the running of the background processing(counter metrics is > 0)")]),e._v(" "),t("li",[e._v("When fired, restart Cadence service first to mitigate. Then look at logs to see why the process is stopped(process panic/error/etc).\nMay consider add more pods (replicaCount) to sys-worker service for higher availability")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_worker.es_processor_requests{*} by {operation}.as_count()\nsum:cadence_worker.es_processor_retries{*} by {operation}.as_count()\n")])])]),t("h3",{attrs:{id:"kafka-es-processor-error"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#kafka-es-processor-error"}},[e._v("#")]),e._v(" Kafka->ES processor error")]),e._v(" "),t("ul",[t("li",[e._v("This is the error metrics of the above processing logic\nAlmost all errors are retryable errors so it’s not a problem.")]),e._v(" "),t("li",[e._v("Need to monitor error")]),e._v(" "),t("li",[e._v("When fired, Go to Kibana to find logs about the error details.\nThe most common error is missing the ElasticSearch index field -- an index field is added in dynamicconfig but not in ElasticSearch, or vice versa . 
If so, follow the runbook to add the field to ElasticSearch or dynamic config.")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_worker.es_processor_error{*} by {operation}.as_count()\nsum:cadence_worker.es_processor_corrupted_data{*} by {operation}.as_count()\n")])])]),t("h3",{attrs:{id:"kafka-es-processor-latency"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#kafka-es-processor-latency"}},[e._v("#")]),e._v(" Kafka->ES processor latency")]),e._v(" "),t("ul",[t("li",[e._v("The latency of the processing logic")]),e._v(" "),t("li",[e._v("No monitor needed")]),e._v(" "),t("li",[e._v("Datadog query example")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("sum:cadence_worker.es_processor_process_msg_latency.quantile{$pXXLatency} by {operation}.as_count()\n")])])]),t("h2",{attrs:{id:"cadence-dependency-metrics-monitor-suggestion"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-dependency-metrics-monitor-suggestion"}},[e._v("#")]),e._v(" Cadence Dependency Metrics Monitor suggestion")]),e._v(" "),t("h3",{attrs:{id:"computing-platform-metrics-for-cadence-deployment"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#computing-platform-metrics-for-cadence-deployment"}},[e._v("#")]),e._v(" Computing platform metrics for Cadence deployment")]),e._v(" "),t("p",[e._v("A Cadence server deployed on any computing platform(e.g. Kubernetes) should be monitored on the below metrics:")]),e._v(" "),t("ul",[t("li",[e._v("CPU")]),e._v(" "),t("li",[e._v("Memory")])]),e._v(" "),t("h3",{attrs:{id:"database"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#database"}},[e._v("#")]),e._v(" Database")]),e._v(" "),t("p",[e._v("Depending on which database you use, you should at least monitor the below metrics")]),e._v(" "),t("ul",[t("li",[e._v("Disk Usage")]),e._v(" "),t("li",[e._v("CPU")]),e._v(" "),t("li",[e._v("Memory")]),e._v(" "),t("li",[e._v("Read API latency")]),e._v(" "),t("li",[e._v("Write API Latency")])]),e._v(" "),t("h3",{attrs:{id:"kafka-if-applicable"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#kafka-if-applicable"}},[e._v("#")]),e._v(" Kafka (if applicable)")]),e._v(" "),t("ul",[t("li",[e._v("Disk Usage")]),e._v(" "),t("li",[e._v("CPU")]),e._v(" "),t("li",[e._v("Memory")])]),e._v(" "),t("h3",{attrs:{id:"elasticsearch-if-applicable"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#elasticsearch-if-applicable"}},[e._v("#")]),e._v(" ElasticSearch (if applicable)")]),e._v(" "),t("ul",[t("li",[e._v("Disk Usage")]),e._v(" "),t("li",[e._v("CPU")]),e._v(" "),t("li",[e._v("Memory")])]),e._v(" "),t("h2",{attrs:{id:"cadence-service-slo-recommendation"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cadence-service-slo-recommendation"}},[e._v("#")]),e._v(" Cadence Service SLO Recommendation")]),e._v(" "),t("ul",[t("li",[e._v("Core API availability: 99.9%")]),e._v(" "),t("li",[e._v("Core API latency: <1s")]),e._v(" "),t("li",[e._v("Overall task dispatch latency: <2s (queue_latency for transfer task and timer task)")])])])}),[],!1,null,null,null);t.default=s.exports}}]); \ No newline at end of file diff --git a/assets/js/98.c7edf670.js b/assets/js/98.771c9bac.js similarity index 98% rename from assets/js/98.c7edf670.js rename to assets/js/98.771c9bac.js index dfa55cfb6..7895a8a54 100644 --- a/assets/js/98.c7edf670.js +++ b/assets/js/98.771c9bac.js @@ -1 +1 @@ 
-(window.webpackJsonp=window.webpackJsonp||[]).push([[98],{404:function(e,t,a){"use strict";a.r(t);var s=a(0),o=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"cluster-troubleshooting"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cluster-troubleshooting"}},[e._v("#")]),e._v(" Cluster Troubleshooting")]),e._v(" "),t("p",[e._v("This section is to cover some common operation issues as a RunBook. Feel free to add more, or raise issues in the to ask for more in "),t("a",{attrs:{href:"https://github.com/uber/cadence-docs/issues",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadence-docs"),t("OutboundLink")],1),e._v(" project.Or talk to us in Slack support channel!")]),e._v(" "),t("p",[e._v("We will keep adding more stuff. Any contribution is very welcome.")]),e._v(" "),t("h2",{attrs:{id:"errors"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#errors"}},[e._v("#")]),e._v(" Errors")]),e._v(" "),t("ul",[t("li",[t("code",[e._v("Persistence Max QPS Reached for List Operations")]),e._v(" "),t("ul",[t("li",[e._v("Check metrics to see how many List operations are performed per second on the domain. Alternatively you can enable "),t("code",[e._v("debug")]),e._v(" log level to see more details of how a List request is ratelimited, if it's a staging/QA cluster.")]),e._v(" "),t("li",[e._v("Raise the ratelimiting for the domain if you believe the default ratelimit is too low")])])]),e._v(" "),t("li",[t("code",[e._v("Failed to lock shard. Previous range ID: 132; new range ID: 133")]),e._v(" and "),t("code",[e._v("Failed to update shard. Previous range ID: 210; new range ID: 212")]),e._v(" "),t("ul",[t("li",[e._v("When this keep happening, it's very likely a critical configuration error. Either there are two clusters using the same database, or two clusters are using the same ringpop(bootstrap hosts).")])])])]),e._v(" "),t("h2",{attrs:{id:"api-high-latency-timeout-task-disptaching-slowness-or-too-many-operations-onto-db-and-timeouts"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#api-high-latency-timeout-task-disptaching-slowness-or-too-many-operations-onto-db-and-timeouts"}},[e._v("#")]),e._v(" API high latency, timeout, Task disptaching slowness Or Too many operations onto DB and timeouts")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("If it happens after you attemped to truncate tables inorder to reuse the same database/keyspace for a new cluster, it's possible that the data is not deleted completely. You should make sure to shutdown the Cadence when trucating, and make sure the database is cleaned. Alternatively, use a different keyspace/database is a safer way.")])]),e._v(" "),t("li",[t("p",[e._v("Timeout pushing task to matching engine, e.g. "),t("code",[e._v('"Fail to process task","service":"cadence-history","shard-id":431,"address":"172.31.48.64:7934","component":"transfer-queue-processor","cluster-name":"active","shard-id":431,"queue-task-id":590357768,"queue-task-visibility-timestamp":1637356594382077880,"xdc-failover-version":-24,"queue-task-type":0,"wf-domain-id":"f4d6824f-9d24-4a82-81e0-e0e080be4c21","wf-id":"55d64d58-e398-4bf5-88bc-a4696a2ba87f:63ed7cda-afcf-41cd-9d5a-ee5e1b0f2844","wf-run-id":"53b52ee0-3218-418e-a9bf-7768e671f9c1","error":"code:deadline-exceeded message:timeout","lifecycle":"ProcessingFailed","logging-call-at":"task.go:331"')])]),e._v(" "),t("ul",[t("li",[e._v("If this happens after traffic increased for a certain domain, it's likely that a tasklist is overloaded. 
Consider "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#scale-up-a-tasklist-using-scalable-tasklist-feature"}},[e._v("scale up the tasklist")])],1)])]),e._v(" "),t("li",[t("p",[e._v("If the request volume aligned with the traffic increased on all domain, consider "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#scale-up-down-cluster"}},[e._v("scale up the cluster")])],1)])])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[98],{405:function(e,t,a){"use strict";a.r(t);var s=a(0),o=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"cluster-troubleshooting"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#cluster-troubleshooting"}},[e._v("#")]),e._v(" Cluster Troubleshooting")]),e._v(" "),t("p",[e._v("This section is to cover some common operation issues as a RunBook. Feel free to add more, or raise issues in the to ask for more in "),t("a",{attrs:{href:"https://github.com/uber/cadence-docs/issues",target:"_blank",rel:"noopener noreferrer"}},[e._v("cadence-docs"),t("OutboundLink")],1),e._v(" project.Or talk to us in Slack support channel!")]),e._v(" "),t("p",[e._v("We will keep adding more stuff. Any contribution is very welcome.")]),e._v(" "),t("h2",{attrs:{id:"errors"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#errors"}},[e._v("#")]),e._v(" Errors")]),e._v(" "),t("ul",[t("li",[t("code",[e._v("Persistence Max QPS Reached for List Operations")]),e._v(" "),t("ul",[t("li",[e._v("Check metrics to see how many List operations are performed per second on the domain. Alternatively you can enable "),t("code",[e._v("debug")]),e._v(" log level to see more details of how a List request is ratelimited, if it's a staging/QA cluster.")]),e._v(" "),t("li",[e._v("Raise the ratelimiting for the domain if you believe the default ratelimit is too low")])])]),e._v(" "),t("li",[t("code",[e._v("Failed to lock shard. Previous range ID: 132; new range ID: 133")]),e._v(" and "),t("code",[e._v("Failed to update shard. Previous range ID: 210; new range ID: 212")]),e._v(" "),t("ul",[t("li",[e._v("When this keep happening, it's very likely a critical configuration error. Either there are two clusters using the same database, or two clusters are using the same ringpop(bootstrap hosts).")])])])]),e._v(" "),t("h2",{attrs:{id:"api-high-latency-timeout-task-disptaching-slowness-or-too-many-operations-onto-db-and-timeouts"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#api-high-latency-timeout-task-disptaching-slowness-or-too-many-operations-onto-db-and-timeouts"}},[e._v("#")]),e._v(" API high latency, timeout, Task disptaching slowness Or Too many operations onto DB and timeouts")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("If it happens after you attemped to truncate tables inorder to reuse the same database/keyspace for a new cluster, it's possible that the data is not deleted completely. You should make sure to shutdown the Cadence when trucating, and make sure the database is cleaned. Alternatively, use a different keyspace/database is a safer way.")])]),e._v(" "),t("li",[t("p",[e._v("Timeout pushing task to matching engine, e.g. 
"),t("code",[e._v('"Fail to process task","service":"cadence-history","shard-id":431,"address":"172.31.48.64:7934","component":"transfer-queue-processor","cluster-name":"active","shard-id":431,"queue-task-id":590357768,"queue-task-visibility-timestamp":1637356594382077880,"xdc-failover-version":-24,"queue-task-type":0,"wf-domain-id":"f4d6824f-9d24-4a82-81e0-e0e080be4c21","wf-id":"55d64d58-e398-4bf5-88bc-a4696a2ba87f:63ed7cda-afcf-41cd-9d5a-ee5e1b0f2844","wf-run-id":"53b52ee0-3218-418e-a9bf-7768e671f9c1","error":"code:deadline-exceeded message:timeout","lifecycle":"ProcessingFailed","logging-call-at":"task.go:331"')])]),e._v(" "),t("ul",[t("li",[e._v("If this happens after traffic increased for a certain domain, it's likely that a tasklist is overloaded. Consider "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#scale-up-a-tasklist-using-scalable-tasklist-feature"}},[e._v("scale up the tasklist")])],1)])]),e._v(" "),t("li",[t("p",[e._v("If the request volume aligned with the traffic increased on all domain, consider "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#scale-up-down-cluster"}},[e._v("scale up the cluster")])],1)])])])}),[],!1,null,null,null);t.default=o.exports}}]); \ No newline at end of file diff --git a/assets/js/99.e04e2276.js b/assets/js/99.7e9e1999.js similarity index 99% rename from assets/js/99.e04e2276.js rename to assets/js/99.7e9e1999.js index 65375be71..35127ee36 100644 --- a/assets/js/99.e04e2276.js +++ b/assets/js/99.7e9e1999.js @@ -1 +1 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[99],{405:function(e,t,a){"use strict";a.r(t);var s=a(0),r=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"migrate-cadence-cluster"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#migrate-cadence-cluster"}},[e._v("#")]),e._v(" Migrate Cadence cluster.")]),e._v(" "),t("p",[e._v("There could be some reasons that you need to migrate Cadence clusters:")]),e._v(" "),t("ul",[t("li",[e._v("Migrate to different storage, for example from Postgres/MySQL to Cassandra, or using multiple SQL database as a sharded SQL cluster for Cadence")]),e._v(" "),t("li",[e._v("Split traffic")]),e._v(" "),t("li",[e._v("Datacenter migration")]),e._v(" "),t("li",[e._v("Scale up -- to change numOfHistoryShards.")])]),e._v(" "),t("p",[e._v("Below is two different approaches for migrating a cluster.")]),e._v(" "),t("h2",{attrs:{id:"migrate-with-naive-approach"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#migrate-with-naive-approach"}},[e._v("#")]),e._v(" Migrate with naive approach")]),e._v(" "),t("ol",[t("li",[e._v("Set up a new Cadence cluster")]),e._v(" "),t("li",[e._v("Connect client workers to both old and new clusters")]),e._v(" "),t("li",[e._v("Change workflow code to start new workflows only in the new cluster")]),e._v(" "),t("li",[e._v("Wait for all old workflows to finish in the old cluster")]),e._v(" "),t("li",[e._v("Shutdown the old Cadence cluster and stop the client workers from connecting to it.")])]),e._v(" "),t("p",[e._v("NOTE 1: With this approach, workflow history/visibility will not be migrated to new cluster.")]),e._v(" "),t("p",[e._v("NOTE 2: This is the only way to migrate a local domain, because a local domain cannot be converted to a global domain, even after a cluster enables XDC feature.")]),e._v(" "),t("p",[e._v("NOTE 3: Starting from "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.22.0",target:"_blank",rel:"noopener 
noreferrer"}},[e._v("version 0.22.0"),t("OutboundLink")],1),e._v(", global domain is preferred/recommended. Please ensure you create and use global domains only.\nIf you are using local domains, an easy way is to create a global domain and migrate to the new global domain using the above steps.")]),e._v(" "),t("h2",{attrs:{id:"migrate-with-global-domain-replication-feature"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#migrate-with-global-domain-replication-feature"}},[e._v("#")]),e._v(" Migrate with "),t("RouterLink",{attrs:{to:"/docs/concepts/cross-dc-replication/#running-in-production"}},[e._v("Global Domain Replication")]),e._v(" feature")],1),e._v(" "),t("p",[e._v("NOTE 1: If a domain are NOT a global domain, you cannot use the XDC feature to migrate. The only way is to migrate in a "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#migrate-cadence-cluster"}},[e._v("naive approach")])],1),e._v(" "),t("p",[e._v("NOTE 2: Only migrating to the same numHistoryShards is allowed.")]),e._v(" "),t("h3",{attrs:{id:"step-0-verify-clusters-setup-is-correct"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-0-verify-clusters-setup-is-correct"}},[e._v("#")]),e._v(" Step 0 - Verify clusters' setup is correct")]),e._v(" "),t("ul",[t("li",[e._v("Make sure the new cluster doesn’t already have the domain names that needs to be migrated (otherwise domain replication would fail).")])]),e._v(" "),t("p",[e._v("To get all the domains from current cluster:")]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[e._v("cadence "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("currentClusterAddress"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" admin domain list\n")])])]),t("p",[e._v("Then\nFor each global domain")]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[e._v("cadence "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("newClusterAddress"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("domain_name"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" domain describe\n")])])]),t("p",[e._v("to make sure it doesn't exist in the new cluster.")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("Target replication cluster should have numHistoryShards >= source cluster")])]),e._v(" "),t("li",[t("p",[e._v("Target cluster should have the same search attributes enabled in dynamic configuration and in ElasticSearch.")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("Check the dynamic configuration to see if they have the same list of "),t("code",[e._v("frontend.validSearchAttributes")]),e._v(". 
If any is missing in the new cluster, update the dynamic config for the new cluster.")])]),e._v(" "),t("li",[t("p",[e._v("Check results of the below command to make sure that the ES fields matched with the dynamic configuration")])])])])]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[t("span",{pre:!0,attrs:{class:"token function"}},[e._v("curl")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-u")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("UNAME"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(":"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("PW"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-X")]),e._v(" GET https://"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("ES_HOST_OF_NEW_CLUSTER"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("/cadence-visibility-index "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-H")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'Content-Type: application/json'")]),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" jq "),t("span",{pre:!0,attrs:{class:"token builtin class-name"}},[e._v(".")]),e._v("\n")])])]),t("p",[e._v("If any search attribute is missing, add the missing search attributes to target cluster.")]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[e._v("cadence "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("newClusterAddress"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" adm cluster add-search-attr "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--search_attr_key")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<>")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--search_attr_type")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<>")]),e._v("\n")])])]),t("h3",{attrs:{id:"step-1-connect-the-two-clusters-using-global-domain-replication-feature"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-1-connect-the-two-clusters-using-global-domain-replication-feature"}},[e._v("#")]),e._v(" Step 1 - Connect the two clusters using global domain(replication) feature")]),e._v(" "),t("p",[e._v("Include the Cluster Information for both the old and new clusters in the ClusterMetadata config of both clusters.\nExample config for currentCluster")]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("dcRedirectionPolicy")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("policy")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"all-domain-apis-forwarding"')]),e._v(" "),t("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# use selected-apis-forwarding if using older versions don't support this policy")]),e._v("\n\n"),t("span",{pre:!0,attrs:{class:"token key 
atrule"}},[e._v("clusterMetadata")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enableGlobalDomain")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("failoverVersionIncrement")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("10")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("masterClusterName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("currentClusterName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterInformation")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("0")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n")])])]),t("p",[e._v("for newClusterName:")]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key 
atrule"}},[e._v("dcRedirectionPolicy")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("policy")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"all-domain-apis-forwarding"')]),e._v("\n\n"),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterMetadata")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enableGlobalDomain")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("failoverVersionIncrement")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("10")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("masterClusterName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("currentClusterName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterInformation")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("0")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key 
atrule"}},[e._v("rpcAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n")])])]),t("p",[e._v("Deploy the config.\nIn older versions(<= v0.22), only "),t("code",[e._v("selected-apis-forwarding")]),e._v(" is supported. This would require you to deploy a different set of workflow/activity connected to the new Cadence cluster during migration, if high availability/seamless migration is required. Because "),t("code",[e._v("selected-apis-forwarding")]),e._v(" only forwarding the non-worker APIs.")]),e._v(" "),t("p",[e._v("With "),t("code",[e._v("all-domain-apis-forwarding")]),e._v(" policy, all worker + non-worker APIs are forwarded by Cadence cluster. You don't need to make any deployment change to your workflow/activity workers during migration. Once migration, let all workers connect to the new Cadence cluster before removing/shutdown the old cluster.")]),e._v(" "),t("p",[e._v("Therefore, it's recommended to upgrade your Cadence cluster to a higher version with "),t("code",[e._v("all-domain-apis-forwarding")]),e._v(" policy supported. The below steps assuming you are using this policy.")]),e._v(" "),t("h3",{attrs:{id:"step-2-test-replicating-one-domain"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-2-test-replicating-one-domain"}},[e._v("#")]),e._v(" Step 2 - Test Replicating one domain")]),e._v(" "),t("p",[e._v("First of all, try replicating a single domain to make sure everything work. Here uses "),t("code",[e._v("domain update")]),e._v(" to failover, you can also use "),t("code",[e._v("managed failover")]),e._v(" feature to failover. You may use some testing domains for this like "),t("code",[e._v("cadence-canary")]),e._v(".")]),e._v(" "),t("ul",[t("li",[e._v("2.1 Assuming the domain only contain "),t("code",[e._v("currentCluster")]),e._v(" in the cluster list, let's add the new cluster to the domain.")])]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[e._v("cadence "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("currentClusterAddress"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("domain_name"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" domain update "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--clusters")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("currentClusterName"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("newClusterName"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),t("p",[e._v("Run the command below to refresh the domain after adding a new cluster to the cluster list; we need to update the active_cluster to the same value that it appears to be.")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do domain update --active_cluster \n")])])]),t("ul",[t("li",[e._v("2.2 failover the domain to be active in new cluster")])]),e._v(" "),t("div",{staticClass:"language- 
extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow-prototype domain update --active_cluster \n")])])]),t("p",[e._v("Use the domain describe command to verify the entire domain is replicated to the new cluster.")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do domain describe\n")])])]),t("p",[e._v("Find an open workflowID that we want to replicate (you can get it from the UI). Use this command to describe it to make sure it’s open and running:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow describe --workflow_id \n")])])]),t("p",[e._v("Run a signal command against any workflow and check that it was replicated to the new cluster. Example:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow signal --workflow_id --name \n")])])]),t("p",[e._v("This command will send a noop signal to workflows to trigger a decision, which will trigger history replication if needed.")]),e._v(" "),t("p",[e._v("Verify the workflow is replicated in the new cluster")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --st --do workflow describe --workflow_id \n")])])]),t("p",[e._v("Also compare the history between the two clusters:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow show --workflow_id \n")])])]),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow show --workflow_id \n")])])]),t("h3",{attrs:{id:"step-3-start-to-replicate-all-domains"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-3-start-to-replicate-all-domains"}},[e._v("#")]),e._v(" Step 3 - Start to replicate all domains")]),e._v(" "),t("p",[e._v("You can repeat Step 2 for all the domains. Or you can use the managed failover feature to failover all the domains in the cluster with a single command. See more details in the "),t("a",{attrs:{href:"/docs/concepts/cross-dc-replication"}},[e._v("global domain documentation")]),e._v(".")]),e._v(" "),t("p",[e._v("Because replication cannot be triggered without a decision. Again best way is to send a garbage signal to all the workflows.")]),e._v(" "),t("p",[e._v("If advanced visibility is enabled, then use batch signal command to start a batch job to trigger replication for all open workflows:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow batch start --batch_type signal --query “CloseTime = missing” --signal_name --reason --input --yes\n")])])]),t("p",[e._v("Watch metrics & dashboard while this is happening. Also observe the signal batch job to make sure it's completed.")]),e._v(" "),t("h3",{attrs:{id:"step-4-complete-the-migration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-4-complete-the-migration"}},[e._v("#")]),e._v(" Step 4 - Complete the migration")]),e._v(" "),t("p",[e._v("After a few days, make sure everything is stable on the new cluster. 
The old cluster should only be forwarding requests to new cluster.")]),e._v(" "),t("p",[e._v("A few things need to do in order to shutdown the old cluster.")]),e._v(" "),t("ul",[t("li",[e._v("Migrate all applications to connect to the frontend of new cluster instead of relying on the forwarding")]),e._v(" "),t("li",[e._v("Watch metric dashboard to make sure no any traffic is happening on the old cluster")]),e._v(" "),t("li",[e._v("Delete the old cluster from domain cluster list. This needs to be done for every domain.")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do domain update --clusters \n")])])]),t("ul",[t("li",[e._v("Delete the old cluster from the configuration of the new cluster.")])]),e._v(" "),t("p",[e._v("Once above is done, you can shutdown the old cluster safely.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file +(window.webpackJsonp=window.webpackJsonp||[]).push([[99],{406:function(e,t,a){"use strict";a.r(t);var s=a(0),r=Object(s.a)({},(function(){var e=this,t=e._self._c;return t("ContentSlotsDistributor",{attrs:{"slot-key":e.$parent.slotKey}},[t("h1",{attrs:{id:"migrate-cadence-cluster"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#migrate-cadence-cluster"}},[e._v("#")]),e._v(" Migrate Cadence cluster")]),e._v(" "),t("p",[e._v("There are several reasons why you might need to migrate Cadence clusters:")]),e._v(" "),t("ul",[t("li",[e._v("Migrating to different storage, for example from Postgres/MySQL to Cassandra, or using multiple SQL databases as a sharded SQL cluster for Cadence")]),e._v(" "),t("li",[e._v("Splitting traffic")]),e._v(" "),t("li",[e._v("Datacenter migration")]),e._v(" "),t("li",[e._v("Scaling up -- to change numOfHistoryShards.")])]),e._v(" "),t("p",[e._v("Below are two different approaches for migrating a cluster.")]),e._v(" "),t("h2",{attrs:{id:"migrate-with-naive-approach"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#migrate-with-naive-approach"}},[e._v("#")]),e._v(" Migrate with naive approach")]),e._v(" "),t("ol",[t("li",[e._v("Set up a new Cadence cluster")]),e._v(" "),t("li",[e._v("Connect client workers to both old and new clusters")]),e._v(" "),t("li",[e._v("Change workflow code to start new workflows only in the new cluster")]),e._v(" "),t("li",[e._v("Wait for all old workflows to finish in the old cluster")]),e._v(" "),t("li",[e._v("Shut down the old Cadence cluster and stop the client workers from connecting to it.")])]),e._v(" "),t("p",[e._v("NOTE 1: With this approach, workflow history/visibility will not be migrated to the new cluster.")]),e._v(" "),t("p",[e._v("NOTE 2: This is the only way to migrate a local domain, because a local domain cannot be converted to a global domain, even after a cluster enables the XDC feature.")]),e._v(" "),t("p",[e._v("NOTE 3: Starting from "),t("a",{attrs:{href:"https://github.com/uber/cadence/releases/tag/v0.22.0",target:"_blank",rel:"noopener noreferrer"}},[e._v("version 0.22.0"),t("OutboundLink")],1),e._v(", global domain is preferred/recommended. Please ensure you create and use global domains only.\nIf you are using local domains, an easy way is to create a global domain and migrate to the new global domain using the above steps.")]),e._v(" "),
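To start new domains as global domains from the beginning, they can be registered with the global flag set and both clusters listed. This is a hedged sketch only: it assumes the `domain register` command accepts the same `--clusters` and `--active_cluster` flags that `domain update` uses elsewhere in this guide, and all bracketed names are placeholders.

```bash
# Register a new domain as a global domain, listing both clusters and
# keeping <currentClusterName> active initially.
cadence --address <currentClusterAddress> --do <new_domain_name> \
  domain register \
  --global_domain true \
  --clusters <currentClusterName> <newClusterName> \
  --active_cluster <currentClusterName>
```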
t("h2",{attrs:{id:"migrate-with-global-domain-replication-feature"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#migrate-with-global-domain-replication-feature"}},[e._v("#")]),e._v(" Migrate with "),t("RouterLink",{attrs:{to:"/docs/concepts/cross-dc-replication/#running-in-production"}},[e._v("Global Domain Replication")]),e._v(" feature")],1),e._v(" "),t("p",[e._v("NOTE 1: If a domain is NOT a global domain, you cannot use the XDC feature to migrate it. The only way is to migrate with the "),t("RouterLink",{attrs:{to:"/docs/operation-guide/maintain/#migrate-cadence-cluster"}},[e._v("naive approach")])],1),e._v(" "),t("p",[e._v("NOTE 2: Only migrating to the same numHistoryShards is allowed.")]),e._v(" "),t("h3",{attrs:{id:"step-0-verify-clusters-setup-is-correct"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-0-verify-clusters-setup-is-correct"}},[e._v("#")]),e._v(" Step 0 - Verify clusters' setup is correct")]),e._v(" "),t("ul",[t("li",[e._v("Make sure the new cluster doesn’t already have the domain names that need to be migrated (otherwise domain replication would fail).")])]),e._v(" "),t("p",[e._v("To get all the domains from the current cluster:")]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[e._v("cadence "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("currentClusterAddress"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" admin domain list\n")])])]),t("p",[e._v("Then, for each global domain, run")]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[e._v("cadence "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("newClusterAddress"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("domain_name"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" domain describe\n")])])]),t("p",[e._v("to make sure it doesn't exist in the new cluster.")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("The target replication cluster should have numHistoryShards >= the source cluster")])]),e._v(" "),t("li",[t("p",[e._v("The target cluster should have the same search attributes enabled in dynamic configuration and in ElasticSearch.")]),e._v(" "),t("ul",[t("li",[t("p",[e._v("Check the dynamic configuration to see if they have the same list of "),t("code",[e._v("frontend.validSearchAttributes")]),e._v(". 
If any is missing in the new cluster, update the dynamic config for the new cluster.")])]),e._v(" "),t("li",[t("p",[e._v("Check results of the below command to make sure that the ES fields matched with the dynamic configuration")])])])])]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[t("span",{pre:!0,attrs:{class:"token function"}},[e._v("curl")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-u")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("UNAME"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(":"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("PW"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-X")]),e._v(" GET https://"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("ES_HOST_OF_NEW_CLUSTER"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("/cadence-visibility-index "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("-H")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v("'Content-Type: application/json'")]),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("|")]),e._v(" jq "),t("span",{pre:!0,attrs:{class:"token builtin class-name"}},[e._v(".")]),e._v("\n")])])]),t("p",[e._v("If any search attribute is missing, add the missing search attributes to target cluster.")]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[e._v("cadence "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("newClusterAddress"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" adm cluster add-search-attr "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--search_attr_key")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<>")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--search_attr_type")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<>")]),e._v("\n")])])]),t("h3",{attrs:{id:"step-1-connect-the-two-clusters-using-global-domain-replication-feature"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-1-connect-the-two-clusters-using-global-domain-replication-feature"}},[e._v("#")]),e._v(" Step 1 - Connect the two clusters using global domain(replication) feature")]),e._v(" "),t("p",[e._v("Include the Cluster Information for both the old and new clusters in the ClusterMetadata config of both clusters.\nExample config for currentCluster")]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("dcRedirectionPolicy")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("policy")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"all-domain-apis-forwarding"')]),e._v(" "),t("span",{pre:!0,attrs:{class:"token comment"}},[e._v("# use selected-apis-forwarding if using older versions don't support this policy")]),e._v("\n\n"),t("span",{pre:!0,attrs:{class:"token key 
atrule"}},[e._v("clusterMetadata")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enableGlobalDomain")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("failoverVersionIncrement")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("10")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("masterClusterName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("currentClusterName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterInformation")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("0")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n")])])]),t("p",[e._v("for newClusterName:")]),e._v(" "),t("div",{staticClass:"language-yaml extra-class"},[t("pre",{pre:!0,attrs:{class:"language-yaml"}},[t("code",[t("span",{pre:!0,attrs:{class:"token key 
atrule"}},[e._v("dcRedirectionPolicy")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("policy")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"all-domain-apis-forwarding"')]),e._v("\n\n"),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterMetadata")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enableGlobalDomain")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("failoverVersionIncrement")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("10")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("masterClusterName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("currentClusterName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("clusterInformation")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("1")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("enabled")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token boolean important"}},[e._v("true")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("initialFailoverVersion")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token number"}},[e._v("0")]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key atrule"}},[e._v("rpcName")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('"cadence-frontend"')]),e._v("\n "),t("span",{pre:!0,attrs:{class:"token key 
atrule"}},[e._v("rpcAddress")]),t("span",{pre:!0,attrs:{class:"token punctuation"}},[e._v(":")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token string"}},[e._v('""')]),e._v("\n")])])]),t("p",[e._v("Deploy the config.\nIn older versions(<= v0.22), only "),t("code",[e._v("selected-apis-forwarding")]),e._v(" is supported. This would require you to deploy a different set of workflow/activity connected to the new Cadence cluster during migration, if high availability/seamless migration is required. Because "),t("code",[e._v("selected-apis-forwarding")]),e._v(" only forwarding the non-worker APIs.")]),e._v(" "),t("p",[e._v("With "),t("code",[e._v("all-domain-apis-forwarding")]),e._v(" policy, all worker + non-worker APIs are forwarded by Cadence cluster. You don't need to make any deployment change to your workflow/activity workers during migration. Once migration, let all workers connect to the new Cadence cluster before removing/shutdown the old cluster.")]),e._v(" "),t("p",[e._v("Therefore, it's recommended to upgrade your Cadence cluster to a higher version with "),t("code",[e._v("all-domain-apis-forwarding")]),e._v(" policy supported. The below steps assuming you are using this policy.")]),e._v(" "),t("h3",{attrs:{id:"step-2-test-replicating-one-domain"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-2-test-replicating-one-domain"}},[e._v("#")]),e._v(" Step 2 - Test Replicating one domain")]),e._v(" "),t("p",[e._v("First of all, try replicating a single domain to make sure everything work. Here uses "),t("code",[e._v("domain update")]),e._v(" to failover, you can also use "),t("code",[e._v("managed failover")]),e._v(" feature to failover. You may use some testing domains for this like "),t("code",[e._v("cadence-canary")]),e._v(".")]),e._v(" "),t("ul",[t("li",[e._v("2.1 Assuming the domain only contain "),t("code",[e._v("currentCluster")]),e._v(" in the cluster list, let's add the new cluster to the domain.")])]),e._v(" "),t("div",{staticClass:"language-bash extra-class"},[t("pre",{pre:!0,attrs:{class:"language-bash"}},[t("code",[e._v("cadence "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--address")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("currentClusterAddress"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--do")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("domain_name"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" domain update "),t("span",{pre:!0,attrs:{class:"token parameter variable"}},[e._v("--clusters")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("currentClusterName"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v(" "),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v("<")]),e._v("newClusterName"),t("span",{pre:!0,attrs:{class:"token operator"}},[e._v(">")]),e._v("\n")])])]),t("p",[e._v("Run the command below to refresh the domain after adding a new cluster to the cluster list; we need to update the active_cluster to the same value that it appears to be.")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do domain update --active_cluster \n")])])]),t("ul",[t("li",[e._v("2.2 failover the domain to be active in new cluster")])]),e._v(" "),t("div",{staticClass:"language- 
extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow-prototype domain update --active_cluster \n")])])]),t("p",[e._v("Use the domain describe command to verify the entire domain is replicated to the new cluster.")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do domain describe\n")])])]),t("p",[e._v("Find an open workflowID that we want to replicate (you can get it from the UI). Use this command to describe it to make sure it’s open and running:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow describe --workflow_id \n")])])]),t("p",[e._v("Run a signal command against any workflow and check that it was replicated to the new cluster. Example:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow signal --workflow_id --name \n")])])]),t("p",[e._v("This command will send a noop signal to workflows to trigger a decision, which will trigger history replication if needed.")]),e._v(" "),t("p",[e._v("Verify the workflow is replicated in the new cluster")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --st --do workflow describe --workflow_id \n")])])]),t("p",[e._v("Also compare the history between the two clusters:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow show --workflow_id \n")])])]),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow show --workflow_id \n")])])]),t("h3",{attrs:{id:"step-3-start-to-replicate-all-domains"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-3-start-to-replicate-all-domains"}},[e._v("#")]),e._v(" Step 3 - Start to replicate all domains")]),e._v(" "),t("p",[e._v("You can repeat Step 2 for all the domains. Or you can use the managed failover feature to failover all the domains in the cluster with a single command. See more details in the "),t("a",{attrs:{href:"/docs/concepts/cross-dc-replication"}},[e._v("global domain documentation")]),e._v(".")]),e._v(" "),t("p",[e._v("Because replication cannot be triggered without a decision. Again best way is to send a garbage signal to all the workflows.")]),e._v(" "),t("p",[e._v("If advanced visibility is enabled, then use batch signal command to start a batch job to trigger replication for all open workflows:")]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do workflow batch start --batch_type signal --query “CloseTime = missing” --signal_name --reason --input --yes\n")])])]),t("p",[e._v("Watch metrics & dashboard while this is happening. Also observe the signal batch job to make sure it's completed.")]),e._v(" "),t("h3",{attrs:{id:"step-4-complete-the-migration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-4-complete-the-migration"}},[e._v("#")]),e._v(" Step 4 - Complete the migration")]),e._v(" "),t("p",[e._v("After a few days, make sure everything is stable on the new cluster. 
t("h3",{attrs:{id:"step-4-complete-the-migration"}},[t("a",{staticClass:"header-anchor",attrs:{href:"#step-4-complete-the-migration"}},[e._v("#")]),e._v(" Step 4 - Complete the migration")]),e._v(" "),t("p",[e._v("After a few days, make sure everything is stable on the new cluster. The old cluster should only be forwarding requests to the new cluster.")]),e._v(" "),t("p",[e._v("A few things need to be done in order to shut down the old cluster.")]),e._v(" "),t("ul",[t("li",[e._v("Migrate all applications to connect to the frontend of the new cluster instead of relying on the forwarding")]),e._v(" "),t("li",[e._v("Watch the metrics dashboard to make sure no traffic is happening on the old cluster")]),e._v(" "),t("li",[e._v("Delete the old cluster from the domain cluster list. This needs to be done for every domain.")])]),e._v(" "),t("div",{staticClass:"language- extra-class"},[t("pre",{pre:!0,attrs:{class:"language-text"}},[t("code",[e._v("cadence --address --do domain update --clusters \n")])])]),t("ul",[t("li",[e._v("Delete the old cluster from the configuration of the new cluster.")])]),e._v(" "),t("p",[e._v("Once the above is done, you can shut down the old cluster safely.")])])}),[],!1,null,null,null);t.default=r.exports}}]); \ No newline at end of file diff --git a/assets/js/app.61e35d83.js b/assets/js/app.1d062fe9.js similarity index 67% rename from assets/js/app.61e35d83.js rename to assets/js/app.1d062fe9.js index d32a67ba3..3fdb57a82 100644 --- a/assets/js/app.61e35d83.js +++ b/assets/js/app.1d062fe9.js @@ -1,10 +1,10 @@
Found:()=>n.e(22).then(n.bind(null,361))},i={"v-0dc9b01d":()=>n.e(23).then(n.bind(null,364)),"v-dd6fb5d2":()=>n.e(24).then(n.bind(null,365)),"v-4100b969":()=>n.e(25).then(n.bind(null,366)),"v-5d913a79":()=>n.e(26).then(n.bind(null,367)),"v-5bc86237":()=>n.e(27).then(n.bind(null,368)),"v-52ad8f77":()=>n.e(28).then(n.bind(null,369)),"v-59a2ac57":()=>n.e(29).then(n.bind(null,370)),"v-586fa1f7":()=>n.e(30).then(n.bind(null,371)),"v-46e2ddd1":()=>n.e(32).then(n.bind(null,372)),"v-2a9dfbe5":()=>n.e(31).then(n.bind(null,373)),"v-151d3dd2":()=>n.e(33).then(n.bind(null,374)),"v-793e7375":()=>n.e(34).then(n.bind(null,375)),"v-5f5271a9":()=>n.e(35).then(n.bind(null,376)),"v-185e9f52":()=>n.e(36).then(n.bind(null,377)),"v-6582ae57":()=>n.e(37).then(n.bind(null,378)),"v-55690947":()=>n.e(39).then(n.bind(null,379)),"v-2315d60a":()=>n.e(12).then(n.bind(null,380)),"v-9e2dfeb2":()=>n.e(40).then(n.bind(null,381)),"v-1ea4d8b9":()=>n.e(38).then(n.bind(null,382)),"v-4ff003f7":()=>n.e(41).then(n.bind(null,383)),"v-7ca21f57":()=>n.e(42).then(n.bind(null,384)),"v-6df5dc97":()=>n.e(43).then(n.bind(null,385)),"v-45466bdb":()=>n.e(20).then(n.bind(null,386)),"v-bed2d0d2":()=>n.e(45).then(n.bind(null,387)),"v-54c8d717":()=>n.e(46).then(n.bind(null,388)),"v-6e3f5451":()=>n.e(48).then(n.bind(null,389)),"v-32adf8e6":()=>n.e(44).then(n.bind(null,390)),"v-0b00b852":()=>n.e(47).then(n.bind(null,391)),"v-39909852":()=>n.e(49).then(n.bind(null,392)),"v-44d49837":()=>n.e(21).then(n.bind(null,393)),"v-15401a12":()=>n.e(50).then(n.bind(null,394)),"v-480f0a7a":()=>n.e(51).then(n.bind(null,395))};function a(e){const t=Object.create(null);return function(n){return t[n]||(t[n]=e(n))}}const s=/-(\w)/g,c=a(e=>e.replace(s,(e,t)=>t?t.toUpperCase():"")),u=/\B([A-Z])/g,l=a(e=>e.replace(u,"-$1").toLowerCase()),d=a(e=>e.charAt(0).toUpperCase()+e.slice(1));function h(e,t){if(!t)return;if(e(t))return e(t);return t.includes("-")?e(d(c(t))):e(d(t))||e(l(t))}const p=Object.assign({},r,i),f=e=>p[e],m=e=>i[e],g=e=>r[e],v=e=>o.a.component(e);function y(e){return h(m,e)}function b(e){return h(g,e)}function w(e){return h(f,e)}function k(e){return h(v,e)}function C(...e){return Promise.all(e.filter(e=>e).map(async e=>{if(!k(e)&&w(e)){const t=await w(e)();o.a.component(e,t.default)}}))}function _(e,t,n){switch(t){case"components":e[t]||(e[t]={}),Object.assign(e[t],n);break;case"mixins":e[t]||(e[t]=[]),e[t].push(...n);break;default:throw new Error("Unknown option name.")}}function x(e,t){for(let n=0;n({isMobileHeaderOpen:!1}),mounted(){this.$router.afterEach(()=>{this.isMobileHeaderOpen=!1})}},h=(n(272),Object(a.a)(d,(function(){var e=this,t=e._self._c;return t("div",{attrs:{id:"vuepress-theme-blog__global-layout"}},[t("Navbar",{on:{"toggle-sidebar":function(t){e.isMobileHeaderOpen=!e.isMobileHeaderOpen}}}),e._v(" "),t("Sidebar",{attrs:{"is-open":e.isMobileHeaderOpen}}),e._v(" "),t("div",{staticClass:"content-wrapper",on:{click:function(t){e.isMobileHeaderOpen=!1}}},[t("DefaultGlobalLayout")],1),e._v(" "),t("Footer")],1)}),[],!1,null,null,null));t.default=h.exports},function(e,t,n){"use strict";n.d(t,"a",(function(){return Gn})); +(window.webpackJsonp=window.webpackJsonp||[]).push([[0],[]]);!function(e){function t(t){for(var 
o,a,s=t[0],c=t[1],u=t[2],d=0,h=[];dPromise.all([n.e(0),n.e(18)]).then(n.bind(null,292)),"components/Footer":()=>Promise.resolve().then(n.bind(null,116)),"components/Header":()=>Promise.all([n.e(0),n.e(13)]).then(n.bind(null,354)),"components/MobileHeader":()=>Promise.all([n.e(0),n.e(16)]).then(n.bind(null,355)),"components/Newsletter":()=>Promise.all([n.e(0),n.e(3)]).then(n.bind(null,353)),"components/PostMeta":()=>Promise.all([n.e(0),n.e(14)]).then(n.bind(null,321)),"components/PostTag":()=>Promise.all([n.e(0),n.e(19)]).then(n.bind(null,293)),"components/Sticker":()=>Promise.all([n.e(0),n.e(17)]).then(n.bind(null,294)),"components/Toc":()=>Promise.all([n.e(0),n.e(15)]).then(n.bind(null,322)),"global-components/BaseListLayout":()=>Promise.all([n.e(0),n.e(2)]).then(n.bind(null,356)),"global-components/BlogTag":()=>Promise.all([n.e(0),n.e(6)]).then(n.bind(null,357)),"global-components/BlogTags":()=>Promise.all([n.e(0),n.e(7)]).then(n.bind(null,358)),"global-components/NavLink":()=>Promise.all([n.e(0),n.e(5)]).then(n.bind(null,359)),"layouts/FrontmatterKey":()=>n.e(8).then(n.bind(null,362)),"layouts/GlobalLayout":()=>Promise.resolve().then(n.bind(null,2)),"layouts/Layout":()=>n.e(9).then(n.bind(null,363)),"layouts/Post":()=>Promise.all([n.e(0),n.e(1),n.e(4)]).then(n.bind(null,360)),FrontmatterKey:()=>n.e(8).then(n.bind(null,362)),GlobalLayout:()=>Promise.resolve().then(n.bind(null,2)),Layout:()=>n.e(9).then(n.bind(null,363)),Post:()=>Promise.all([n.e(0),n.e(1),n.e(4)]).then(n.bind(null,360)),"components/DropdownLink":()=>Promise.resolve().then(n.bind(null,113)),"components/DropdownTransition":()=>Promise.resolve().then(n.bind(null,30)),"components/NavLink":()=>Promise.resolve().then(n.bind(null,29)),"components/NavLinks":()=>Promise.resolve().then(n.bind(null,28)),"components/Navbar":()=>Promise.resolve().then(n.bind(null,112)),"components/Sidebar":()=>Promise.resolve().then(n.bind(null,114)),"components/SidebarButton":()=>Promise.resolve().then(n.bind(null,117)),"components/SidebarGroup":()=>Promise.resolve().then(n.bind(null,115)),"components/SidebarLink":()=>Promise.resolve().then(n.bind(null,118)),"components/SidebarLinks":()=>Promise.resolve().then(n.bind(null,56)),NotFound:()=>n.e(22).then(n.bind(null,361))},i={"v-0dc9b01d":()=>n.e(23).then(n.bind(null,364)),"v-dd6fb5d2":()=>n.e(24).then(n.bind(null,365)),"v-5d913a79":()=>n.e(26).then(n.bind(null,366)),"v-5bc86237":()=>n.e(27).then(n.bind(null,367)),"v-4100b969":()=>n.e(25).then(n.bind(null,368)),"v-52ad8f77":()=>n.e(28).then(n.bind(null,369)),"v-586fa1f7":()=>n.e(30).then(n.bind(null,370)),"v-59a2ac57":()=>n.e(29).then(n.bind(null,371)),"v-2a9dfbe5":()=>n.e(31).then(n.bind(null,372)),"v-46e2ddd1":()=>n.e(32).then(n.bind(null,373)),"v-151d3dd2":()=>n.e(33).then(n.bind(null,374)),"v-793e7375":()=>n.e(34).then(n.bind(null,375)),"v-5f5271a9":()=>n.e(35).then(n.bind(null,376)),"v-185e9f52":()=>n.e(36).then(n.bind(null,377)),"v-55690947":()=>n.e(39).then(n.bind(null,378)),"v-9e2dfeb2":()=>n.e(40).then(n.bind(null,379)),"v-1ea4d8b9":()=>n.e(38).then(n.bind(null,380)),"v-2315d60a":()=>n.e(12).then(n.bind(null,381)),"v-6582ae57":()=>n.e(37).then(n.bind(null,382)),"v-4ff003f7":()=>n.e(41).then(n.bind(null,383)),"v-7ca21f57":()=>n.e(42).then(n.bind(null,384)),"v-6df5dc97":()=>n.e(43).then(n.bind(null,385)),"v-45466bdb":()=>n.e(20).then(n.bind(null,386)),"v-bed2d0d2":()=>n.e(45).then(n.bind(null,387)),"v-32adf8e6":()=>n.e(44).then(n.bind(null,388)),"v-54c8d717":()=>n.e(46).then(n.bind(null,389)),"v-0b00b852":()=>n.e(47).then(n.bind(null,390)),"v-6e
3f5451":()=>n.e(48).then(n.bind(null,391)),"v-44d49837":()=>n.e(21).then(n.bind(null,392)),"v-39909852":()=>n.e(49).then(n.bind(null,393)),"v-15401a12":()=>n.e(50).then(n.bind(null,394)),"v-480f0a7a":()=>n.e(51).then(n.bind(null,395))};function a(e){const t=Object.create(null);return function(n){return t[n]||(t[n]=e(n))}}const s=/-(\w)/g,c=a(e=>e.replace(s,(e,t)=>t?t.toUpperCase():"")),u=/\B([A-Z])/g,l=a(e=>e.replace(u,"-$1").toLowerCase()),d=a(e=>e.charAt(0).toUpperCase()+e.slice(1));function h(e,t){if(!t)return;if(e(t))return e(t);return t.includes("-")?e(d(c(t))):e(d(t))||e(l(t))}const p=Object.assign({},r,i),f=e=>p[e],m=e=>i[e],g=e=>r[e],v=e=>o.a.component(e);function y(e){return h(m,e)}function b(e){return h(g,e)}function w(e){return h(f,e)}function k(e){return h(v,e)}function C(...e){return Promise.all(e.filter(e=>e).map(async e=>{if(!k(e)&&w(e)){const t=await w(e)();o.a.component(e,t.default)}}))}function _(e,t,n){switch(t){case"components":e[t]||(e[t]={}),Object.assign(e[t],n);break;case"mixins":e[t]||(e[t]=[]),e[t].push(...n);break;default:throw new Error("Unknown option name.")}}function x(e,t){for(let n=0;n({isMobileHeaderOpen:!1}),mounted(){this.$router.afterEach(()=>{this.isMobileHeaderOpen=!1})}},h=(n(272),Object(a.a)(d,(function(){var e=this,t=e._self._c;return t("div",{attrs:{id:"vuepress-theme-blog__global-layout"}},[t("Navbar",{on:{"toggle-sidebar":function(t){e.isMobileHeaderOpen=!e.isMobileHeaderOpen}}}),e._v(" "),t("Sidebar",{attrs:{"is-open":e.isMobileHeaderOpen}}),e._v(" "),t("div",{staticClass:"content-wrapper",on:{click:function(t){e.isMobileHeaderOpen=!1}}},[t("DefaultGlobalLayout")],1),e._v(" "),t("Footer")],1)}),[],!1,null,null,null));t.default=h.exports},function(e,t,n){"use strict";n.d(t,"a",(function(){return Gn})); /*! * Vue.js v2.7.16 * (c) 2014-2023 Evan You * Released under the MIT License. 
*/ -var o=Object.freeze({}),r=Array.isArray;function i(e){return null==e}function a(e){return null!=e}function s(e){return!0===e}function c(e){return"string"==typeof e||"number"==typeof e||"symbol"==typeof e||"boolean"==typeof e}function u(e){return"function"==typeof e}function l(e){return null!==e&&"object"==typeof e}var d=Object.prototype.toString;function h(e){return"[object Object]"===d.call(e)}function p(e){return"[object RegExp]"===d.call(e)}function f(e){var t=parseFloat(String(e));return t>=0&&Math.floor(t)===t&&isFinite(e)}function m(e){return a(e)&&"function"==typeof e.then&&"function"==typeof e.catch}function g(e){return null==e?"":Array.isArray(e)||h(e)&&e.toString===d?JSON.stringify(e,v,2):String(e)}function v(e,t){return t&&t.__v_isRef?t.value:t}function y(e){var t=parseFloat(e);return isNaN(t)?e:t}function b(e,t){for(var n=Object.create(null),o=e.split(","),r=0;r-1)return e.splice(o,1)}}var C=Object.prototype.hasOwnProperty;function _(e,t){return C.call(e,t)}function x(e){var t=Object.create(null);return function(n){return t[n]||(t[n]=e(n))}}var S=/-(\w)/g,O=x((function(e){return e.replace(S,(function(e,t){return t?t.toUpperCase():""}))})),j=x((function(e){return e.charAt(0).toUpperCase()+e.slice(1)})),P=/\B([A-Z])/g,$=x((function(e){return e.replace(P,"-$1").toLowerCase()}));var T=Function.prototype.bind?function(e,t){return e.bind(t)}:function(e,t){function n(n){var o=arguments.length;return o?o>1?e.apply(t,arguments):e.call(t,n):e.call(t)}return n._length=e.length,n};function A(e,t){t=t||0;for(var n=e.length-t,o=new Array(n);n--;)o[n]=e[n+t];return o}function E(e,t){for(var n in t)e[n]=t[n];return e}function I(e){for(var t={},n=0;n0,X=J&&J.indexOf("edge/")>0;J&&J.indexOf("android");var ee=J&&/iphone|ipad|ipod|ios/.test(J);J&&/chrome\/\d+/.test(J),J&&/phantomjs/.test(J);var te,ne=J&&J.match(/firefox\/(\d+)/),oe={}.watch,re=!1;if(Z)try{var ie={};Object.defineProperty(ie,"passive",{get:function(){re=!0}}),window.addEventListener("test-passive",null,ie)}catch(e){}var ae=function(){return void 0===te&&(te=!Z&&"undefined"!=typeof global&&(global.process&&"server"===global.process.env.VUE_ENV)),te},se=Z&&window.__VUE_DEVTOOLS_GLOBAL_HOOK__;function ce(e){return"function"==typeof e&&/native code/.test(e.toString())}var ue,le="undefined"!=typeof Symbol&&ce(Symbol)&&"undefined"!=typeof Reflect&&ce(Reflect.ownKeys);ue="undefined"!=typeof Set&&ce(Set)?Set:function(){function e(){this.set=Object.create(null)}return e.prototype.has=function(e){return!0===this.set[e]},e.prototype.add=function(e){this.set[e]=!0},e.prototype.clear=function(){this.set=Object.create(null)},e}();var de=null;function he(e){void 0===e&&(e=null),e||de&&de._scope.off(),de=e,e&&e._scope.on()}var pe=function(){function e(e,t,n,o,r,i,a,s){this.tag=e,this.data=t,this.children=n,this.text=o,this.elm=r,this.ns=void 0,this.context=i,this.fnContext=void 0,this.fnOptions=void 0,this.fnScopeId=void 0,this.key=t&&t.key,this.componentOptions=a,this.componentInstance=void 0,this.parent=void 0,this.raw=!1,this.isStatic=!1,this.isRootInsert=!0,this.isComment=!1,this.isCloned=!1,this.isOnce=!1,this.asyncFactory=s,this.asyncMeta=void 0,this.isAsyncPlaceholder=!1}return Object.defineProperty(e.prototype,"child",{get:function(){return this.componentInstance},enumerable:!1,configurable:!0}),e}(),fe=function(e){void 0===e&&(e="");var t=new pe;return t.text=e,t.isComment=!0,t};function me(e){return new pe(void 0,void 0,void 0,String(e))}function ge(e){var t=new 
pe(e.tag,e.data,e.children&&e.children.slice(),e.text,e.elm,e.context,e.componentOptions,e.asyncFactory);return t.ns=e.ns,t.isStatic=e.isStatic,t.key=e.key,t.isComment=e.isComment,t.fnContext=e.fnContext,t.fnOptions=e.fnOptions,t.fnScopeId=e.fnScopeId,t.asyncMeta=e.asyncMeta,t.isCloned=!0,t}"function"==typeof SuppressedError&&SuppressedError;var ve=0,ye=[],be=function(){function e(){this._pending=!1,this.id=ve++,this.subs=[]}return e.prototype.addSub=function(e){this.subs.push(e)},e.prototype.removeSub=function(e){this.subs[this.subs.indexOf(e)]=null,this._pending||(this._pending=!0,ye.push(this))},e.prototype.depend=function(t){e.target&&e.target.addDep(this)},e.prototype.notify=function(e){var t=this.subs.filter((function(e){return e}));for(var n=0,o=t.length;n0&&(Je((u=e(u,"".concat(n||"","_").concat(o)))[0])&&Je(d)&&(h[l]=me(d.text+u[0].text),u.shift()),h.push.apply(h,u)):c(u)?Je(d)?h[l]=me(d.text+u):""!==u&&h.push(me(u)):Je(u)&&Je(d)?h[l]=me(d.text+u.text):(s(t._isVList)&&a(u.tag)&&i(u.key)&&a(n)&&(u.key="__vlist".concat(n,"_").concat(o,"__")),h.push(u)));return h}(e):void 0}function Je(e){return a(e)&&a(e.text)&&!1===e.isComment}function Ke(e,t){var n,o,i,s,c=null;if(r(e)||"string"==typeof e)for(c=new Array(e.length),n=0,o=e.length;n0,s=t?!!t.$stable:!a,c=t&&t.$key;if(t){if(t._normalized)return t._normalized;if(s&&r&&r!==o&&c===r.$key&&!a&&!r.$hasNormal)return r;for(var u in i={},t)t[u]&&"$"!==u[0]&&(i[u]=gt(e,n,u,t[u]))}else i={};for(var l in n)l in i||(i[l]=vt(n,l));return t&&Object.isExtensible(t)&&(t._normalized=i),V(i,"$stable",s),V(i,"$key",c),V(i,"$hasNormal",a),i}function gt(e,t,n,o){var i=function(){var t=de;he(e);var n=arguments.length?o.apply(null,arguments):o({}),i=(n=n&&"object"==typeof n&&!r(n)?[n]:Ze(n))&&n[0];return he(t),n&&(!i||1===n.length&&i.isComment&&!ft(i))?void 0:n};return o.proxy&&Object.defineProperty(t,n,{get:i,enumerable:!0,configurable:!0}),i}function vt(e,t){return function(){return e[t]}}function yt(e){return{get attrs(){if(!e._attrsProxy){var t=e._attrsProxy={};V(t,"_v_attr_proxy",!0),bt(t,e.$attrs,o,e,"$attrs")}return e._attrsProxy},get listeners(){e._listenersProxy||bt(e._listenersProxy={},e.$listeners,o,e,"$listeners");return e._listenersProxy},get slots(){return function(e){e._slotsProxy||kt(e._slotsProxy={},e.$scopedSlots);return e._slotsProxy}(e)},emit:T(e.$emit,e),expose:function(t){t&&Object.keys(t).forEach((function(n){return We(e,t,n)}))}}}function bt(e,t,n,o,r){var i=!1;for(var a in t)a in e?t[a]!==n[a]&&(i=!0):(i=!0,wt(e,a,o,r));for(var a in e)a in t||(i=!0,delete e[a]);return i}function wt(e,t,n,o){Object.defineProperty(e,t,{enumerable:!0,configurable:!0,get:function(){return n[o][t]}})}function kt(e,t){for(var n in t)e[n]=t[n];for(var n in e)n in t||delete e[n]}var Ct=null;function _t(e,t){return(e.__esModule||le&&"Module"===e[Symbol.toStringTag])&&(e=e.default),l(e)?t.extend(e):e}function xt(e){if(r(e))for(var t=0;tdocument.createEvent("Event").timeStamp&&(un=function(){return ln.now()})}var dn=function(e,t){if(e.post){if(!t.post)return 1}else if(t.post)return-1;return e.id-t.id};function hn(){var e,t;for(cn=un(),an=!0,tn.sort(dn),sn=0;snsn&&tn[n].id>e.id;)n--;tn.splice(n+1,0,e)}else tn.push(e);rn||(rn=!0,Rt(hn))}}function fn(e,t){if(e){for(var n=Object.create(null),o=le?Reflect.ownKeys(e):Object.keys(e),r=0;r-1)if(i&&!_(r,"default"))a=!1;else if(""===a||a===$(e)){var c=Dn(String,r.type);(c<0||s-1:"string"==typeof e?e.split(",").indexOf(t)>-1:!!p(e)&&e.test(t)}function Kn(e,t){var n=e.cache,o=e.keys,r=e._vnode,i=e.$vnode;for(var a in 
n){var s=n[a];if(s){var c=s.name;c&&!t(c)&&Qn(n,a,o,r)}}i.componentOptions.children=void 0}function Qn(e,t,n,o){var r=e[t];!r||o&&r.tag===o.tag||r.componentInstance.$destroy(),e[t]=null,k(n,t)}!function(e){e.prototype._init=function(e){var t=this;t._uid=qn++,t._isVue=!0,t.__v_skip=!0,t._scope=new ze(!0),t._scope.parent=void 0,t._scope._vm=!0,e&&e._isComponent?function(e,t){var n=e.$options=Object.create(e.constructor.options),o=t._parentVnode;n.parent=t.parent,n._parentVnode=o;var r=o.componentOptions;n.propsData=r.propsData,n._parentListeners=r.listeners,n._renderChildren=r.children,n._componentTag=r.tag,t.render&&(n.render=t.render,n.staticRenderFns=t.staticRenderFns)}(t,e):t.$options=Tn(Vn(t.constructor),e||{},t),t._renderProxy=t,t._self=t,function(e){var t=e.$options,n=t.parent;if(n&&!t.abstract){for(;n.$options.abstract&&n.$parent;)n=n.$parent;n.$children.push(e)}e.$parent=n,e.$root=n?n.$root:e,e.$children=[],e.$refs={},e._provided=n?n._provided:Object.create(null),e._watcher=null,e._inactive=null,e._directInactive=!1,e._isMounted=!1,e._isDestroyed=!1,e._isBeingDestroyed=!1}(t),function(e){e._events=Object.create(null),e._hasHookEvent=!1;var t=e.$options._parentListeners;t&&Zt(e,t)}(t),function(e){e._vnode=null,e._staticTrees=null;var t=e.$options,n=e.$vnode=t._parentVnode,r=n&&n.context;e.$slots=ht(t._renderChildren,r),e.$scopedSlots=n?mt(e.$parent,n.data.scopedSlots,e.$slots):o,e._c=function(t,n,o,r){return St(e,t,n,o,r,!1)},e.$createElement=function(t,n,o,r){return St(e,t,n,o,r,!0)};var i=n&&n.data;Ee(e,"$attrs",i&&i.attrs||o,null,!0),Ee(e,"$listeners",t._parentListeners||o,null,!0)}(t),en(t,"beforeCreate",void 0,!1),function(e){var t=fn(e.$options.inject,e);t&&(Pe(!1),Object.keys(t).forEach((function(n){Ee(e,n,t[n])})),Pe(!0))}(t),Rn(t),function(e){var t=e.$options.provide;if(t){var n=u(t)?t.call(e):t;if(!l(n))return;for(var o=He(e),r=le?Reflect.ownKeys(n):Object.keys(n),i=0;i1?A(n):n;for(var o=A(arguments,1),r='event handler for "'.concat(e,'"'),i=0,a=n.length;iparseInt(this.max)&&Qn(e,t[0],t,this._vnode),this.vnodeToCache=null}}},created:function(){this.cache=Object.create(null),this.keys=[]},destroyed:function(){for(var e in this.cache)Qn(this.cache,e,this.keys)},mounted:function(){var e=this;this.cacheVNode(),this.$watch("include",(function(t){Kn(e,(function(e){return Jn(t,e)}))})),this.$watch("exclude",(function(t){Kn(e,(function(e){return!Jn(t,e)}))}))},updated:function(){this.cacheVNode()},render:function(){var e=this.$slots.default,t=xt(e),n=t&&t.componentOptions;if(n){var o=Zn(n),r=this.include,i=this.exclude;if(r&&(!o||!Jn(r,o))||i&&o&&Jn(i,o))return t;var a=this.cache,s=this.keys,c=null==t.key?n.Ctor.cid+(n.tag?"::".concat(n.tag):""):t.key;a[c]?(t.componentInstance=a[c].componentInstance,k(s,c),s.push(c)):(this.vnodeToCache=t,this.keyToCache=c),t.data.keepAlive=!0}return t||e&&e[0]}}};!function(e){var t={get:function(){return H}};Object.defineProperty(e,"config",t),e.util={warn:_n,extend:E,mergeOptions:Tn,defineReactive:Ee},e.set=Ie,e.delete=Le,e.nextTick=Rt,e.observable=function(e){return Ae(e),e},e.options=Object.create(null),U.forEach((function(t){e.options[t+"s"]=Object.create(null)})),e.options._base=e,E(e.options.components,eo),function(e){e.use=function(e){var t=this._installedPlugins||(this._installedPlugins=[]);if(t.indexOf(e)>-1)return this;var n=A(arguments,1);return n.unshift(this),u(e.install)?e.install.apply(e,n):u(e)&&e.apply(null,n),t.push(e),this}}(e),function(e){e.mixin=function(e){return 
this.options=Tn(this.options,e),this}}(e),Yn(e),function(e){U.forEach((function(t){e[t]=function(e,n){return n?("component"===t&&h(n)&&(n.name=n.name||e,n=this.options._base.extend(n)),"directive"===t&&u(n)&&(n={bind:n,update:n}),this.options[t+"s"][e]=n,n):this.options[t+"s"][e]}}))}(e)}(Gn),Object.defineProperty(Gn.prototype,"$isServer",{get:ae}),Object.defineProperty(Gn.prototype,"$ssrContext",{get:function(){return this.$vnode&&this.$vnode.ssrContext}}),Object.defineProperty(Gn,"FunctionalRenderContext",{value:mn}),Gn.version="2.7.16";var to=b("style,class"),no=b("input,textarea,option,select,progress"),oo=b("contenteditable,draggable,spellcheck"),ro=b("events,caret,typing,plaintext-only"),io=b("allowfullscreen,async,autofocus,autoplay,checked,compact,controls,declare,default,defaultchecked,defaultmuted,defaultselected,defer,disabled,enabled,formnovalidate,hidden,indeterminate,inert,ismap,itemscope,loop,multiple,muted,nohref,noresize,noshade,novalidate,nowrap,open,pauseonexit,readonly,required,reversed,scoped,seamless,selected,sortable,truespeed,typemustmatch,visible"),ao="http://www.w3.org/1999/xlink",so=function(e){return":"===e.charAt(5)&&"xlink"===e.slice(0,5)},co=function(e){return so(e)?e.slice(6,e.length):""},uo=function(e){return null==e||!1===e};function lo(e){for(var t=e.data,n=e,o=e;a(o.componentInstance);)(o=o.componentInstance._vnode)&&o.data&&(t=ho(o.data,t));for(;a(n=n.parent);)n&&n.data&&(t=ho(t,n.data));return function(e,t){if(a(e)||a(t))return po(e,fo(t));return""}(t.staticClass,t.class)}function ho(e,t){return{staticClass:po(e.staticClass,t.staticClass),class:a(e.class)?[e.class,t.class]:t.class}}function po(e,t){return e?t?e+" "+t:e:t||""}function fo(e){return Array.isArray(e)?function(e){for(var t,n="",o=0,r=e.length;o-1?Fo(e,t,n):io(t)?uo(n)?e.removeAttribute(t):(n="allowfullscreen"===t&&"EMBED"===e.tagName?"true":t,e.setAttribute(t,n)):oo(t)?e.setAttribute(t,function(e,t){return uo(t)||"false"===t?"false":"contenteditable"===e&&ro(t)?t:"true"}(t,n)):so(t)?uo(n)?e.removeAttributeNS(ao,co(t)):e.setAttributeNS(ao,t,n):Fo(e,t,n)}function Fo(e,t,n){if(uo(n))e.removeAttribute(t);else{if(K&&!Q&&"TEXTAREA"===e.tagName&&"placeholder"===t&&""!==n&&!e.__ieph){var o=function(t){t.stopImmediatePropagation(),e.removeEventListener("input",o)};e.addEventListener("input",o),e.__ieph=!0}e.setAttribute(t,n)}}var Ro={create:Do,update:Do};function Wo(e,t){var n=t.elm,o=t.data,r=e.data;if(!(i(o.staticClass)&&i(o.class)&&(i(r)||i(r.staticClass)&&i(r.class)))){var s=lo(t),c=n._transitionClasses;a(c)&&(s=po(s,fo(c))),s!==n._prevClass&&(n.setAttribute("class",s),n._prevClass=s)}}var Uo,zo={create:Wo,update:Wo};function Ho(e,t,n){var o=Uo;return function r(){var i=t.apply(null,arguments);null!==i&&Vo(e,r,n,o)}}var Bo=At&&!(ne&&Number(ne[1])<=53);function qo(e,t,n,o){if(Bo){var r=cn,i=t;t=i._wrapper=function(e){if(e.target===e.currentTarget||e.timeStamp>=r||e.timeStamp<=0||e.target.ownerDocument!==document)return i.apply(this,arguments)}}Uo.addEventListener(e,t,re?{capture:n,passive:o}:n)}function Vo(e,t,n,o){(o||Uo).removeEventListener(e,t._wrapper||t,n)}function Go(e,t){if(!i(e.data.on)||!i(t.data.on)){var n=t.data.on||{},o=e.data.on||{};Uo=t.elm||e.elm,function(e){if(a(e.__r)){var t=K?"change":"input";e[t]=[].concat(e.__r,e[t]||[]),delete e.__r}a(e.__c)&&(e.change=[].concat(e.__c,e.change||[]),delete e.__c)}(n),Ve(n,o,qo,Vo,Ho,t.context),Uo=void 0}}var Yo,Zo={create:Go,update:Go,destroy:function(e){return Go(e,So)}};function Jo(e,t){if(!i(e.data.domProps)||!i(t.data.domProps)){var 
n,o,r=t.elm,c=e.data.domProps||{},u=t.data.domProps||{};for(n in(a(u.__ob__)||s(u._v_attr_proxy))&&(u=t.data.domProps=E({},u)),c)n in u||(r[n]="");for(n in u){if(o=u[n],"textContent"===n||"innerHTML"===n){if(t.children&&(t.children.length=0),o===c[n])continue;1===r.childNodes.length&&r.removeChild(r.childNodes[0])}if("value"===n&&"PROGRESS"!==r.tagName){r._value=o;var l=i(o)?"":String(o);Ko(r,l)&&(r.value=l)}else if("innerHTML"===n&&vo(r.tagName)&&i(r.innerHTML)){(Yo=Yo||document.createElement("div")).innerHTML="".concat(o,"");for(var d=Yo.firstChild;r.firstChild;)r.removeChild(r.firstChild);for(;d.firstChild;)r.appendChild(d.firstChild)}else if(o!==c[n])try{r[n]=o}catch(e){}}}}function Ko(e,t){return!e.composing&&("OPTION"===e.tagName||function(e,t){var n=!0;try{n=document.activeElement!==e}catch(e){}return n&&e.value!==t}(e,t)||function(e,t){var n=e.value,o=e._vModifiers;if(a(o)){if(o.number)return y(n)!==y(t);if(o.trim)return n.trim()!==t.trim()}return n!==t}(e,t))}var Qo={create:Jo,update:Jo},Xo=x((function(e){var t={},n=/:(.+)/;return e.split(/;(?![^(]*\))/g).forEach((function(e){if(e){var o=e.split(n);o.length>1&&(t[o[0].trim()]=o[1].trim())}})),t}));function er(e){var t=tr(e.style);return e.staticStyle?E(e.staticStyle,t):t}function tr(e){return Array.isArray(e)?I(e):"string"==typeof e?Xo(e):e}var nr,or=/^--/,rr=/\s*!important$/,ir=function(e,t,n){if(or.test(t))e.style.setProperty(t,n);else if(rr.test(n))e.style.setProperty($(t),n.replace(rr,""),"important");else{var o=sr(t);if(Array.isArray(n))for(var r=0,i=n.length;r-1?t.split(lr).forEach((function(t){return e.classList.add(t)})):e.classList.add(t);else{var n=" ".concat(e.getAttribute("class")||""," ");n.indexOf(" "+t+" ")<0&&e.setAttribute("class",(n+t).trim())}}function hr(e,t){if(t&&(t=t.trim()))if(e.classList)t.indexOf(" ")>-1?t.split(lr).forEach((function(t){return e.classList.remove(t)})):e.classList.remove(t),e.classList.length||e.removeAttribute("class");else{for(var n=" ".concat(e.getAttribute("class")||""," "),o=" "+t+" ";n.indexOf(o)>=0;)n=n.replace(o," ");(n=n.trim())?e.setAttribute("class",n):e.removeAttribute("class")}}function pr(e){if(e){if("object"==typeof e){var t={};return!1!==e.css&&E(t,fr(e.name||"v")),E(t,e),t}return"string"==typeof e?fr(e):void 0}}var fr=x((function(e){return{enterClass:"".concat(e,"-enter"),enterToClass:"".concat(e,"-enter-to"),enterActiveClass:"".concat(e,"-enter-active"),leaveClass:"".concat(e,"-leave"),leaveToClass:"".concat(e,"-leave-to"),leaveActiveClass:"".concat(e,"-leave-active")}})),mr=Z&&!Q,gr="transition",vr="transitionend",yr="animation",br="animationend";mr&&(void 0===window.ontransitionend&&void 0!==window.onwebkittransitionend&&(gr="WebkitTransition",vr="webkitTransitionEnd"),void 0===window.onanimationend&&void 0!==window.onwebkitanimationend&&(yr="WebkitAnimation",br="webkitAnimationEnd"));var wr=Z?window.requestAnimationFrame?window.requestAnimationFrame.bind(window):setTimeout:function(e){return e()};function kr(e){wr((function(){wr(e)}))}function Cr(e,t){var n=e._transitionClasses||(e._transitionClasses=[]);n.indexOf(t)<0&&(n.push(t),dr(e,t))}function _r(e,t){e._transitionClasses&&k(e._transitionClasses,t),hr(e,t)}function xr(e,t,n){var o=Or(e,t),r=o.type,i=o.timeout,a=o.propCount;if(!r)return n();var 
s="transition"===r?vr:br,c=0,u=function(){e.removeEventListener(s,l),n()},l=function(t){t.target===e&&++c>=a&&u()};setTimeout((function(){c0&&(n="transition",l=a,d=i.length):"animation"===t?u>0&&(n="animation",l=u,d=c.length):d=(n=(l=Math.max(a,u))>0?a>u?"transition":"animation":null)?"transition"===n?i.length:c.length:0,{type:n,timeout:l,propCount:d,hasTransform:"transition"===n&&Sr.test(o[gr+"Property"])}}function jr(e,t){for(;e.length1}function Ir(e,t){!0!==t.data.show&&$r(t)}var Lr=function(e){var t,n,o={},u=e.modules,l=e.nodeOps;for(t=0;tf?w(e,i(n[v+1])?null:n[v+1].elm,n,p,v,o):p>v&&C(t,d,f)}(d,m,v,n,u):a(v)?(a(e.text)&&l.setTextContent(d,""),w(d,null,v,0,v.length-1,n)):a(m)?C(m,0,m.length-1):a(e.text)&&l.setTextContent(d,""):e.text!==t.text&&l.setTextContent(d,t.text),a(f)&&a(p=f.hook)&&a(p=p.postpatch)&&p(e,t)}}}function O(e,t,n){if(s(n)&&a(e.parent))e.parent.data.pendingInsert=t;else for(var o=0;o-1,a.selected!==i&&(a.selected=i);else if(N(Rr(a),o))return void(e.selectedIndex!==s&&(e.selectedIndex=s));r||(e.selectedIndex=-1)}}function Fr(e,t){return t.every((function(t){return!N(t,e)}))}function Rr(e){return"_value"in e?e._value:e.value}function Wr(e){e.target.composing=!0}function Ur(e){e.target.composing&&(e.target.composing=!1,zr(e.target,"input"))}function zr(e,t){var n=document.createEvent("HTMLEvents");n.initEvent(t,!0,!0),e.dispatchEvent(n)}function Hr(e){return!e.componentInstance||e.data&&e.data.transition?e:Hr(e.componentInstance._vnode)}var Br={model:Mr,show:{bind:function(e,t,n){var o=t.value,r=(n=Hr(n)).data&&n.data.transition,i=e.__vOriginalDisplay="none"===e.style.display?"":e.style.display;o&&r?(n.data.show=!0,$r(n,(function(){e.style.display=i}))):e.style.display=o?i:"none"},update:function(e,t,n){var o=t.value;!o!=!t.oldValue&&((n=Hr(n)).data&&n.data.transition?(n.data.show=!0,o?$r(n,(function(){e.style.display=e.__vOriginalDisplay})):Tr(n,(function(){e.style.display="none"}))):e.style.display=o?e.__vOriginalDisplay:"none")},unbind:function(e,t,n,o,r){r||(e.style.display=e.__vOriginalDisplay)}}},qr={name:String,appear:Boolean,css:Boolean,mode:String,type:String,enterClass:String,leaveClass:String,enterToClass:String,leaveToClass:String,enterActiveClass:String,leaveActiveClass:String,appearClass:String,appearActiveClass:String,appearToClass:String,duration:[Number,String,Object]};function Vr(e){var t=e&&e.componentOptions;return t&&t.Ctor.options.abstract?Vr(xt(t.children)):e}function Gr(e){var t={},n=e.$options;for(var o in n.propsData)t[o]=e[o];var r=n._parentListeners;for(var o in r)t[O(o)]=r[o];return t}function Yr(e,t){if(/\d-keep-alive$/.test(t.tag))return e("keep-alive",{props:t.componentOptions.propsData})}var Zr=function(e){return e.tag||ft(e)},Jr=function(e){return"show"===e.name},Kr={name:"transition",props:qr,abstract:!0,render:function(e){var t=this,n=this.$slots.default;if(n&&(n=n.filter(Zr)).length){0;var o=this.mode;0;var r=n[0];if(function(e){for(;e=e.parent;)if(e.data.transition)return!0}(this.$vnode))return r;var i=Vr(r);if(!i)return r;if(this._leaving)return Yr(e,r);var a="__transition-".concat(this._uid,"-");i.key=null==i.key?i.isComment?a+"comment":a+i.tag:c(i.key)?0===String(i.key).indexOf(a)?i.key:a+i.key:i.key;var s=(i.data||(i.data={})).transition=Gr(this),u=this._vnode,l=Vr(u);if(i.data.directives&&i.data.directives.some(Jr)&&(i.data.show=!0),l&&l.data&&!function(e,t){return t.key===e.key&&t.tag===e.tag}(i,l)&&!ft(l)&&(!l.componentInstance||!l.componentInstance._vnode.isComment)){var d=l.data.transition=E({},s);if("out-in"===o)return 
this._leaving=!0,Ge(d,"afterLeave",(function(){t._leaving=!1,t.$forceUpdate()})),Yr(e,r);if("in-out"===o){if(ft(i))return u;var h,p=function(){h()};Ge(s,"afterEnter",p),Ge(s,"enterCancelled",p),Ge(d,"delayLeave",(function(e){h=e}))}}return r}}},Qr=E({tag:String,moveClass:String},qr);function Xr(e){e.elm._moveCb&&e.elm._moveCb(),e.elm._enterCb&&e.elm._enterCb()}function ei(e){e.data.newPos=e.elm.getBoundingClientRect()}function ti(e){var t=e.data.pos,n=e.data.newPos,o=t.left-n.left,r=t.top-n.top;if(o||r){e.data.moved=!0;var i=e.elm.style;i.transform=i.WebkitTransform="translate(".concat(o,"px,").concat(r,"px)"),i.transitionDuration="0s"}}delete Qr.mode;var ni={Transition:Kr,TransitionGroup:{props:Qr,beforeMount:function(){var e=this,t=this._update;this._update=function(n,o){var r=Kt(e);e.__patch__(e._vnode,e.kept,!1,!0),e._vnode=e.kept,r(),t.call(e,n,o)}},render:function(e){for(var t=this.tag||this.$vnode.data.tag||"span",n=Object.create(null),o=this.prevChildren=this.children,r=this.$slots.default||[],i=this.children=[],a=Gr(this),s=0;s-1?bo[e]=t.constructor===window.HTMLUnknownElement||t.constructor===window.HTMLElement:bo[e]=/HTMLUnknownElement/.test(t.toString())},E(Gn.options.directives,Br),E(Gn.options.components,ni),Gn.prototype.__patch__=Z?Lr:L,Gn.prototype.$mount=function(e,t){return function(e,t,n){var o;e.$el=t,e.$options.render||(e.$options.render=fe),en(e,"beforeMount"),o=function(){e._update(e._render(),n)},new qt(e,o,L,{before:function(){e._isMounted&&!e._isDestroyed&&en(e,"beforeUpdate")}},!0),n=!1;var r=e._preWatchers;if(r)for(var i=0;iObject.assign({},e))).forEach(e=>{2===e.level?t=e:t&&(t.children||(t.children=[])).push(e)}),e.filter(e=>2===e.level)}function f(e){return Object.assign(e,{type:e.items&&e.items.length?"links":"link"})}},function(e,t,n){"use strict";n.d(t,"a",(function(){return i})),n.d(t,"b",(function(){return a})),n.d(t,"c",(function(){return s})),n.d(t,"d",(function(){return c})),n.d(t,"e",(function(){return u})),n.d(t,"f",(function(){return l})),n.d(t,"g",(function(){return d})),n.d(t,"h",(function(){return h})),n.d(t,"i",(function(){return p})),n.d(t,"j",(function(){return f})),n.d(t,"k",(function(){return m})),n.d(t,"l",(function(){return g})),n.d(t,"m",(function(){return v})),n.d(t,"n",(function(){return y})),n.d(t,"o",(function(){return b})),n.d(t,"p",(function(){return w})),n.d(t,"q",(function(){return k})),n.d(t,"r",(function(){return C})),n.d(t,"s",(function(){return _})),n.d(t,"t",(function(){return x})),n.d(t,"u",(function(){return S}));var o=n(0),r=n.n(o),i={name:"ClockIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-clock"},t.data]),[e("circle",{attrs:{cx:"12",cy:"12",r:"10"}}),e("polyline",{attrs:{points:"12 6 12 12 16 14"}})])}},a={name:"CodepenIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-codepen"},t.data]),[e("polygon",{attrs:{points:"12 2 22 8.5 22 15.5 12 22 2 15.5 2 8.5 12 2"}}),e("line",{attrs:{x1:"12",y1:"22",x2:"12",y2:"15.5"}}),e("polyline",{attrs:{points:"22 8.5 12 15.5 2 8.5"}}),e("polyline",{attrs:{points:"2 15.5 12 8.5 22 
15.5"}}),e("line",{attrs:{x1:"12",y1:"2",x2:"12",y2:"8.5"}})])}},s={name:"CodesandboxIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-codesandbox"},t.data]),[e("path",{attrs:{d:"M21 16V8a2 2 0 0 0-1-1.73l-7-4a2 2 0 0 0-2 0l-7 4A2 2 0 0 0 3 8v8a2 2 0 0 0 1 1.73l7 4a2 2 0 0 0 2 0l7-4A2 2 0 0 0 21 16z"}}),e("polyline",{attrs:{points:"7.5 4.21 12 6.81 16.5 4.21"}}),e("polyline",{attrs:{points:"7.5 19.79 7.5 14.6 3 12"}}),e("polyline",{attrs:{points:"21 12 16.5 14.6 16.5 19.79"}}),e("polyline",{attrs:{points:"3.27 6.96 12 12.01 20.73 6.96"}}),e("line",{attrs:{x1:"12",y1:"22.08",x2:"12",y2:"12"}})])}},c={name:"FacebookIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-facebook"},t.data]),[e("path",{attrs:{d:"M18 2h-3a5 5 0 0 0-5 5v3H7v4h3v8h4v-8h3l1-4h-4V7a1 1 0 0 1 1-1h3z"}})])}},u={name:"GithubIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-github"},t.data]),[e("path",{attrs:{d:"M9 19c-5 1.5-5-2.5-7-3m14 6v-3.87a3.37 3.37 0 0 0-.94-2.61c3.14-.35 6.44-1.54 6.44-7A5.44 5.44 0 0 0 20 4.77 5.07 5.07 0 0 0 19.91 1S18.73.65 16 2.48a13.38 13.38 0 0 0-7 0C6.27.65 5.09 1 5.09 1A5.07 5.07 0 0 0 5 4.77a5.44 5.44 0 0 0-1.5 3.78c0 5.42 3.3 6.61 6.44 7A3.37 3.37 0 0 0 9 18.13V22"}})])}},l={name:"GitlabIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-gitlab"},t.data]),[e("path",{attrs:{d:"M22.65 14.39L12 22.13 1.35 14.39a.84.84 0 0 1-.3-.94l1.22-3.78 2.44-7.51A.42.42 0 0 1 4.82 2a.43.43 0 0 1 .58 0 .42.42 0 0 1 .11.18l2.44 7.49h8.1l2.44-7.51A.42.42 0 0 1 18.6 2a.43.43 0 0 1 .58 0 .42.42 0 0 1 .11.18l2.44 7.51L23 13.45a.84.84 0 0 1-.35.94z"}})])}},d={name:"GlobeIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-globe"},t.data]),[e("circle",{attrs:{cx:"12",cy:"12",r:"10"}}),e("line",{attrs:{x1:"2",y1:"12",x2:"22",y2:"12"}}),e("path",{attrs:{d:"M12 2a15.3 15.3 0 0 1 4 10 15.3 15.3 0 0 1-4 10 15.3 15.3 0 0 1-4-10 15.3 15.3 0 0 1 4-10z"}})])}},h={name:"InstagramIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-instagram"},t.data]),[e("rect",{attrs:{x:"2",y:"2",width:"20",height:"20",rx:"5",ry:"5"}}),e("path",{attrs:{d:"M16 11.37A4 4 0 1 1 12.63 8 4 4 0 0 1 16 
11.37z"}}),e("line",{attrs:{x1:"17.5",y1:"6.5",x2:"17.5",y2:"6.5"}})])}},p={name:"LinkedinIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-linkedin"},t.data]),[e("path",{attrs:{d:"M16 8a6 6 0 0 1 6 6v7h-4v-7a2 2 0 0 0-2-2 2 2 0 0 0-2 2v7h-4v-7a6 6 0 0 1 6-6z"}}),e("rect",{attrs:{x:"2",y:"9",width:"4",height:"12"}}),e("circle",{attrs:{cx:"4",cy:"4",r:"2"}})])}},f={name:"MailIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-mail"},t.data]),[e("path",{attrs:{d:"M4 4h16c1.1 0 2 .9 2 2v12c0 1.1-.9 2-2 2H4c-1.1 0-2-.9-2-2V6c0-1.1.9-2 2-2z"}}),e("polyline",{attrs:{points:"22,6 12,13 2,6"}})])}},m={name:"MenuIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-menu"},t.data]),[e("line",{attrs:{x1:"3",y1:"12",x2:"21",y2:"12"}}),e("line",{attrs:{x1:"3",y1:"6",x2:"21",y2:"6"}}),e("line",{attrs:{x1:"3",y1:"18",x2:"21",y2:"18"}})])}},g={name:"MessageSquareIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-message-square"},t.data]),[e("path",{attrs:{d:"M21 15a2 2 0 0 1-2 2H7l-4 4V5a2 2 0 0 1 2-2h14a2 2 0 0 1 2 2z"}})])}},v={name:"MusicIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-music"},t.data]),[e("path",{attrs:{d:"M9 18V5l12-2v13"}}),e("circle",{attrs:{cx:"6",cy:"18",r:"3"}}),e("circle",{attrs:{cx:"18",cy:"16",r:"3"}})])}},y={name:"NavigationIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-navigation"},t.data]),[e("polygon",{attrs:{points:"3 11 22 2 13 21 11 13 3 11"}})])}},b={name:"PhoneIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-phone"},t.data]),[e("path",{attrs:{d:"M22 16.92v3a2 2 0 0 1-2.18 2 19.79 19.79 0 0 1-8.63-3.07 19.5 19.5 0 0 1-6-6 19.79 19.79 0 0 1-3.07-8.67A2 2 0 0 1 4.11 2h3a2 2 0 0 1 2 1.72 12.84 12.84 0 0 0 .7 2.81 2 2 0 0 1-.45 2.11L8.09 9.91a16 16 0 0 0 6 6l1.27-1.27a2 2 0 0 1 2.11-.45 12.84 12.84 0 0 0 2.81.7A2 2 0 0 1 22 16.92z"}})])}},w={name:"RssIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 
24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-rss"},t.data]),[e("path",{attrs:{d:"M4 11a9 9 0 0 1 9 9"}}),e("path",{attrs:{d:"M4 4a16 16 0 0 1 16 16"}}),e("circle",{attrs:{cx:"5",cy:"19",r:"1"}})])}},k={name:"TagIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-tag"},t.data]),[e("path",{attrs:{d:"M20.59 13.41l-7.17 7.17a2 2 0 0 1-2.83 0L2 12V2h10l8.59 8.59a2 2 0 0 1 0 2.82z"}}),e("line",{attrs:{x1:"7",y1:"7",x2:"7",y2:"7"}})])}},C={name:"TwitterIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-twitter"},t.data]),[e("path",{attrs:{d:"M23 3a10.9 10.9 0 0 1-3.14 1.53 4.48 4.48 0 0 0-7.86 3v1A10.66 10.66 0 0 1 3 4s-4 9 5 13a11.64 11.64 0 0 1-7 2c9 5 20 0 20-11.5a4.5 4.5 0 0 0-.08-.83A7.72 7.72 0 0 0 23 3z"}})])}},_={name:"VideoIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-video"},t.data]),[e("polygon",{attrs:{points:"23 7 16 12 23 17 23 7"}}),e("rect",{attrs:{x:"1",y:"5",width:"15",height:"14",rx:"2",ry:"2"}})])}},x={name:"XIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-x"},t.data]),[e("line",{attrs:{x1:"18",y1:"6",x2:"6",y2:"18"}}),e("line",{attrs:{x1:"6",y1:"6",x2:"18",y2:"18"}})])}},S={name:"YoutubeIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-youtube"},t.data]),[e("path",{attrs:{d:"M22.54 6.42a2.78 2.78 0 0 0-1.94-2C18.88 4 12 4 12 4s-6.88 0-8.6.46a2.78 2.78 0 0 0-1.94 2A29 29 0 0 0 1 11.75a29 29 0 0 0 .46 5.33A2.78 2.78 0 0 0 3.4 19c1.72.46 8.6.46 8.6.46s6.88 0 8.6-.46a2.78 2.78 0 0 0 1.94-2 29 29 0 0 0 .46-5.25 29 29 0 0 0-.46-5.33z"}}),e("polygon",{attrs:{points:"9.75 15.02 15.5 11.75 9.75 8.48 9.75 15.02"}})])}}},function(e,t,n){"use strict";var o=function(e){return e&&e.Math===Math&&e};e.exports=o("object"==typeof globalThis&&globalThis)||o("object"==typeof window&&window)||o("object"==typeof self&&self)||o("object"==typeof global&&global)||o("object"==typeof this&&this)||function(){return this}()||Function("return this")()},function(e,t,n){"use strict";var o="object"==typeof document&&document.all;e.exports=void 0===o&&void 0!==o?function(e){return"function"==typeof e||e===o}:function(e){return"function"==typeof e}},function(e,t,n){"use strict";e.exports=function(e){try{return!!e()}catch(e){return!0}}},function(e,t,n){"use strict";var o=n(37),r=Function.prototype,i=r.call,a=o&&r.bind.bind(i,i);e.exports=o?a:function(e){return function(){return 
i.apply(e,arguments)}}},function(e,t,n){"use strict";var o=n(9);e.exports=!o((function(){return 7!==Object.defineProperty({},1,{get:function(){return 7}})[1]}))},function(e,t,n){var o=n(83),r="object"==typeof self&&self&&self.Object===Object&&self,i=o||r||Function("return this")();e.exports=i},function(e,t){var n=Array.isArray;e.exports=n},function(e,t,n){"use strict";var o=n(8);e.exports=function(e){return"object"==typeof e?null!==e:o(e)}},function(e,t,n){"use strict";var o=n(10),r=n(45),i=o({}.hasOwnProperty);e.exports=Object.hasOwn||function(e,t){return i(r(e),t)}},function(e,t,n){var o=n(195),r=n(198);e.exports=function(e,t){var n=r(e,t);return o(n)?n:void 0}},function(e,t){e.exports=function(e){return null!=e&&"object"==typeof e}},function(e,t,n){"use strict";var o=n(11),r=n(23),i=n(38);e.exports=o?function(e,t,n){return r.f(e,t,i(1,n))}:function(e,t,n){return e[t]=n,e}},function(e,t,n){var o=n(12).Symbol;e.exports=o},function(e,t,n){var o=n(19),r=n(181),i=n(182),a=o?o.toStringTag:void 0;e.exports=function(e){return null==e?void 0===e?"[object Undefined]":"[object Null]":a&&a in Object(e)?r(e):i(e)}},function(e,t,n){var o=n(54);e.exports=function(e){if("string"==typeof e||o(e))return e;var t=e+"";return"0"==t&&1/e==-1/0?"-0":t}},function(e,t,n){"use strict";var o=n(10),r=o({}.toString),i=o("".slice);e.exports=function(e){return i(r(e),8,-1)}},function(e,t,n){"use strict";var o=n(11),r=n(78),i=n(138),a=n(46),s=n(69),c=TypeError,u=Object.defineProperty,l=Object.getOwnPropertyDescriptor;t.f=o?i?function(e,t,n){if(a(e),t=s(t),a(n),"function"==typeof e&&"prototype"===t&&"value"in n&&"writable"in n&&!n.writable){var o=l(e,t);o&&o.writable&&(e[t]=n.value,n={configurable:"configurable"in n?n.configurable:o.configurable,enumerable:"enumerable"in n?n.enumerable:o.enumerable,writable:!1})}return u(e,t,n)}:u:function(e,t,n){if(a(e),t=s(t),a(n),r)try{return u(e,t,n)}catch(e){}if("get"in n||"set"in n)throw new c("Accessors not supported");return"value"in n&&(e[t]=n.value),e}},function(e,t,n){var o=n(185),r=n(186),i=n(187),a=n(188),s=n(189);function c(e){var t=-1,n=null==e?0:e.length;for(this.clear();++t1){const t=this.$page.path,n=this.$router.options.routes,o=this.$site.themeConfig.locales||{},r={text:this.$themeLocaleConfig.selectText||"Languages",ariaLabel:this.$themeLocaleConfig.ariaLabel||"Select language",items:Object.keys(e).map(r=>{const i=e[r],a=o[r]&&o[r].label||i.lang;let s;return i.lang===this.$lang?s=t:(s=t.replace(this.$localeConfig.path,r),n.some(e=>e.path===s)||(s=r)),{text:a,link:s}})};return[...this.userNav,r]}return this.userNav},userLinks(){return(this.nav||[]).map(e=>Object.assign(Object(r.h)(e),{items:(e.items||[]).map(r.h)}))},repoLink(){const{repo:e}=this.$site.themeConfig;return e?/^https?:/.test(e)?e:"https://github.com/"+e:null},repoLabel(){if(!this.repoLink)return;if(this.$site.themeConfig.repoLabel)return this.$site.themeConfig.repoLabel;const e=this.repoLink.match(/^https?:\/\/[^/]+/)[0],t=["GitHub","GitLab","Bitbucket"];for(let n=0;ne===this.link):"/"===this.link},isNonHttpURI(){return Object(o.f)(this.link)||Object(o.g)(this.link)},isBlankTarget(){return"_blank"===this.target},isInternal(){return!Object(o.e)(this.link)&&!this.isBlankTarget},target(){return this.isNonHttpURI?null:this.item.target?this.item.target:Object(o.e)(this.link)?"_blank":""},rel(){return this.isNonHttpURI||!1===this.item.rel?null:this.item.rel?this.item.rel:this.isBlankTarget?"noopener noreferrer":null}},methods:{focusoutAction(){this.$emit("focusout")}}},i=n(4),a=Object(i.a)(r,(function(){var 
e=this,t=e._self._c;return e.isInternal?t("RouterLink",{staticClass:"nav-link",attrs:{to:e.link,exact:e.exact},nativeOn:{focusout:function(t){return e.focusoutAction.apply(null,arguments)}}},[e._v("\n "+e._s(e.item.text)+"\n")]):t("a",{staticClass:"nav-link external",attrs:{href:e.link,target:e.target,rel:e.rel},on:{focusout:e.focusoutAction}},[e._v("\n "+e._s(e.item.text)+"\n "),e.isBlankTarget?t("OutboundLink"):e._e()],1)}),[],!1,null,null,null);t.default=a.exports},function(e,t,n){"use strict";n.r(t);var o={name:"DropdownTransition",methods:{setHeight(e){e.style.height=e.scrollHeight+"px"},unsetHeight(e){e.style.height=""}}},r=(n(264),n(4)),i=Object(r.a)(o,(function(){return(0,this._self._c)("transition",{attrs:{name:"dropdown"},on:{enter:this.setHeight,"after-enter":this.unsetHeight,"before-leave":this.setHeight}},[this._t("default")],2)}),[],!1,null,null,null);t.default=i.exports},function(e,t,n){var o,r; +var o=Object.freeze({}),r=Array.isArray;function i(e){return null==e}function a(e){return null!=e}function s(e){return!0===e}function c(e){return"string"==typeof e||"number"==typeof e||"symbol"==typeof e||"boolean"==typeof e}function u(e){return"function"==typeof e}function l(e){return null!==e&&"object"==typeof e}var d=Object.prototype.toString;function h(e){return"[object Object]"===d.call(e)}function p(e){return"[object RegExp]"===d.call(e)}function f(e){var t=parseFloat(String(e));return t>=0&&Math.floor(t)===t&&isFinite(e)}function m(e){return a(e)&&"function"==typeof e.then&&"function"==typeof e.catch}function g(e){return null==e?"":Array.isArray(e)||h(e)&&e.toString===d?JSON.stringify(e,v,2):String(e)}function v(e,t){return t&&t.__v_isRef?t.value:t}function y(e){var t=parseFloat(e);return isNaN(t)?e:t}function b(e,t){for(var n=Object.create(null),o=e.split(","),r=0;r-1)return e.splice(o,1)}}var C=Object.prototype.hasOwnProperty;function _(e,t){return C.call(e,t)}function x(e){var t=Object.create(null);return function(n){return t[n]||(t[n]=e(n))}}var S=/-(\w)/g,O=x((function(e){return e.replace(S,(function(e,t){return t?t.toUpperCase():""}))})),j=x((function(e){return e.charAt(0).toUpperCase()+e.slice(1)})),P=/\B([A-Z])/g,$=x((function(e){return e.replace(P,"-$1").toLowerCase()}));var T=Function.prototype.bind?function(e,t){return e.bind(t)}:function(e,t){function n(n){var o=arguments.length;return o?o>1?e.apply(t,arguments):e.call(t,n):e.call(t)}return n._length=e.length,n};function A(e,t){t=t||0;for(var n=e.length-t,o=new Array(n);n--;)o[n]=e[n+t];return o}function E(e,t){for(var n in t)e[n]=t[n];return e}function I(e){for(var t={},n=0;n0,X=J&&J.indexOf("edge/")>0;J&&J.indexOf("android");var ee=J&&/iphone|ipad|ipod|ios/.test(J);J&&/chrome\/\d+/.test(J),J&&/phantomjs/.test(J);var te,ne=J&&J.match(/firefox\/(\d+)/),oe={}.watch,re=!1;if(Z)try{var ie={};Object.defineProperty(ie,"passive",{get:function(){re=!0}}),window.addEventListener("test-passive",null,ie)}catch(e){}var ae=function(){return void 0===te&&(te=!Z&&"undefined"!=typeof global&&(global.process&&"server"===global.process.env.VUE_ENV)),te},se=Z&&window.__VUE_DEVTOOLS_GLOBAL_HOOK__;function ce(e){return"function"==typeof e&&/native code/.test(e.toString())}var ue,le="undefined"!=typeof Symbol&&ce(Symbol)&&"undefined"!=typeof Reflect&&ce(Reflect.ownKeys);ue="undefined"!=typeof Set&&ce(Set)?Set:function(){function e(){this.set=Object.create(null)}return e.prototype.has=function(e){return!0===this.set[e]},e.prototype.add=function(e){this.set[e]=!0},e.prototype.clear=function(){this.set=Object.create(null)},e}();var 
de=null;function he(e){void 0===e&&(e=null),e||de&&de._scope.off(),de=e,e&&e._scope.on()}var pe=function(){function e(e,t,n,o,r,i,a,s){this.tag=e,this.data=t,this.children=n,this.text=o,this.elm=r,this.ns=void 0,this.context=i,this.fnContext=void 0,this.fnOptions=void 0,this.fnScopeId=void 0,this.key=t&&t.key,this.componentOptions=a,this.componentInstance=void 0,this.parent=void 0,this.raw=!1,this.isStatic=!1,this.isRootInsert=!0,this.isComment=!1,this.isCloned=!1,this.isOnce=!1,this.asyncFactory=s,this.asyncMeta=void 0,this.isAsyncPlaceholder=!1}return Object.defineProperty(e.prototype,"child",{get:function(){return this.componentInstance},enumerable:!1,configurable:!0}),e}(),fe=function(e){void 0===e&&(e="");var t=new pe;return t.text=e,t.isComment=!0,t};function me(e){return new pe(void 0,void 0,void 0,String(e))}function ge(e){var t=new pe(e.tag,e.data,e.children&&e.children.slice(),e.text,e.elm,e.context,e.componentOptions,e.asyncFactory);return t.ns=e.ns,t.isStatic=e.isStatic,t.key=e.key,t.isComment=e.isComment,t.fnContext=e.fnContext,t.fnOptions=e.fnOptions,t.fnScopeId=e.fnScopeId,t.asyncMeta=e.asyncMeta,t.isCloned=!0,t}"function"==typeof SuppressedError&&SuppressedError;var ve=0,ye=[],be=function(){function e(){this._pending=!1,this.id=ve++,this.subs=[]}return e.prototype.addSub=function(e){this.subs.push(e)},e.prototype.removeSub=function(e){this.subs[this.subs.indexOf(e)]=null,this._pending||(this._pending=!0,ye.push(this))},e.prototype.depend=function(t){e.target&&e.target.addDep(this)},e.prototype.notify=function(e){var t=this.subs.filter((function(e){return e}));for(var n=0,o=t.length;n0&&(Je((u=e(u,"".concat(n||"","_").concat(o)))[0])&&Je(d)&&(h[l]=me(d.text+u[0].text),u.shift()),h.push.apply(h,u)):c(u)?Je(d)?h[l]=me(d.text+u):""!==u&&h.push(me(u)):Je(u)&&Je(d)?h[l]=me(d.text+u.text):(s(t._isVList)&&a(u.tag)&&i(u.key)&&a(n)&&(u.key="__vlist".concat(n,"_").concat(o,"__")),h.push(u)));return h}(e):void 0}function Je(e){return a(e)&&a(e.text)&&!1===e.isComment}function Ke(e,t){var n,o,i,s,c=null;if(r(e)||"string"==typeof e)for(c=new Array(e.length),n=0,o=e.length;n0,s=t?!!t.$stable:!a,c=t&&t.$key;if(t){if(t._normalized)return t._normalized;if(s&&r&&r!==o&&c===r.$key&&!a&&!r.$hasNormal)return r;for(var u in i={},t)t[u]&&"$"!==u[0]&&(i[u]=gt(e,n,u,t[u]))}else i={};for(var l in n)l in i||(i[l]=vt(n,l));return t&&Object.isExtensible(t)&&(t._normalized=i),V(i,"$stable",s),V(i,"$key",c),V(i,"$hasNormal",a),i}function gt(e,t,n,o){var i=function(){var t=de;he(e);var n=arguments.length?o.apply(null,arguments):o({}),i=(n=n&&"object"==typeof n&&!r(n)?[n]:Ze(n))&&n[0];return he(t),n&&(!i||1===n.length&&i.isComment&&!ft(i))?void 0:n};return o.proxy&&Object.defineProperty(t,n,{get:i,enumerable:!0,configurable:!0}),i}function vt(e,t){return function(){return e[t]}}function yt(e){return{get attrs(){if(!e._attrsProxy){var t=e._attrsProxy={};V(t,"_v_attr_proxy",!0),bt(t,e.$attrs,o,e,"$attrs")}return e._attrsProxy},get listeners(){e._listenersProxy||bt(e._listenersProxy={},e.$listeners,o,e,"$listeners");return e._listenersProxy},get slots(){return function(e){e._slotsProxy||kt(e._slotsProxy={},e.$scopedSlots);return e._slotsProxy}(e)},emit:T(e.$emit,e),expose:function(t){t&&Object.keys(t).forEach((function(n){return Re(e,t,n)}))}}}function bt(e,t,n,o,r){var i=!1;for(var a in t)a in e?t[a]!==n[a]&&(i=!0):(i=!0,wt(e,a,o,r));for(var a in e)a in t||(i=!0,delete e[a]);return i}function wt(e,t,n,o){Object.defineProperty(e,t,{enumerable:!0,configurable:!0,get:function(){return n[o][t]}})}function 
kt(e,t){for(var n in t)e[n]=t[n];for(var n in e)n in t||delete e[n]}var Ct=null;function _t(e,t){return(e.__esModule||le&&"Module"===e[Symbol.toStringTag])&&(e=e.default),l(e)?t.extend(e):e}function xt(e){if(r(e))for(var t=0;tdocument.createEvent("Event").timeStamp&&(un=function(){return ln.now()})}var dn=function(e,t){if(e.post){if(!t.post)return 1}else if(t.post)return-1;return e.id-t.id};function hn(){var e,t;for(cn=un(),an=!0,tn.sort(dn),sn=0;snsn&&tn[n].id>e.id;)n--;tn.splice(n+1,0,e)}else tn.push(e);rn||(rn=!0,Wt(hn))}}function fn(e,t){if(e){for(var n=Object.create(null),o=le?Reflect.ownKeys(e):Object.keys(e),r=0;r-1)if(i&&!_(r,"default"))a=!1;else if(""===a||a===$(e)){var c=Dn(String,r.type);(c<0||s-1:"string"==typeof e?e.split(",").indexOf(t)>-1:!!p(e)&&e.test(t)}function Kn(e,t){var n=e.cache,o=e.keys,r=e._vnode,i=e.$vnode;for(var a in n){var s=n[a];if(s){var c=s.name;c&&!t(c)&&Qn(n,a,o,r)}}i.componentOptions.children=void 0}function Qn(e,t,n,o){var r=e[t];!r||o&&r.tag===o.tag||r.componentInstance.$destroy(),e[t]=null,k(n,t)}!function(e){e.prototype._init=function(e){var t=this;t._uid=qn++,t._isVue=!0,t.__v_skip=!0,t._scope=new ze(!0),t._scope.parent=void 0,t._scope._vm=!0,e&&e._isComponent?function(e,t){var n=e.$options=Object.create(e.constructor.options),o=t._parentVnode;n.parent=t.parent,n._parentVnode=o;var r=o.componentOptions;n.propsData=r.propsData,n._parentListeners=r.listeners,n._renderChildren=r.children,n._componentTag=r.tag,t.render&&(n.render=t.render,n.staticRenderFns=t.staticRenderFns)}(t,e):t.$options=Tn(Vn(t.constructor),e||{},t),t._renderProxy=t,t._self=t,function(e){var t=e.$options,n=t.parent;if(n&&!t.abstract){for(;n.$options.abstract&&n.$parent;)n=n.$parent;n.$children.push(e)}e.$parent=n,e.$root=n?n.$root:e,e.$children=[],e.$refs={},e._provided=n?n._provided:Object.create(null),e._watcher=null,e._inactive=null,e._directInactive=!1,e._isMounted=!1,e._isDestroyed=!1,e._isBeingDestroyed=!1}(t),function(e){e._events=Object.create(null),e._hasHookEvent=!1;var t=e.$options._parentListeners;t&&Zt(e,t)}(t),function(e){e._vnode=null,e._staticTrees=null;var t=e.$options,n=e.$vnode=t._parentVnode,r=n&&n.context;e.$slots=ht(t._renderChildren,r),e.$scopedSlots=n?mt(e.$parent,n.data.scopedSlots,e.$slots):o,e._c=function(t,n,o,r){return St(e,t,n,o,r,!1)},e.$createElement=function(t,n,o,r){return St(e,t,n,o,r,!0)};var i=n&&n.data;Ee(e,"$attrs",i&&i.attrs||o,null,!0),Ee(e,"$listeners",t._parentListeners||o,null,!0)}(t),en(t,"beforeCreate",void 0,!1),function(e){var t=fn(e.$options.inject,e);t&&(Pe(!1),Object.keys(t).forEach((function(n){Ee(e,n,t[n])})),Pe(!0))}(t),Wn(t),function(e){var t=e.$options.provide;if(t){var n=u(t)?t.call(e):t;if(!l(n))return;for(var o=He(e),r=le?Reflect.ownKeys(n):Object.keys(n),i=0;i1?A(n):n;for(var o=A(arguments,1),r='event handler for "'.concat(e,'"'),i=0,a=n.length;iparseInt(this.max)&&Qn(e,t[0],t,this._vnode),this.vnodeToCache=null}}},created:function(){this.cache=Object.create(null),this.keys=[]},destroyed:function(){for(var e in this.cache)Qn(this.cache,e,this.keys)},mounted:function(){var e=this;this.cacheVNode(),this.$watch("include",(function(t){Kn(e,(function(e){return Jn(t,e)}))})),this.$watch("exclude",(function(t){Kn(e,(function(e){return!Jn(t,e)}))}))},updated:function(){this.cacheVNode()},render:function(){var e=this.$slots.default,t=xt(e),n=t&&t.componentOptions;if(n){var o=Zn(n),r=this.include,i=this.exclude;if(r&&(!o||!Jn(r,o))||i&&o&&Jn(i,o))return t;var 
a=this.cache,s=this.keys,c=null==t.key?n.Ctor.cid+(n.tag?"::".concat(n.tag):""):t.key;a[c]?(t.componentInstance=a[c].componentInstance,k(s,c),s.push(c)):(this.vnodeToCache=t,this.keyToCache=c),t.data.keepAlive=!0}return t||e&&e[0]}}};!function(e){var t={get:function(){return H}};Object.defineProperty(e,"config",t),e.util={warn:_n,extend:E,mergeOptions:Tn,defineReactive:Ee},e.set=Ie,e.delete=Le,e.nextTick=Wt,e.observable=function(e){return Ae(e),e},e.options=Object.create(null),U.forEach((function(t){e.options[t+"s"]=Object.create(null)})),e.options._base=e,E(e.options.components,eo),function(e){e.use=function(e){var t=this._installedPlugins||(this._installedPlugins=[]);if(t.indexOf(e)>-1)return this;var n=A(arguments,1);return n.unshift(this),u(e.install)?e.install.apply(e,n):u(e)&&e.apply(null,n),t.push(e),this}}(e),function(e){e.mixin=function(e){return this.options=Tn(this.options,e),this}}(e),Yn(e),function(e){U.forEach((function(t){e[t]=function(e,n){return n?("component"===t&&h(n)&&(n.name=n.name||e,n=this.options._base.extend(n)),"directive"===t&&u(n)&&(n={bind:n,update:n}),this.options[t+"s"][e]=n,n):this.options[t+"s"][e]}}))}(e)}(Gn),Object.defineProperty(Gn.prototype,"$isServer",{get:ae}),Object.defineProperty(Gn.prototype,"$ssrContext",{get:function(){return this.$vnode&&this.$vnode.ssrContext}}),Object.defineProperty(Gn,"FunctionalRenderContext",{value:mn}),Gn.version="2.7.16";var to=b("style,class"),no=b("input,textarea,option,select,progress"),oo=b("contenteditable,draggable,spellcheck"),ro=b("events,caret,typing,plaintext-only"),io=b("allowfullscreen,async,autofocus,autoplay,checked,compact,controls,declare,default,defaultchecked,defaultmuted,defaultselected,defer,disabled,enabled,formnovalidate,hidden,indeterminate,inert,ismap,itemscope,loop,multiple,muted,nohref,noresize,noshade,novalidate,nowrap,open,pauseonexit,readonly,required,reversed,scoped,seamless,selected,sortable,truespeed,typemustmatch,visible"),ao="http://www.w3.org/1999/xlink",so=function(e){return":"===e.charAt(5)&&"xlink"===e.slice(0,5)},co=function(e){return so(e)?e.slice(6,e.length):""},uo=function(e){return null==e||!1===e};function lo(e){for(var t=e.data,n=e,o=e;a(o.componentInstance);)(o=o.componentInstance._vnode)&&o.data&&(t=ho(o.data,t));for(;a(n=n.parent);)n&&n.data&&(t=ho(t,n.data));return function(e,t){if(a(e)||a(t))return po(e,fo(t));return""}(t.staticClass,t.class)}function ho(e,t){return{staticClass:po(e.staticClass,t.staticClass),class:a(e.class)?[e.class,t.class]:t.class}}function po(e,t){return e?t?e+" "+t:e:t||""}function fo(e){return Array.isArray(e)?function(e){for(var t,n="",o=0,r=e.length;o-1?Fo(e,t,n):io(t)?uo(n)?e.removeAttribute(t):(n="allowfullscreen"===t&&"EMBED"===e.tagName?"true":t,e.setAttribute(t,n)):oo(t)?e.setAttribute(t,function(e,t){return uo(t)||"false"===t?"false":"contenteditable"===e&&ro(t)?t:"true"}(t,n)):so(t)?uo(n)?e.removeAttributeNS(ao,co(t)):e.setAttributeNS(ao,t,n):Fo(e,t,n)}function Fo(e,t,n){if(uo(n))e.removeAttribute(t);else{if(K&&!Q&&"TEXTAREA"===e.tagName&&"placeholder"===t&&""!==n&&!e.__ieph){var o=function(t){t.stopImmediatePropagation(),e.removeEventListener("input",o)};e.addEventListener("input",o),e.__ieph=!0}e.setAttribute(t,n)}}var Wo={create:Do,update:Do};function Ro(e,t){var n=t.elm,o=t.data,r=e.data;if(!(i(o.staticClass)&&i(o.class)&&(i(r)||i(r.staticClass)&&i(r.class)))){var s=lo(t),c=n._transitionClasses;a(c)&&(s=po(s,fo(c))),s!==n._prevClass&&(n.setAttribute("class",s),n._prevClass=s)}}var Uo,zo={create:Ro,update:Ro};function Ho(e,t,n){var 
o=Uo;return function r(){var i=t.apply(null,arguments);null!==i&&Vo(e,r,n,o)}}var Bo=At&&!(ne&&Number(ne[1])<=53);function qo(e,t,n,o){if(Bo){var r=cn,i=t;t=i._wrapper=function(e){if(e.target===e.currentTarget||e.timeStamp>=r||e.timeStamp<=0||e.target.ownerDocument!==document)return i.apply(this,arguments)}}Uo.addEventListener(e,t,re?{capture:n,passive:o}:n)}function Vo(e,t,n,o){(o||Uo).removeEventListener(e,t._wrapper||t,n)}function Go(e,t){if(!i(e.data.on)||!i(t.data.on)){var n=t.data.on||{},o=e.data.on||{};Uo=t.elm||e.elm,function(e){if(a(e.__r)){var t=K?"change":"input";e[t]=[].concat(e.__r,e[t]||[]),delete e.__r}a(e.__c)&&(e.change=[].concat(e.__c,e.change||[]),delete e.__c)}(n),Ve(n,o,qo,Vo,Ho,t.context),Uo=void 0}}var Yo,Zo={create:Go,update:Go,destroy:function(e){return Go(e,So)}};function Jo(e,t){if(!i(e.data.domProps)||!i(t.data.domProps)){var n,o,r=t.elm,c=e.data.domProps||{},u=t.data.domProps||{};for(n in(a(u.__ob__)||s(u._v_attr_proxy))&&(u=t.data.domProps=E({},u)),c)n in u||(r[n]="");for(n in u){if(o=u[n],"textContent"===n||"innerHTML"===n){if(t.children&&(t.children.length=0),o===c[n])continue;1===r.childNodes.length&&r.removeChild(r.childNodes[0])}if("value"===n&&"PROGRESS"!==r.tagName){r._value=o;var l=i(o)?"":String(o);Ko(r,l)&&(r.value=l)}else if("innerHTML"===n&&vo(r.tagName)&&i(r.innerHTML)){(Yo=Yo||document.createElement("div")).innerHTML="".concat(o,"");for(var d=Yo.firstChild;r.firstChild;)r.removeChild(r.firstChild);for(;d.firstChild;)r.appendChild(d.firstChild)}else if(o!==c[n])try{r[n]=o}catch(e){}}}}function Ko(e,t){return!e.composing&&("OPTION"===e.tagName||function(e,t){var n=!0;try{n=document.activeElement!==e}catch(e){}return n&&e.value!==t}(e,t)||function(e,t){var n=e.value,o=e._vModifiers;if(a(o)){if(o.number)return y(n)!==y(t);if(o.trim)return n.trim()!==t.trim()}return n!==t}(e,t))}var Qo={create:Jo,update:Jo},Xo=x((function(e){var t={},n=/:(.+)/;return e.split(/;(?![^(]*\))/g).forEach((function(e){if(e){var o=e.split(n);o.length>1&&(t[o[0].trim()]=o[1].trim())}})),t}));function er(e){var t=tr(e.style);return e.staticStyle?E(e.staticStyle,t):t}function tr(e){return Array.isArray(e)?I(e):"string"==typeof e?Xo(e):e}var nr,or=/^--/,rr=/\s*!important$/,ir=function(e,t,n){if(or.test(t))e.style.setProperty(t,n);else if(rr.test(n))e.style.setProperty($(t),n.replace(rr,""),"important");else{var o=sr(t);if(Array.isArray(n))for(var r=0,i=n.length;r-1?t.split(lr).forEach((function(t){return e.classList.add(t)})):e.classList.add(t);else{var n=" ".concat(e.getAttribute("class")||""," ");n.indexOf(" "+t+" ")<0&&e.setAttribute("class",(n+t).trim())}}function hr(e,t){if(t&&(t=t.trim()))if(e.classList)t.indexOf(" ")>-1?t.split(lr).forEach((function(t){return e.classList.remove(t)})):e.classList.remove(t),e.classList.length||e.removeAttribute("class");else{for(var n=" ".concat(e.getAttribute("class")||""," "),o=" "+t+" ";n.indexOf(o)>=0;)n=n.replace(o," ");(n=n.trim())?e.setAttribute("class",n):e.removeAttribute("class")}}function pr(e){if(e){if("object"==typeof e){var t={};return!1!==e.css&&E(t,fr(e.name||"v")),E(t,e),t}return"string"==typeof e?fr(e):void 0}}var fr=x((function(e){return{enterClass:"".concat(e,"-enter"),enterToClass:"".concat(e,"-enter-to"),enterActiveClass:"".concat(e,"-enter-active"),leaveClass:"".concat(e,"-leave"),leaveToClass:"".concat(e,"-leave-to"),leaveActiveClass:"".concat(e,"-leave-active")}})),mr=Z&&!Q,gr="transition",vr="transitionend",yr="animation",br="animationend";mr&&(void 0===window.ontransitionend&&void 
0!==window.onwebkittransitionend&&(gr="WebkitTransition",vr="webkitTransitionEnd"),void 0===window.onanimationend&&void 0!==window.onwebkitanimationend&&(yr="WebkitAnimation",br="webkitAnimationEnd"));var wr=Z?window.requestAnimationFrame?window.requestAnimationFrame.bind(window):setTimeout:function(e){return e()};function kr(e){wr((function(){wr(e)}))}function Cr(e,t){var n=e._transitionClasses||(e._transitionClasses=[]);n.indexOf(t)<0&&(n.push(t),dr(e,t))}function _r(e,t){e._transitionClasses&&k(e._transitionClasses,t),hr(e,t)}function xr(e,t,n){var o=Or(e,t),r=o.type,i=o.timeout,a=o.propCount;if(!r)return n();var s="transition"===r?vr:br,c=0,u=function(){e.removeEventListener(s,l),n()},l=function(t){t.target===e&&++c>=a&&u()};setTimeout((function(){c0&&(n="transition",l=a,d=i.length):"animation"===t?u>0&&(n="animation",l=u,d=c.length):d=(n=(l=Math.max(a,u))>0?a>u?"transition":"animation":null)?"transition"===n?i.length:c.length:0,{type:n,timeout:l,propCount:d,hasTransform:"transition"===n&&Sr.test(o[gr+"Property"])}}function jr(e,t){for(;e.length1}function Ir(e,t){!0!==t.data.show&&$r(t)}var Lr=function(e){var t,n,o={},u=e.modules,l=e.nodeOps;for(t=0;tf?w(e,i(n[v+1])?null:n[v+1].elm,n,p,v,o):p>v&&C(t,d,f)}(d,m,v,n,u):a(v)?(a(e.text)&&l.setTextContent(d,""),w(d,null,v,0,v.length-1,n)):a(m)?C(m,0,m.length-1):a(e.text)&&l.setTextContent(d,""):e.text!==t.text&&l.setTextContent(d,t.text),a(f)&&a(p=f.hook)&&a(p=p.postpatch)&&p(e,t)}}}function O(e,t,n){if(s(n)&&a(e.parent))e.parent.data.pendingInsert=t;else for(var o=0;o-1,a.selected!==i&&(a.selected=i);else if(N(Wr(a),o))return void(e.selectedIndex!==s&&(e.selectedIndex=s));r||(e.selectedIndex=-1)}}function Fr(e,t){return t.every((function(t){return!N(t,e)}))}function Wr(e){return"_value"in e?e._value:e.value}function Rr(e){e.target.composing=!0}function Ur(e){e.target.composing&&(e.target.composing=!1,zr(e.target,"input"))}function zr(e,t){var n=document.createEvent("HTMLEvents");n.initEvent(t,!0,!0),e.dispatchEvent(n)}function Hr(e){return!e.componentInstance||e.data&&e.data.transition?e:Hr(e.componentInstance._vnode)}var Br={model:Mr,show:{bind:function(e,t,n){var o=t.value,r=(n=Hr(n)).data&&n.data.transition,i=e.__vOriginalDisplay="none"===e.style.display?"":e.style.display;o&&r?(n.data.show=!0,$r(n,(function(){e.style.display=i}))):e.style.display=o?i:"none"},update:function(e,t,n){var o=t.value;!o!=!t.oldValue&&((n=Hr(n)).data&&n.data.transition?(n.data.show=!0,o?$r(n,(function(){e.style.display=e.__vOriginalDisplay})):Tr(n,(function(){e.style.display="none"}))):e.style.display=o?e.__vOriginalDisplay:"none")},unbind:function(e,t,n,o,r){r||(e.style.display=e.__vOriginalDisplay)}}},qr={name:String,appear:Boolean,css:Boolean,mode:String,type:String,enterClass:String,leaveClass:String,enterToClass:String,leaveToClass:String,enterActiveClass:String,leaveActiveClass:String,appearClass:String,appearActiveClass:String,appearToClass:String,duration:[Number,String,Object]};function Vr(e){var t=e&&e.componentOptions;return t&&t.Ctor.options.abstract?Vr(xt(t.children)):e}function Gr(e){var t={},n=e.$options;for(var o in n.propsData)t[o]=e[o];var r=n._parentListeners;for(var o in r)t[O(o)]=r[o];return t}function Yr(e,t){if(/\d-keep-alive$/.test(t.tag))return e("keep-alive",{props:t.componentOptions.propsData})}var Zr=function(e){return e.tag||ft(e)},Jr=function(e){return"show"===e.name},Kr={name:"transition",props:qr,abstract:!0,render:function(e){var t=this,n=this.$slots.default;if(n&&(n=n.filter(Zr)).length){0;var o=this.mode;0;var 
r=n[0];if(function(e){for(;e=e.parent;)if(e.data.transition)return!0}(this.$vnode))return r;var i=Vr(r);if(!i)return r;if(this._leaving)return Yr(e,r);var a="__transition-".concat(this._uid,"-");i.key=null==i.key?i.isComment?a+"comment":a+i.tag:c(i.key)?0===String(i.key).indexOf(a)?i.key:a+i.key:i.key;var s=(i.data||(i.data={})).transition=Gr(this),u=this._vnode,l=Vr(u);if(i.data.directives&&i.data.directives.some(Jr)&&(i.data.show=!0),l&&l.data&&!function(e,t){return t.key===e.key&&t.tag===e.tag}(i,l)&&!ft(l)&&(!l.componentInstance||!l.componentInstance._vnode.isComment)){var d=l.data.transition=E({},s);if("out-in"===o)return this._leaving=!0,Ge(d,"afterLeave",(function(){t._leaving=!1,t.$forceUpdate()})),Yr(e,r);if("in-out"===o){if(ft(i))return u;var h,p=function(){h()};Ge(s,"afterEnter",p),Ge(s,"enterCancelled",p),Ge(d,"delayLeave",(function(e){h=e}))}}return r}}},Qr=E({tag:String,moveClass:String},qr);function Xr(e){e.elm._moveCb&&e.elm._moveCb(),e.elm._enterCb&&e.elm._enterCb()}function ei(e){e.data.newPos=e.elm.getBoundingClientRect()}function ti(e){var t=e.data.pos,n=e.data.newPos,o=t.left-n.left,r=t.top-n.top;if(o||r){e.data.moved=!0;var i=e.elm.style;i.transform=i.WebkitTransform="translate(".concat(o,"px,").concat(r,"px)"),i.transitionDuration="0s"}}delete Qr.mode;var ni={Transition:Kr,TransitionGroup:{props:Qr,beforeMount:function(){var e=this,t=this._update;this._update=function(n,o){var r=Kt(e);e.__patch__(e._vnode,e.kept,!1,!0),e._vnode=e.kept,r(),t.call(e,n,o)}},render:function(e){for(var t=this.tag||this.$vnode.data.tag||"span",n=Object.create(null),o=this.prevChildren=this.children,r=this.$slots.default||[],i=this.children=[],a=Gr(this),s=0;s-1?bo[e]=t.constructor===window.HTMLUnknownElement||t.constructor===window.HTMLElement:bo[e]=/HTMLUnknownElement/.test(t.toString())},E(Gn.options.directives,Br),E(Gn.options.components,ni),Gn.prototype.__patch__=Z?Lr:L,Gn.prototype.$mount=function(e,t){return function(e,t,n){var o;e.$el=t,e.$options.render||(e.$options.render=fe),en(e,"beforeMount"),o=function(){e._update(e._render(),n)},new qt(e,o,L,{before:function(){e._isMounted&&!e._isDestroyed&&en(e,"beforeUpdate")}},!0),n=!1;var r=e._preWatchers;if(r)for(var i=0;iObject.assign({},e))).forEach(e=>{2===e.level?t=e:t&&(t.children||(t.children=[])).push(e)}),e.filter(e=>2===e.level)}function f(e){return Object.assign(e,{type:e.items&&e.items.length?"links":"link"})}},function(e,t,n){"use strict";n.d(t,"a",(function(){return i})),n.d(t,"b",(function(){return a})),n.d(t,"c",(function(){return s})),n.d(t,"d",(function(){return c})),n.d(t,"e",(function(){return u})),n.d(t,"f",(function(){return l})),n.d(t,"g",(function(){return d})),n.d(t,"h",(function(){return h})),n.d(t,"i",(function(){return p})),n.d(t,"j",(function(){return f})),n.d(t,"k",(function(){return m})),n.d(t,"l",(function(){return g})),n.d(t,"m",(function(){return v})),n.d(t,"n",(function(){return y})),n.d(t,"o",(function(){return b})),n.d(t,"p",(function(){return w})),n.d(t,"q",(function(){return k})),n.d(t,"r",(function(){return C})),n.d(t,"s",(function(){return _})),n.d(t,"t",(function(){return x})),n.d(t,"u",(function(){return S}));var o=n(0),r=n.n(o),i={name:"ClockIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather 
feather-clock"},t.data]),[e("circle",{attrs:{cx:"12",cy:"12",r:"10"}}),e("polyline",{attrs:{points:"12 6 12 12 16 14"}})])}},a={name:"CodepenIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-codepen"},t.data]),[e("polygon",{attrs:{points:"12 2 22 8.5 22 15.5 12 22 2 15.5 2 8.5 12 2"}}),e("line",{attrs:{x1:"12",y1:"22",x2:"12",y2:"15.5"}}),e("polyline",{attrs:{points:"22 8.5 12 15.5 2 8.5"}}),e("polyline",{attrs:{points:"2 15.5 12 8.5 22 15.5"}}),e("line",{attrs:{x1:"12",y1:"2",x2:"12",y2:"8.5"}})])}},s={name:"CodesandboxIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-codesandbox"},t.data]),[e("path",{attrs:{d:"M21 16V8a2 2 0 0 0-1-1.73l-7-4a2 2 0 0 0-2 0l-7 4A2 2 0 0 0 3 8v8a2 2 0 0 0 1 1.73l7 4a2 2 0 0 0 2 0l7-4A2 2 0 0 0 21 16z"}}),e("polyline",{attrs:{points:"7.5 4.21 12 6.81 16.5 4.21"}}),e("polyline",{attrs:{points:"7.5 19.79 7.5 14.6 3 12"}}),e("polyline",{attrs:{points:"21 12 16.5 14.6 16.5 19.79"}}),e("polyline",{attrs:{points:"3.27 6.96 12 12.01 20.73 6.96"}}),e("line",{attrs:{x1:"12",y1:"22.08",x2:"12",y2:"12"}})])}},c={name:"FacebookIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-facebook"},t.data]),[e("path",{attrs:{d:"M18 2h-3a5 5 0 0 0-5 5v3H7v4h3v8h4v-8h3l1-4h-4V7a1 1 0 0 1 1-1h3z"}})])}},u={name:"GithubIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-github"},t.data]),[e("path",{attrs:{d:"M9 19c-5 1.5-5-2.5-7-3m14 6v-3.87a3.37 3.37 0 0 0-.94-2.61c3.14-.35 6.44-1.54 6.44-7A5.44 5.44 0 0 0 20 4.77 5.07 5.07 0 0 0 19.91 1S18.73.65 16 2.48a13.38 13.38 0 0 0-7 0C6.27.65 5.09 1 5.09 1A5.07 5.07 0 0 0 5 4.77a5.44 5.44 0 0 0-1.5 3.78c0 5.42 3.3 6.61 6.44 7A3.37 3.37 0 0 0 9 18.13V22"}})])}},l={name:"GitlabIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-gitlab"},t.data]),[e("path",{attrs:{d:"M22.65 14.39L12 22.13 1.35 14.39a.84.84 0 0 1-.3-.94l1.22-3.78 2.44-7.51A.42.42 0 0 1 4.82 2a.43.43 0 0 1 .58 0 .42.42 0 0 1 .11.18l2.44 7.49h8.1l2.44-7.51A.42.42 0 0 1 18.6 2a.43.43 0 0 1 .58 0 .42.42 0 0 1 .11.18l2.44 7.51L23 13.45a.84.84 0 0 1-.35.94z"}})])}},d={name:"GlobeIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-globe"},t.data]),[e("circle",{attrs:{cx:"12",cy:"12",r:"10"}}),e("line",{attrs:{x1:"2",y1:"12",x2:"22",y2:"12"}}),e("path",{attrs:{d:"M12 
2a15.3 15.3 0 0 1 4 10 15.3 15.3 0 0 1-4 10 15.3 15.3 0 0 1-4-10 15.3 15.3 0 0 1 4-10z"}})])}},h={name:"InstagramIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-instagram"},t.data]),[e("rect",{attrs:{x:"2",y:"2",width:"20",height:"20",rx:"5",ry:"5"}}),e("path",{attrs:{d:"M16 11.37A4 4 0 1 1 12.63 8 4 4 0 0 1 16 11.37z"}}),e("line",{attrs:{x1:"17.5",y1:"6.5",x2:"17.5",y2:"6.5"}})])}},p={name:"LinkedinIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-linkedin"},t.data]),[e("path",{attrs:{d:"M16 8a6 6 0 0 1 6 6v7h-4v-7a2 2 0 0 0-2-2 2 2 0 0 0-2 2v7h-4v-7a6 6 0 0 1 6-6z"}}),e("rect",{attrs:{x:"2",y:"9",width:"4",height:"12"}}),e("circle",{attrs:{cx:"4",cy:"4",r:"2"}})])}},f={name:"MailIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-mail"},t.data]),[e("path",{attrs:{d:"M4 4h16c1.1 0 2 .9 2 2v12c0 1.1-.9 2-2 2H4c-1.1 0-2-.9-2-2V6c0-1.1.9-2 2-2z"}}),e("polyline",{attrs:{points:"22,6 12,13 2,6"}})])}},m={name:"MenuIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-menu"},t.data]),[e("line",{attrs:{x1:"3",y1:"12",x2:"21",y2:"12"}}),e("line",{attrs:{x1:"3",y1:"6",x2:"21",y2:"6"}}),e("line",{attrs:{x1:"3",y1:"18",x2:"21",y2:"18"}})])}},g={name:"MessageSquareIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-message-square"},t.data]),[e("path",{attrs:{d:"M21 15a2 2 0 0 1-2 2H7l-4 4V5a2 2 0 0 1 2-2h14a2 2 0 0 1 2 2z"}})])}},v={name:"MusicIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-music"},t.data]),[e("path",{attrs:{d:"M9 18V5l12-2v13"}}),e("circle",{attrs:{cx:"6",cy:"18",r:"3"}}),e("circle",{attrs:{cx:"18",cy:"16",r:"3"}})])}},y={name:"NavigationIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-navigation"},t.data]),[e("polygon",{attrs:{points:"3 11 22 2 13 21 11 13 3 11"}})])}},b={name:"PhoneIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 
24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-phone"},t.data]),[e("path",{attrs:{d:"M22 16.92v3a2 2 0 0 1-2.18 2 19.79 19.79 0 0 1-8.63-3.07 19.5 19.5 0 0 1-6-6 19.79 19.79 0 0 1-3.07-8.67A2 2 0 0 1 4.11 2h3a2 2 0 0 1 2 1.72 12.84 12.84 0 0 0 .7 2.81 2 2 0 0 1-.45 2.11L8.09 9.91a16 16 0 0 0 6 6l1.27-1.27a2 2 0 0 1 2.11-.45 12.84 12.84 0 0 0 2.81.7A2 2 0 0 1 22 16.92z"}})])}},w={name:"RssIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-rss"},t.data]),[e("path",{attrs:{d:"M4 11a9 9 0 0 1 9 9"}}),e("path",{attrs:{d:"M4 4a16 16 0 0 1 16 16"}}),e("circle",{attrs:{cx:"5",cy:"19",r:"1"}})])}},k={name:"TagIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-tag"},t.data]),[e("path",{attrs:{d:"M20.59 13.41l-7.17 7.17a2 2 0 0 1-2.83 0L2 12V2h10l8.59 8.59a2 2 0 0 1 0 2.82z"}}),e("line",{attrs:{x1:"7",y1:"7",x2:"7",y2:"7"}})])}},C={name:"TwitterIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-twitter"},t.data]),[e("path",{attrs:{d:"M23 3a10.9 10.9 0 0 1-3.14 1.53 4.48 4.48 0 0 0-7.86 3v1A10.66 10.66 0 0 1 3 4s-4 9 5 13a11.64 11.64 0 0 1-7 2c9 5 20 0 20-11.5a4.5 4.5 0 0 0-.08-.83A7.72 7.72 0 0 0 23 3z"}})])}},_={name:"VideoIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-video"},t.data]),[e("polygon",{attrs:{points:"23 7 16 12 23 17 23 7"}}),e("rect",{attrs:{x:"1",y:"5",width:"15",height:"14",rx:"2",ry:"2"}})])}},x={name:"XIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-x"},t.data]),[e("line",{attrs:{x1:"18",y1:"6",x2:"6",y2:"18"}}),e("line",{attrs:{x1:"6",y1:"6",x2:"18",y2:"18"}})])}},S={name:"YoutubeIcon",functional:!0,render:function(e,t){return e("svg",r()([{attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24",fill:"none",stroke:"currentColor","stroke-width":"2","stroke-linecap":"round","stroke-linejoin":"round"},class:"feather feather-youtube"},t.data]),[e("path",{attrs:{d:"M22.54 6.42a2.78 2.78 0 0 0-1.94-2C18.88 4 12 4 12 4s-6.88 0-8.6.46a2.78 2.78 0 0 0-1.94 2A29 29 0 0 0 1 11.75a29 29 0 0 0 .46 5.33A2.78 2.78 0 0 0 3.4 19c1.72.46 8.6.46 8.6.46s6.88 0 8.6-.46a2.78 2.78 0 0 0 1.94-2 29 29 0 0 0 .46-5.25 29 29 0 0 0-.46-5.33z"}}),e("polygon",{attrs:{points:"9.75 15.02 15.5 11.75 9.75 8.48 9.75 15.02"}})])}}},function(e,t,n){"use strict";var o=function(e){return e&&e.Math===Math&&e};e.exports=o("object"==typeof globalThis&&globalThis)||o("object"==typeof 
window&&window)||o("object"==typeof self&&self)||o("object"==typeof global&&global)||o("object"==typeof this&&this)||function(){return this}()||Function("return this")()},function(e,t,n){"use strict";var o="object"==typeof document&&document.all;e.exports=void 0===o&&void 0!==o?function(e){return"function"==typeof e||e===o}:function(e){return"function"==typeof e}},function(e,t,n){"use strict";e.exports=function(e){try{return!!e()}catch(e){return!0}}},function(e,t,n){"use strict";var o=n(37),r=Function.prototype,i=r.call,a=o&&r.bind.bind(i,i);e.exports=o?a:function(e){return function(){return i.apply(e,arguments)}}},function(e,t,n){"use strict";var o=n(9);e.exports=!o((function(){return 7!==Object.defineProperty({},1,{get:function(){return 7}})[1]}))},function(e,t,n){var o=n(83),r="object"==typeof self&&self&&self.Object===Object&&self,i=o||r||Function("return this")();e.exports=i},function(e,t){var n=Array.isArray;e.exports=n},function(e,t,n){"use strict";var o=n(8);e.exports=function(e){return"object"==typeof e?null!==e:o(e)}},function(e,t,n){"use strict";var o=n(10),r=n(45),i=o({}.hasOwnProperty);e.exports=Object.hasOwn||function(e,t){return i(r(e),t)}},function(e,t,n){var o=n(195),r=n(198);e.exports=function(e,t){var n=r(e,t);return o(n)?n:void 0}},function(e,t){e.exports=function(e){return null!=e&&"object"==typeof e}},function(e,t,n){"use strict";var o=n(11),r=n(23),i=n(38);e.exports=o?function(e,t,n){return r.f(e,t,i(1,n))}:function(e,t,n){return e[t]=n,e}},function(e,t,n){var o=n(12).Symbol;e.exports=o},function(e,t,n){var o=n(19),r=n(181),i=n(182),a=o?o.toStringTag:void 0;e.exports=function(e){return null==e?void 0===e?"[object Undefined]":"[object Null]":a&&a in Object(e)?r(e):i(e)}},function(e,t,n){var o=n(54);e.exports=function(e){if("string"==typeof e||o(e))return e;var t=e+"";return"0"==t&&1/e==-1/0?"-0":t}},function(e,t,n){"use strict";var o=n(10),r=o({}.toString),i=o("".slice);e.exports=function(e){return i(r(e),8,-1)}},function(e,t,n){"use strict";var o=n(11),r=n(78),i=n(138),a=n(46),s=n(69),c=TypeError,u=Object.defineProperty,l=Object.getOwnPropertyDescriptor;t.f=o?i?function(e,t,n){if(a(e),t=s(t),a(n),"function"==typeof e&&"prototype"===t&&"value"in n&&"writable"in n&&!n.writable){var o=l(e,t);o&&o.writable&&(e[t]=n.value,n={configurable:"configurable"in n?n.configurable:o.configurable,enumerable:"enumerable"in n?n.enumerable:o.enumerable,writable:!1})}return u(e,t,n)}:u:function(e,t,n){if(a(e),t=s(t),a(n),r)try{return u(e,t,n)}catch(e){}if("get"in n||"set"in n)throw new c("Accessors not supported");return"value"in n&&(e[t]=n.value),e}},function(e,t,n){var o=n(185),r=n(186),i=n(187),a=n(188),s=n(189);function c(e){var t=-1,n=null==e?0:e.length;for(this.clear();++t1){const t=this.$page.path,n=this.$router.options.routes,o=this.$site.themeConfig.locales||{},r={text:this.$themeLocaleConfig.selectText||"Languages",ariaLabel:this.$themeLocaleConfig.ariaLabel||"Select language",items:Object.keys(e).map(r=>{const i=e[r],a=o[r]&&o[r].label||i.lang;let s;return i.lang===this.$lang?s=t:(s=t.replace(this.$localeConfig.path,r),n.some(e=>e.path===s)||(s=r)),{text:a,link:s}})};return[...this.userNav,r]}return this.userNav},userLinks(){return(this.nav||[]).map(e=>Object.assign(Object(r.h)(e),{items:(e.items||[]).map(r.h)}))},repoLink(){const{repo:e}=this.$site.themeConfig;return e?/^https?:/.test(e)?e:"https://github.com/"+e:null},repoLabel(){if(!this.repoLink)return;if(this.$site.themeConfig.repoLabel)return this.$site.themeConfig.repoLabel;const 
e=this.repoLink.match(/^https?:\/\/[^/]+/)[0],t=["GitHub","GitLab","Bitbucket"];for(let n=0;ne===this.link):"/"===this.link},isNonHttpURI(){return Object(o.f)(this.link)||Object(o.g)(this.link)},isBlankTarget(){return"_blank"===this.target},isInternal(){return!Object(o.e)(this.link)&&!this.isBlankTarget},target(){return this.isNonHttpURI?null:this.item.target?this.item.target:Object(o.e)(this.link)?"_blank":""},rel(){return this.isNonHttpURI||!1===this.item.rel?null:this.item.rel?this.item.rel:this.isBlankTarget?"noopener noreferrer":null}},methods:{focusoutAction(){this.$emit("focusout")}}},i=n(4),a=Object(i.a)(r,(function(){var e=this,t=e._self._c;return e.isInternal?t("RouterLink",{staticClass:"nav-link",attrs:{to:e.link,exact:e.exact},nativeOn:{focusout:function(t){return e.focusoutAction.apply(null,arguments)}}},[e._v("\n "+e._s(e.item.text)+"\n")]):t("a",{staticClass:"nav-link external",attrs:{href:e.link,target:e.target,rel:e.rel},on:{focusout:e.focusoutAction}},[e._v("\n "+e._s(e.item.text)+"\n "),e.isBlankTarget?t("OutboundLink"):e._e()],1)}),[],!1,null,null,null);t.default=a.exports},function(e,t,n){"use strict";n.r(t);var o={name:"DropdownTransition",methods:{setHeight(e){e.style.height=e.scrollHeight+"px"},unsetHeight(e){e.style.height=""}}},r=(n(264),n(4)),i=Object(r.a)(o,(function(){return(0,this._self._c)("transition",{attrs:{name:"dropdown"},on:{enter:this.setHeight,"after-enter":this.unsetHeight,"before-leave":this.setHeight}},[this._t("default")],2)}),[],!1,null,null,null);t.default=i.exports},function(e,t,n){var o,r; /* NProgress, (c) 2013, 2014 Rico Sta. Cruz - http://ricostacruz.com/nprogress * @license MIT */void 0===(r="function"==typeof(o=function(){var e,t,n={version:"0.2.0"},o=n.settings={minimum:.08,easing:"ease",positionUsing:"",speed:200,trickle:!0,trickleRate:.02,trickleSpeed:800,showSpinner:!0,barSelector:'[role="bar"]',spinnerSelector:'[role="spinner"]',parent:"body",template:'
'};function r(e,t,n){return en?n:e}function i(e){return 100*(-1+e)}n.configure=function(e){var t,n;for(t in e)void 0!==(n=e[t])&&e.hasOwnProperty(t)&&(o[t]=n);return this},n.status=null,n.set=function(e){var t=n.isStarted();e=r(e,o.minimum,1),n.status=1===e?null:e;var c=n.render(!t),u=c.querySelector(o.barSelector),l=o.speed,d=o.easing;return c.offsetWidth,a((function(t){""===o.positionUsing&&(o.positionUsing=n.getPositioningCSS()),s(u,function(e,t,n){var r;return(r="translate3d"===o.positionUsing?{transform:"translate3d("+i(e)+"%,0,0)"}:"translate"===o.positionUsing?{transform:"translate("+i(e)+"%,0)"}:{"margin-left":i(e)+"%"}).transition="all "+t+"ms "+n,r}(e,l,d)),1===e?(s(c,{transition:"none",opacity:1}),c.offsetWidth,setTimeout((function(){s(c,{transition:"all "+l+"ms linear",opacity:0}),setTimeout((function(){n.remove(),t()}),l)}),l)):setTimeout(t,l)})),this},n.isStarted=function(){return"number"==typeof n.status},n.start=function(){n.status||n.set(0);var e=function(){setTimeout((function(){n.status&&(n.trickle(),e())}),o.trickleSpeed)};return o.trickle&&e(),this},n.done=function(e){return e||n.status?n.inc(.3+.5*Math.random()).set(1):this},n.inc=function(e){var t=n.status;return t?("number"!=typeof e&&(e=(1-t)*r(Math.random()*t,.1,.95)),t=r(t+e,0,.994),n.set(t)):n.start()},n.trickle=function(){return n.inc(Math.random()*o.trickleRate)},e=0,t=0,n.promise=function(o){return o&&"resolved"!==o.state()?(0===t&&n.start(),e++,t++,o.always((function(){0==--t?(e=0,n.done()):n.set((e-t)/e)})),this):this},n.render=function(e){if(n.isRendered())return document.getElementById("nprogress");u(document.documentElement,"nprogress-busy");var t=document.createElement("div");t.id="nprogress",t.innerHTML=o.template;var r,a=t.querySelector(o.barSelector),c=e?"-100":i(n.status||0),l=document.querySelector(o.parent);return s(a,{transition:"all 0 linear",transform:"translate3d("+c+"%,0,0)"}),o.showSpinner||(r=t.querySelector(o.spinnerSelector))&&h(r),l!=document.body&&u(l,"nprogress-custom-parent"),l.appendChild(t),t},n.remove=function(){l(document.documentElement,"nprogress-busy"),l(document.querySelector(o.parent),"nprogress-custom-parent");var e=document.getElementById("nprogress");e&&h(e)},n.isRendered=function(){return!!document.getElementById("nprogress")},n.getPositioningCSS=function(){var e=document.body.style,t="WebkitTransform"in e?"Webkit":"MozTransform"in e?"Moz":"msTransform"in e?"ms":"OTransform"in e?"O":"";return t+"Perspective"in e?"translate3d":t+"Transform"in e?"translate":"margin"};var a=function(){var e=[];function t(){var n=e.shift();n&&n(t)}return function(n){e.push(n),1==e.length&&t()}}(),s=function(){var e=["Webkit","O","Moz","ms"],t={};function n(n){return n=n.replace(/^-ms-/,"ms-").replace(/-([\da-z])/gi,(function(e,t){return t.toUpperCase()})),t[n]||(t[n]=function(t){var n=document.body.style;if(t in n)return t;for(var o,r=e.length,i=t.charAt(0).toUpperCase()+t.slice(1);r--;)if((o=e[r]+i)in n)return o;return t}(n))}function o(e,t,o){t=n(t),e.style[t]=o}return function(e,t){var n,r,i=arguments;if(2==i.length)for(n in t)void 0!==(r=t[n])&&t.hasOwnProperty(n)&&o(e,n,r);else o(e,i[1],i[2])}}();function c(e,t){return("string"==typeof e?e:d(e)).indexOf(" "+t+" ")>=0}function u(e,t){var n=d(e),o=n+t;c(n,t)||(e.className=o.substring(1))}function l(e,t){var n,o=d(e);c(e,t)&&(n=o.replace(" "+t+" "," "),e.className=n.substring(1,n.length-1))}function d(e){return(" "+(e.className||"")+" ").replace(/\s+/gi," ")}function h(e){e&&e.parentNode&&e.parentNode.removeChild(e)}return 
n})?o.call(t,n,t,e):o)||(e.exports=r)},function(e,t){e.exports=function(e){var t=typeof e;return null!=e&&("object"==t||"function"==t)}},function(e,t,n){"use strict";var o=n(35),r=n(45),i=n(47),a=n(175),s=n(177);o({target:"Array",proto:!0,arity:1,forced:n(9)((function(){return 4294967297!==[].push.call({length:4294967296},1)}))||!function(){try{Object.defineProperty([],"length",{writable:!1}).push()}catch(e){return e instanceof TypeError}}()},{push:function(e){var t=r(this),n=i(t),o=arguments.length;s(n+o);for(var c=0;c-1&&e%1==0&&e<=9007199254740991}},function(e,t,n){var o=n(13),r=n(54),i=/\.|\[(?:[^[\]]*|(["'])(?:(?!\1)[^\\]|\\.)*?\1)\]/,a=/^\w*$/;e.exports=function(e,t){if(o(e))return!1;var n=typeof e;return!("number"!=n&&"symbol"!=n&&"boolean"!=n&&null!=e&&!r(e))||(a.test(e)||!i.test(e)||null!=t&&e in Object(t))}},function(e,t,n){var o=n(20),r=n(17);e.exports=function(e){return"symbol"==typeof e||r(e)&&"[object Symbol]"==o(e)}},function(e,t){e.exports=function(e){var t=null==e?0:e.length;return t?e[t-1]:void 0}},function(e,t,n){"use strict";n.r(t);var o=n(115),r=n(118),i=n(5);function a(e,t){return"group"===t.type&&t.children.some(t=>"group"===t.type?a(e,t):"page"===t.type&&Object(i.d)(e,t.path))}var s={name:"SidebarLinks",components:{SidebarGroup:o.default,SidebarLink:r.default},props:["items","depth","sidebarDepth","initialOpenGroupIndex"],data(){return{openGroupIndex:this.initialOpenGroupIndex||0}},watch:{$route(){this.refreshIndex()}},created(){this.refreshIndex()},methods:{refreshIndex(){const e=function(e,t){for(let n=0;n-1&&(this.openGroupIndex=e)},toggleGroup(e){this.openGroupIndex=e===this.openGroupIndex?-1:e},isActive(e){return Object(i.d)(this.$route,e.regularPath)}}},c=n(4),u=Object(c.a)(s,(function(){var e=this,t=e._self._c;return e.items.length?t("ul",{staticClass:"sidebar-links"},e._l(e.items,(function(n,o){return t("li",{key:o},["group"===n.type?t("SidebarGroup",{attrs:{item:n,open:o===e.openGroupIndex,collapsable:n.collapsable||n.collapsible,depth:e.depth},on:{toggle:function(t){return e.toggleGroup(o)}}}):t("SidebarLink",{attrs:{"sidebar-depth":e.sidebarDepth,item:n}})],1)})),0):e._e()}),[],!1,null,null,null);t.default=u.exports},function(e,t,n){var o=n(13),r=n(53),i=n(234),a=n(237);e.exports=function(e,t){return o(e)?e:r(e,t)?[e]:i(a(e))}},function(e,t){e.exports=function(e,t){for(var n=-1,o=t.length,r=e.length;++n-1&&e%1==0&&e{"%%"!==e&&(o++,"%c"===e&&(r=o))}),t.splice(r,0,n)},t.save=function(e){try{e?t.storage.setItem("debug",e):t.storage.removeItem("debug")}catch(e){}},t.load=function(){let e;try{e=t.storage.getItem("debug")}catch(e){}!e&&"undefined"!=typeof process&&"env"in process&&(e=process.env.DEBUG);return e},t.useColors=function(){if("undefined"!=typeof window&&window.process&&("renderer"===window.process.type||window.process.__nwjs))return!0;if("undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/(edge|trident)\/(\d+)/))return!1;return"undefined"!=typeof document&&document.documentElement&&document.documentElement.style&&document.documentElement.style.WebkitAppearance||"undefined"!=typeof window&&window.console&&(window.console.firebug||window.console.exception&&window.console.table)||"undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/firefox\/(\d+)/)&&parseInt(RegExp.$1,10)>=31||"undefined"!=typeof navigator&&navigator.userAgent&&navigator.userAgent.toLowerCase().match(/applewebkit\/(\d+)/)},t.storage=function(){try{return localStorage}catch(e){}}(),t.destroy=(()=>{let 
e=!1;return()=>{e||(e=!0,console.warn("Instance method `debug.destroy()` is deprecated and no longer does anything. It will be removed in the next major version of `debug`."))}})(),t.colors=["#0000CC","#0000FF","#0033CC","#0033FF","#0066CC","#0066FF","#0099CC","#0099FF","#00CC00","#00CC33","#00CC66","#00CC99","#00CCCC","#00CCFF","#3300CC","#3300FF","#3333CC","#3333FF","#3366CC","#3366FF","#3399CC","#3399FF","#33CC00","#33CC33","#33CC66","#33CC99","#33CCCC","#33CCFF","#6600CC","#6600FF","#6633CC","#6633FF","#66CC00","#66CC33","#9900CC","#9900FF","#9933CC","#9933FF","#99CC00","#99CC33","#CC0000","#CC0033","#CC0066","#CC0099","#CC00CC","#CC00FF","#CC3300","#CC3333","#CC3366","#CC3399","#CC33CC","#CC33FF","#CC6600","#CC6633","#CC9900","#CC9933","#CCCC00","#CCCC33","#FF0000","#FF0033","#FF0066","#FF0099","#FF00CC","#FF00FF","#FF3300","#FF3333","#FF3366","#FF3399","#FF33CC","#FF33FF","#FF6600","#FF6633","#FF9900","#FF9933","#FFCC00","#FFCC33"],t.log=console.debug||console.log||(()=>{}),e.exports=n(275)(t);const{formatters:o}=e.exports;o.j=function(e){try{return JSON.stringify(e)}catch(e){return"[UnexpectedJSONParseError]: "+e.message}}},function(e,t,n){"use strict";var o=n(35),r=n(156).left,i=n(157),a=n(74);o({target:"Array",proto:!0,forced:!n(158)&&a>79&&a<83||!i("reduce")},{reduce:function(e){var t=arguments.length;return r(this,e,t,t>1?arguments[1]:void 0)}})},function(e,t,n){"use strict";var o=n(11),r=n(36),i=n(131),a=n(38),s=n(39),c=n(69),u=n(15),l=n(78),d=Object.getOwnPropertyDescriptor;t.f=o?d:function(e,t){if(e=s(e),t=c(t),l)try{return d(e,t)}catch(e){}if(u(e,t))return a(!r(i.f,e,t),e[t])}},function(e,t,n){"use strict";var o=n(10),r=n(9),i=n(22),a=Object,s=o("".split);e.exports=r((function(){return!a("z").propertyIsEnumerable(0)}))?function(e){return"String"===i(e)?s(e,""):a(e)}:a},function(e,t,n){"use strict";var o=n(68),r=TypeError;e.exports=function(e){if(o(e))throw new r("Can't call method on "+e);return e}},function(e,t,n){"use strict";e.exports=function(e){return null==e}},function(e,t,n){"use strict";var o=n(132),r=n(70);e.exports=function(e){var t=o(e,"string");return r(t)?t:t+""}},function(e,t,n){"use strict";var o=n(40),r=n(8),i=n(71),a=n(72),s=Object;e.exports=a?function(e){return"symbol"==typeof e}:function(e){var t=o("Symbol");return r(t)&&i(t.prototype,s(e))}},function(e,t,n){"use strict";var o=n(10);e.exports=o({}.isPrototypeOf)},function(e,t,n){"use strict";var o=n(73);e.exports=o&&!Symbol.sham&&"symbol"==typeof Symbol.iterator},function(e,t,n){"use strict";var o=n(74),r=n(9),i=n(7).String;e.exports=!!Object.getOwnPropertySymbols&&!r((function(){var e=Symbol("symbol detection");return!i(e)||!(Object(e)instanceof Symbol)||!Symbol.sham&&o&&o<41}))},function(e,t,n){"use strict";var o,r,i=n(7),a=n(133),s=i.process,c=i.Deno,u=s&&s.versions||c&&c.version,l=u&&u.v8;l&&(r=(o=l.split("."))[0]>0&&o[0]<4?1:+(o[0]+o[1])),!r&&a&&(!(o=a.match(/Edge\/(\d+)/))||o[1]>=74)&&(o=a.match(/Chrome\/(\d+)/))&&(r=+o[1]),e.exports=r},function(e,t,n){"use strict";var o=n(43);e.exports=function(e,t){return o[e]||(o[e]=t||{})}},function(e,t,n){"use strict";e.exports=!1},function(e,t,n){"use strict";var o=n(10),r=0,i=Math.random(),a=o(1..toString);e.exports=function(e){return"Symbol("+(void 0===e?"":e)+")_"+a(++r+i,36)}},function(e,t,n){"use strict";var o=n(11),r=n(9),i=n(137);e.exports=!o&&!r((function(){return 7!==Object.defineProperty(i("div"),"a",{get:function(){return 7}}).a}))},function(e,t,n){"use strict";e.exports={}},function(e,t,n){"use strict";var 
o=n(15),r=n(146),i=n(65),a=n(23);e.exports=function(e,t,n){for(var s=r(t),c=a.f,u=i.f,l=0;ll))return!1;var h=c.get(e),p=c.get(t);if(h&&p)return h==t&&p==e;var f=-1,m=!0,g=2&n?new o:void 0;for(c.set(e,t),c.set(t,e);++f=0&&(t=e.slice(o),e=e.slice(0,o));var r=e.indexOf("?");return r>=0&&(n=e.slice(r+1),e=e.slice(0,r)),{path:e,query:n,hash:t}}(i.path||""),h=t&&t.path||"/",p=u.path?x(u.path,h,n||i.append):h,f=function(e,t,n){void 0===t&&(t={});var o,r=n||d;try{o=r(e||"")}catch(e){o={}}for(var i in t){var a=t[i];o[i]=Array.isArray(a)?a.map(l):l(a)}return o}(u.query,i.query,o&&o.options.parseQuery),m=i.hash||u.hash;return m&&"#"!==m.charAt(0)&&(m="#"+m),{_normalized:!0,path:p,query:f,hash:m}}var q,V=function(){},G={name:"RouterLink",props:{to:{type:[String,Object],required:!0},tag:{type:String,default:"a"},custom:Boolean,exact:Boolean,exactPath:Boolean,append:Boolean,replace:Boolean,activeClass:String,exactActiveClass:String,ariaCurrentValue:{type:String,default:"page"},event:{type:[String,Array],default:"click"}},render:function(e){var t=this,n=this.$router,o=this.$route,i=n.resolve(this.to,o,this.append),a=i.location,s=i.route,c=i.href,u={},l=n.options.linkActiveClass,d=n.options.linkExactActiveClass,h=null==l?"router-link-active":l,m=null==d?"router-link-exact-active":d,g=null==this.activeClass?h:this.activeClass,v=null==this.exactActiveClass?m:this.exactActiveClass,y=s.redirectedFrom?f(null,B(s.redirectedFrom),null,n):s;u[v]=b(o,y,this.exactPath),u[g]=this.exact||this.exactPath?u[v]:function(e,t){return 0===e.path.replace(p,"/").indexOf(t.path.replace(p,"/"))&&(!t.hash||e.hash===t.hash)&&function(e,t){for(var n in t)if(!(n in e))return!1;return!0}(e.query,t.query)}(o,y);var w=u[v]?this.ariaCurrentValue:null,k=function(e){Y(e)&&(t.replace?n.replace(a,V):n.push(a,V))},C={click:Y};Array.isArray(this.event)?this.event.forEach((function(e){C[e]=k})):C[this.event]=k;var _={class:u},x=!this.$scopedSlots.$hasNormal&&this.$scopedSlots.default&&this.$scopedSlots.default({href:c,route:s,navigate:k,isActive:u[g],isExactActive:u[v]});if(x){if(1===x.length)return x[0];if(x.length>1||!x.length)return 0===x.length?e():e("span",{},x)}if("a"===this.tag)_.on=C,_.attrs={href:c,"aria-current":w};else{var S=function e(t){var n;if(t)for(var o=0;o-1&&(s.params[h]=n.params[h]);return s.path=H(l.path,s.params),c(l,s,a)}if(s.path){s.params={};for(var p=0;p-1}function Se(e,t){return xe(e)&&e._isRouter&&(null==t||e.type===t)}function Oe(e,t,n){var o=function(r){r>=e.length?n():e[r]?t(e[r],(function(){o(r+1)})):o(r+1)};o(0)}function je(e){return function(t,n,o){var r=!1,i=0,a=null;Pe(e,(function(e,t,n,s){if("function"==typeof e&&void 0===e.cid){r=!0,i++;var c,u=Ae((function(t){var r;((r=t).__esModule||Te&&"Module"===r[Symbol.toStringTag])&&(t=t.default),e.resolved="function"==typeof t?t:q.extend(t),n.components[s]=t,--i<=0&&o()})),l=Ae((function(e){var t="Failed to resolve async component "+s+": "+e;a||(a=xe(e)?e:new Error(t),o(a))}));try{c=e(u,l)}catch(e){l(e)}if(c)if("function"==typeof c.then)c.then(u,l);else{var d=c.component;d&&"function"==typeof d.then&&d.then(u,l)}}})),r||o()}}function Pe(e,t){return $e(e.map((function(e){return Object.keys(e.components).map((function(n){return t(e.components[n],e.instances[n],e,n)}))})))}function $e(e){return Array.prototype.concat.apply([],e)}var Te="function"==typeof Symbol&&"symbol"==typeof Symbol.toStringTag;function Ae(e){var t=!1;return function(){for(var n=[],o=arguments.length;o--;)n[o]=arguments[o];if(!t)return t=!0,e.apply(this,n)}}var 
Ee=function(e,t){this.router=e,this.base=function(e){if(!e)if(Z){var t=document.querySelector("base");e=(e=t&&t.getAttribute("href")||"/").replace(/^https?:\/\/[^\/]+/,"")}else e="/";"/"!==e.charAt(0)&&(e="/"+e);return e.replace(/\/$/,"")}(t),this.current=g,this.pending=null,this.ready=!1,this.readyCbs=[],this.readyErrorCbs=[],this.errorCbs=[],this.listeners=[]};function Ie(e,t,n,o){var r=Pe(e,(function(e,o,r,i){var a=function(e,t){"function"!=typeof e&&(e=q.extend(e));return e.options[t]}(e,t);if(a)return Array.isArray(a)?a.map((function(e){return n(e,o,r,i)})):n(a,o,r,i)}));return $e(o?r.reverse():r)}function Le(e,t){if(t)return function(){return e.apply(t,arguments)}}Ee.prototype.listen=function(e){this.cb=e},Ee.prototype.onReady=function(e,t){this.ready?e():(this.readyCbs.push(e),t&&this.readyErrorCbs.push(t))},Ee.prototype.onError=function(e){this.errorCbs.push(e)},Ee.prototype.transitionTo=function(e,t,n){var o,r=this;try{o=this.router.match(e,this.current)}catch(e){throw this.errorCbs.forEach((function(t){t(e)})),e}var i=this.current;this.confirmTransition(o,(function(){r.updateRoute(o),t&&t(o),r.ensureURL(),r.router.afterHooks.forEach((function(e){e&&e(o,i)})),r.ready||(r.ready=!0,r.readyCbs.forEach((function(e){e(o)})))}),(function(e){n&&n(e),e&&!r.ready&&(Se(e,be.redirected)&&i===g||(r.ready=!0,r.readyErrorCbs.forEach((function(t){t(e)}))))}))},Ee.prototype.confirmTransition=function(e,t,n){var o=this,r=this.current;this.pending=e;var i,a,s=function(e){!Se(e)&&xe(e)&&(o.errorCbs.length?o.errorCbs.forEach((function(t){t(e)})):console.error(e)),n&&n(e)},c=e.matched.length-1,u=r.matched.length-1;if(b(e,r)&&c===u&&e.matched[c]===r.matched[u])return this.ensureURL(),e.hash&&se(this.router,r,e,!1),s(((a=Ce(i=r,e,be.duplicated,'Avoided redundant navigation to current location: "'+i.fullPath+'".')).name="NavigationDuplicated",a));var l=function(e,t){var n,o=Math.max(e.length,t.length);for(n=0;n0)){var t=this.router,n=t.options.scrollBehavior,o=ge&&n;o&&this.listeners.push(ae());var r=function(){var n=e.current,r=De(e.base);e.current===g&&r===e._startLocation||e.transitionTo(r,(function(e){o&&se(t,e,n,!0)}))};window.addEventListener("popstate",r),this.listeners.push((function(){window.removeEventListener("popstate",r)}))}},t.prototype.go=function(e){window.history.go(e)},t.prototype.push=function(e,t,n){var o=this,r=this.current;this.transitionTo(e,(function(e){ve(S(o.base+e.fullPath)),se(o.router,e,r,!1),t&&t(e)}),n)},t.prototype.replace=function(e,t,n){var o=this,r=this.current;this.transitionTo(e,(function(e){ye(S(o.base+e.fullPath)),se(o.router,e,r,!1),t&&t(e)}),n)},t.prototype.ensureURL=function(e){if(De(this.base)!==this.current.fullPath){var t=S(this.base+this.current.fullPath);e?ve(t):ye(t)}},t.prototype.getCurrentLocation=function(){return De(this.base)},t}(Ee);function De(e){var t=window.location.pathname,n=t.toLowerCase(),o=e.toLowerCase();return!e||n!==o&&0!==n.indexOf(S(o+"/"))||(t=t.slice(e.length)),(t||"/")+window.location.search+window.location.hash}var Ne=function(e){function t(t,n,o){e.call(this,t,n),o&&function(e){var t=De(e);if(!/^\/#/.test(t))return window.location.replace(S(e+"/#"+t)),!0}(this.base)||Fe()}return e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t,t.prototype.setupListeners=function(){var e=this;if(!(this.listeners.length>0)){var t=this.router.options.scrollBehavior,n=ge&&t;n&&this.listeners.push(ae());var o=function(){var 
t=e.current;Fe()&&e.transitionTo(Re(),(function(o){n&&se(e.router,o,t,!0),ge||ze(o.fullPath)}))},r=ge?"popstate":"hashchange";window.addEventListener(r,o),this.listeners.push((function(){window.removeEventListener(r,o)}))}},t.prototype.push=function(e,t,n){var o=this,r=this.current;this.transitionTo(e,(function(e){Ue(e.fullPath),se(o.router,e,r,!1),t&&t(e)}),n)},t.prototype.replace=function(e,t,n){var o=this,r=this.current;this.transitionTo(e,(function(e){ze(e.fullPath),se(o.router,e,r,!1),t&&t(e)}),n)},t.prototype.go=function(e){window.history.go(e)},t.prototype.ensureURL=function(e){var t=this.current.fullPath;Re()!==t&&(e?Ue(t):ze(t))},t.prototype.getCurrentLocation=function(){return Re()},t}(Ee);function Fe(){var e=Re();return"/"===e.charAt(0)||(ze("/"+e),!1)}function Re(){var e=window.location.href,t=e.indexOf("#");return t<0?"":e=e.slice(t+1)}function We(e){var t=window.location.href,n=t.indexOf("#");return(n>=0?t.slice(0,n):t)+"#"+e}function Ue(e){ge?ve(We(e)):window.location.hash=e}function ze(e){ge?ye(We(e)):window.location.replace(We(e))}var He=function(e){function t(t,n){e.call(this,t,n),this.stack=[],this.index=-1}return e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t,t.prototype.push=function(e,t,n){var o=this;this.transitionTo(e,(function(e){o.stack=o.stack.slice(0,o.index+1).concat(e),o.index++,t&&t(e)}),n)},t.prototype.replace=function(e,t,n){var o=this;this.transitionTo(e,(function(e){o.stack=o.stack.slice(0,o.index).concat(e),t&&t(e)}),n)},t.prototype.go=function(e){var t=this,n=this.index+e;if(!(n<0||n>=this.stack.length)){var o=this.stack[n];this.confirmTransition(o,(function(){var e=t.current;t.index=n,t.updateRoute(o),t.router.afterHooks.forEach((function(t){t&&t(o,e)}))}),(function(e){Se(e,be.duplicated)&&(t.index=n)}))}},t.prototype.getCurrentLocation=function(){var e=this.stack[this.stack.length-1];return e?e.fullPath:"/"},t.prototype.ensureURL=function(){},t}(Ee),Be=function(e){void 0===e&&(e={}),this.app=null,this.apps=[],this.options=e,this.beforeHooks=[],this.resolveHooks=[],this.afterHooks=[],this.matcher=Q(e.routes||[],this);var t=e.mode||"hash";switch(this.fallback="history"===t&&!ge&&!1!==e.fallback,this.fallback&&(t="hash"),Z||(t="abstract"),this.mode=t,t){case"history":this.history=new Me(this,e.base);break;case"hash":this.history=new Ne(this,e.base,this.fallback);break;case"abstract":this.history=new He(this,e.base);break;default:0}},qe={currentRoute:{configurable:!0}};Be.prototype.match=function(e,t,n){return this.matcher.match(e,t,n)},qe.currentRoute.get=function(){return this.history&&this.history.current},Be.prototype.init=function(e){var t=this;if(this.apps.push(e),e.$once("hook:destroyed",(function(){var n=t.apps.indexOf(e);n>-1&&t.apps.splice(n,1),t.app===e&&(t.app=t.apps[0]||null),t.app||t.history.teardown()})),!this.app){this.app=e;var n=this.history;if(n instanceof Me||n instanceof Ne){var o=function(e){n.setupListeners(),function(e){var o=n.current,r=t.options.scrollBehavior;ge&&r&&"fullPath"in e&&se(t,e,o,!1)}(e)};n.transitionTo(n.getCurrentLocation(),o,o)}n.listen((function(e){t.apps.forEach((function(t){t._route=e}))}))}},Be.prototype.beforeEach=function(e){return Ge(this.beforeHooks,e)},Be.prototype.beforeResolve=function(e){return Ge(this.resolveHooks,e)},Be.prototype.afterEach=function(e){return Ge(this.afterHooks,e)},Be.prototype.onReady=function(e,t){this.history.onReady(e,t)},Be.prototype.onError=function(e){this.history.onError(e)},Be.prototype.push=function(e,t,n){var 
o=this;if(!t&&!n&&"undefined"!=typeof Promise)return new Promise((function(t,n){o.history.push(e,t,n)}));this.history.push(e,t,n)},Be.prototype.replace=function(e,t,n){var o=this;if(!t&&!n&&"undefined"!=typeof Promise)return new Promise((function(t,n){o.history.replace(e,t,n)}));this.history.replace(e,t,n)},Be.prototype.go=function(e){this.history.go(e)},Be.prototype.back=function(){this.go(-1)},Be.prototype.forward=function(){this.go(1)},Be.prototype.getMatchedComponents=function(e){var t=e?e.matched?e:this.resolve(e).route:this.currentRoute;return t?[].concat.apply([],t.matched.map((function(e){return Object.keys(e.components).map((function(t){return e.components[t]}))}))):[]},Be.prototype.resolve=function(e,t,n){var o=B(e,t=t||this.history.current,n,this),r=this.match(o,t),i=r.redirectedFrom||r.fullPath;return{location:o,route:r,href:function(e,t,n){var o="hash"===n?"#"+t:t;return e?S(e+"/"+o):o}(this.history.base,i,this.mode),normalizedTo:o,resolved:r}},Be.prototype.getRoutes=function(){return this.matcher.getRoutes()},Be.prototype.addRoute=function(e,t){this.matcher.addRoute(e,t),this.history.current!==g&&this.history.transitionTo(this.history.getCurrentLocation())},Be.prototype.addRoutes=function(e){this.matcher.addRoutes(e),this.history.current!==g&&this.history.transitionTo(this.history.getCurrentLocation())},Object.defineProperties(Be.prototype,qe);var Ve=Be;function Ge(e,t){return e.push(t),function(){var n=e.indexOf(t);n>-1&&e.splice(n,1)}}Be.install=function e(t){if(!e.installed||q!==t){e.installed=!0,q=t;var n=function(e){return void 0!==e},o=function(e,t){var o=e.$options._parentVnode;n(o)&&n(o=o.data)&&n(o=o.registerRouteInstance)&&o(e,t)};t.mixin({beforeCreate:function(){n(this.$options.router)?(this._routerRoot=this,this._router=this.$options.router,this._router.init(this),t.util.defineReactive(this,"_route",this._router.history.current)):this._routerRoot=this.$parent&&this.$parent._routerRoot||this,o(this,this)},destroyed:function(){o(this)}}),Object.defineProperty(t.prototype,"$router",{get:function(){return this._routerRoot._router}}),Object.defineProperty(t.prototype,"$route",{get:function(){return this._routerRoot._route}}),t.component("RouterView",C),t.component("RouterLink",G);var r=t.config.optionMergeStrategies;r.beforeRouteEnter=r.beforeRouteLeave=r.beforeRouteUpdate=r.created}},Be.version="3.6.5",Be.isNavigationFailure=Se,Be.NavigationFailureType=be,Be.START_LOCATION=g,Z&&window.Vue&&window.Vue.use(Be);n(64);var Ye=n(1),Ze=n(110),Je=n.n(Ze),Ke=n(111),Qe=n.n(Ke),Xe={created(){if(this.siteMeta=this.$site.headTags.filter(([e])=>"meta"===e).map(([e,t])=>t),this.$ssrContext){const t=this.getMergedMetaTags();this.$ssrContext.title=this.$title,this.$ssrContext.lang=this.$lang,this.$ssrContext.pageMeta=(e=t)?e.map(e=>{let t="{t+=` ${n}="${Qe()(e[n])}"`}),t+">"}).join("\n "):"",this.$ssrContext.canonicalLink=tt(this.$canonicalUrl)}var e},mounted(){this.currentMetaTags=[...document.querySelectorAll("meta")],this.updateMeta(),this.updateCanonicalLink()},methods:{updateMeta(){document.title=this.$title,document.documentElement.lang=this.$lang;const e=this.getMergedMetaTags();this.currentMetaTags=nt(e,this.currentMetaTags)},getMergedMetaTags(){const e=this.$page.frontmatter.meta||[];return 
Je()([{name:"description",content:this.$description}],e,this.siteMeta,ot)},updateCanonicalLink(){et(),this.$canonicalUrl&&document.head.insertAdjacentHTML("beforeend",tt(this.$canonicalUrl))}},watch:{$page(){this.updateMeta(),this.updateCanonicalLink()}},beforeDestroy(){nt(null,this.currentMetaTags),et()}};function et(){const e=document.querySelector("link[rel='canonical']");e&&e.remove()}function tt(e=""){return e?``:""}function nt(e,t){if(t&&[...t].filter(e=>e.parentNode===document.head).forEach(e=>document.head.removeChild(e)),e)return e.map(e=>{const t=document.createElement("meta");return Object.keys(e).forEach(n=>{t.setAttribute(n,e[n])}),document.head.appendChild(t),t})}function ot(e){for(const t of["name","property","itemprop"])if(e.hasOwnProperty(t))return e[t]+t;return JSON.stringify(e)}var rt=n(31),it=n.n(rt),at={mounted(){it.a.configure({showSpinner:!1}),this.$router.beforeEach((e,t,n)=>{e.path===t.path||o.a.component(e.name)||it.a.start(),n()}),this.$router.afterEach(()=>{it.a.done(),this.isSidebarOpen=!1})}},st=(n(262),Object.assign||function(e){for(var t=1;t1&&void 0!==arguments[1]?arguments[1]:{},o=window.Promise||function(e){function t(){}e(t,t)},r=function(e){var t=e.target;t!==S?-1!==b.indexOf(t)&&m({target:t}):f()},i=function(){if(!k&&x.original){var e=window.pageYOffset||document.documentElement.scrollTop||document.body.scrollTop||0;Math.abs(C-e)>_.scrollOffset&&setTimeout(f,150)}},a=function(e){var t=e.key||e.keyCode;"Escape"!==t&&"Esc"!==t&&27!==t||f()},s=function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e;if(e.background&&(S.style.background=e.background),e.container&&e.container instanceof Object&&(t.container=st({},_.container,e.container)),e.template){var n=ut(e.template)?e.template:document.querySelector(e.template);t.template=n}return _=st({},_,t),b.forEach((function(e){e.dispatchEvent(ft("medium-zoom:update",{detail:{zoom:O}}))})),O},c=function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{};return e(st({},_,t))},u=function(){for(var e=arguments.length,t=Array(e),n=0;n0?t.reduce((function(e,t){return[].concat(e,dt(t))}),[]):b;return o.forEach((function(e){e.classList.remove("medium-zoom-image"),e.dispatchEvent(ft("medium-zoom:detach",{detail:{zoom:O}}))})),b=b.filter((function(e){return-1===o.indexOf(e)})),O},d=function(e,t){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:{};return b.forEach((function(o){o.addEventListener("medium-zoom:"+e,t,n)})),w.push({type:"medium-zoom:"+e,listener:t,options:n}),O},h=function(e,t){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:{};return b.forEach((function(o){o.removeEventListener("medium-zoom:"+e,t,n)})),w=w.filter((function(n){return!(n.type==="medium-zoom:"+e&&n.listener.toString()===t.toString())})),O},p=function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e.target,n=function(){var e={width:document.documentElement.clientWidth,height:document.documentElement.clientHeight,left:0,top:0,right:0,bottom:0},t=void 0,n=void 0;if(_.container)if(_.container instanceof Object)t=(e=st({},e,_.container)).width-e.left-e.right-2*_.margin,n=e.height-e.top-e.bottom-2*_.margin;else{var o=(ut(_.container)?_.container:document.querySelector(_.container)).getBoundingClientRect(),r=o.width,i=o.height,a=o.left,s=o.top;e=st({},e,{width:r,height:i,left:a,top:s})}t=t||e.width-2*_.margin,n=n||e.height-2*_.margin;var 
c=x.zoomedHd||x.original,u=lt(c)?t:c.naturalWidth||t,l=lt(c)?n:c.naturalHeight||n,d=c.getBoundingClientRect(),h=d.top,p=d.left,f=d.width,m=d.height,g=Math.min(Math.max(f,u),t)/f,v=Math.min(Math.max(m,l),n)/m,y=Math.min(g,v),b="scale("+y+") translate3d("+((t-f)/2-p+_.margin+e.left)/y+"px, "+((n-m)/2-h+_.margin+e.top)/y+"px, 0)";x.zoomed.style.transform=b,x.zoomedHd&&(x.zoomedHd.style.transform=b)};return new o((function(e){if(t&&-1===b.indexOf(t))e(O);else{if(x.zoomed)e(O);else{if(t)x.original=t;else{if(!(b.length>0))return void e(O);var o=b;x.original=o[0]}if(x.original.dispatchEvent(ft("medium-zoom:open",{detail:{zoom:O}})),C=window.pageYOffset||document.documentElement.scrollTop||document.body.scrollTop||0,k=!0,x.zoomed=pt(x.original),document.body.appendChild(S),_.template){var r=ut(_.template)?_.template:document.querySelector(_.template);x.template=document.createElement("div"),x.template.appendChild(r.content.cloneNode(!0)),document.body.appendChild(x.template)}if(x.original.parentElement&&"PICTURE"===x.original.parentElement.tagName&&x.original.currentSrc&&(x.zoomed.src=x.original.currentSrc),document.body.appendChild(x.zoomed),window.requestAnimationFrame((function(){document.body.classList.add("medium-zoom--opened")})),x.original.classList.add("medium-zoom-image--hidden"),x.zoomed.classList.add("medium-zoom-image--opened"),x.zoomed.addEventListener("click",f),x.zoomed.addEventListener("transitionend",(function t(){k=!1,x.zoomed.removeEventListener("transitionend",t),x.original.dispatchEvent(ft("medium-zoom:opened",{detail:{zoom:O}})),e(O)})),x.original.getAttribute("data-zoom-src")){x.zoomedHd=x.zoomed.cloneNode(),x.zoomedHd.removeAttribute("srcset"),x.zoomedHd.removeAttribute("sizes"),x.zoomedHd.removeAttribute("loading"),x.zoomedHd.src=x.zoomed.getAttribute("data-zoom-src"),x.zoomedHd.onerror=function(){clearInterval(i),console.warn("Unable to reach the zoom image target "+x.zoomedHd.src),x.zoomedHd=null,n()};var i=setInterval((function(){x.zoomedHd.complete&&(clearInterval(i),x.zoomedHd.classList.add("medium-zoom-image--opened"),x.zoomedHd.addEventListener("click",f),document.body.appendChild(x.zoomedHd),n())}),10)}else if(x.original.hasAttribute("srcset")){x.zoomedHd=x.zoomed.cloneNode(),x.zoomedHd.removeAttribute("sizes"),x.zoomedHd.removeAttribute("loading");var a=x.zoomedHd.addEventListener("load",(function(){x.zoomedHd.removeEventListener("load",a),x.zoomedHd.classList.add("medium-zoom-image--opened"),x.zoomedHd.addEventListener("click",f),document.body.appendChild(x.zoomedHd),n()}))}else n()}}}))},f=function(){return new o((function(e){if(!k&&x.original){k=!0,document.body.classList.remove("medium-zoom--opened"),x.zoomed.style.transform="",x.zoomedHd&&(x.zoomedHd.style.transform=""),x.template&&(x.template.style.transition="opacity 150ms",x.template.style.opacity=0),x.original.dispatchEvent(ft("medium-zoom:close",{detail:{zoom:O}})),x.zoomed.addEventListener("transitionend",(function t(){x.original.classList.remove("medium-zoom-image--hidden"),document.body.removeChild(x.zoomed),x.zoomedHd&&document.body.removeChild(x.zoomedHd),document.body.removeChild(S),x.zoomed.classList.remove("medium-zoom-image--opened"),x.template&&document.body.removeChild(x.template),k=!1,x.zoomed.removeEventListener("transitionend",t),x.original.dispatchEvent(ft("medium-zoom:closed",{detail:{zoom:O}})),x.original=null,x.zoomed=null,x.zoomedHd=null,x.template=null,e(O)}))}else e(O)}))},m=function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e.target;return 
x.original?f():p({target:t})},g=function(){return _},v=function(){return b},y=function(){return x.original},b=[],w=[],k=!1,C=0,_=n,x={original:null,zoomed:null,zoomedHd:null,template:null};"[object Object]"===Object.prototype.toString.call(t)?_=t:(t||"string"==typeof t)&&u(t),_=st({margin:0,background:"#fff",scrollOffset:40,container:null,template:null},_);var S=ht(_.background);document.addEventListener("click",r),document.addEventListener("keyup",a),document.addEventListener("scroll",i),window.addEventListener("resize",f);var O={open:p,close:f,toggle:m,update:s,clone:c,attach:u,detach:l,on:d,off:h,getOptions:g,getImages:v,getZoomedImage:y};return O},gt=[Xe,at,{data:()=>({zoom:null}),mounted(){this.updateZoom()},updated(){this.updateZoom()},methods:{updateZoom(){setTimeout(()=>{this.zoom&&this.zoom.detach(),this.zoom=mt(".theme-default-content :not(a) > img",void 0)},1e3)}}}],vt=n(2);Object(Ye.g)(vt.default,"mixins",gt);const yt=[{name:"v-0dc9b01d",path:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-0dc9b01d").then(n)}},{path:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/index.html",redirect:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/"},{path:"/_posts/2021-09-30-long-term-commitment-and-support-for-the-cadence-project-and-its-community.html",redirect:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/"},{name:"v-dd6fb5d2",path:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-dd6fb5d2").then(n)}},{path:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/index.html",redirect:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/"},{path:"/_posts/2021-10-13-announcing-cadence-oss-office-hours-and-community-sync-up.html",redirect:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/"},{name:"v-4100b969",path:"/blog/2021/10/19/moving-to-grpc/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-4100b969").then(n)}},{path:"/blog/2021/10/19/moving-to-grpc/index.html",redirect:"/blog/2021/10/19/moving-to-grpc/"},{path:"/_posts/2021-10-19-moving-to-grpc.html",redirect:"/blog/2021/10/19/moving-to-grpc/"},{name:"v-5d913a79",path:"/blog/2022/01/31/community-spotlight-january-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-5d913a79").then(n)}},{path:"/blog/2022/01/31/community-spotlight-january-2022/index.html",redirect:"/blog/2022/01/31/community-spotlight-january-2022/"},{path:"/_posts/2022-01-31-community-spotlight-january-2022.html",redirect:"/blog/2022/01/31/community-spotlight-january-2022/"},{name:"v-5bc86237",path:"/blog/2022/02/28/community-spotlight-february-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-5bc86237").then(n)}},{path:"/blog/2022/02/28/community-spotlight-february-2022/index.html",redirect:"/blog/2022/02/28/community-spotlight-february-2022/"},{path:"/_posts/2022-02-28-community-spotlight-february-2022.html",redirect:"/blog/2022/02/28/community-spotlight-february-2022/"},{name:"v-52ad8f77",path:"/blog/2022/03/31/community-spotlight-update-march-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-52ad8f77").then(n)}},{path:"/blog/2022/03/31/community-spotlight-update-march-2022/index.htm
l",redirect:"/blog/2022/03/31/community-spotlight-update-march-2022/"},{path:"/_posts/2022-03-31-community-spotlight-update-march-2022.html",redirect:"/blog/2022/03/31/community-spotlight-update-march-2022/"},{name:"v-59a2ac57",path:"/blog/2022/04/30/community-spotlight-update-april-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-59a2ac57").then(n)}},{path:"/blog/2022/04/30/community-spotlight-update-april-2022/index.html",redirect:"/blog/2022/04/30/community-spotlight-update-april-2022/"},{path:"/_posts/2022-04-30-community-spotlight-update-april-2022.html",redirect:"/blog/2022/04/30/community-spotlight-update-april-2022/"},{name:"v-586fa1f7",path:"/blog/2022/05/31/community-spotlight-update-may-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-586fa1f7").then(n)}},{path:"/blog/2022/05/31/community-spotlight-update-may-2022/index.html",redirect:"/blog/2022/05/31/community-spotlight-update-may-2022/"},{path:"/_posts/2022-05-31-community-spotlight-update-may-2022.html",redirect:"/blog/2022/05/31/community-spotlight-update-may-2022/"},{name:"v-46e2ddd1",path:"/blog/2022/07/31/community-spotlight-update-july-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-46e2ddd1").then(n)}},{path:"/blog/2022/07/31/community-spotlight-update-july-2022/index.html",redirect:"/blog/2022/07/31/community-spotlight-update-july-2022/"},{path:"/_posts/2022-07-31-community-spotlight-update-july-2022.html",redirect:"/blog/2022/07/31/community-spotlight-update-july-2022/"},{name:"v-2a9dfbe5",path:"/blog/2022/06/30/community-spotlight-update-june-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-2a9dfbe5").then(n)}},{path:"/blog/2022/06/30/community-spotlight-update-june-2022/index.html",redirect:"/blog/2022/06/30/community-spotlight-update-june-2022/"},{path:"/_posts/2022-06-30-community-spotlight-update-june-2022.html",redirect:"/blog/2022/06/30/community-spotlight-update-june-2022/"},{name:"v-151d3dd2",path:"/blog/2022/08/31/community-spotlight-august-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-151d3dd2").then(n)}},{path:"/blog/2022/08/31/community-spotlight-august-2022/index.html",redirect:"/blog/2022/08/31/community-spotlight-august-2022/"},{path:"/_posts/2022-08-31-community-spotlight-august-2022.html",redirect:"/blog/2022/08/31/community-spotlight-august-2022/"},{name:"v-793e7375",path:"/blog/2022/10/11/community-spotlight-september-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-793e7375").then(n)}},{path:"/blog/2022/10/11/community-spotlight-september-2022/index.html",redirect:"/blog/2022/10/11/community-spotlight-september-2022/"},{path:"/_posts/2022-09-30-community-spotlight-september-2022.html",redirect:"/blog/2022/10/11/community-spotlight-september-2022/"},{name:"v-5f5271a9",path:"/blog/2022/10/31/community-spotlight-october-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-5f5271a9").then(n)}},{path:"/blog/2022/10/31/community-spotlight-october-2022/index.html",redirect:"/blog/2022/10/31/community-spotlight-october-2022/"},{path:"/_posts/2022-10-31-community-spotlight-october-2022.html",redirect:"/blog/2022/10/31/community-spotlight-october-2022/"},{name:"v-185e9f52",path:"/blog/2022/11/30/community-spotlight-november-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-185e9f52").then(n)}},{path:"/blog/2022/11/30/community-spotlight-november-2022/index.html",redirect:"/blog/2022/11/30/community-spotlight-november-20
22/"},{path:"/_posts/2022-11-30-community-spotlight-november-2022.html",redirect:"/blog/2022/11/30/community-spotlight-november-2022/"},{name:"v-6582ae57",path:"/blog/2022/12/23/community-spotlight-december-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-6582ae57").then(n)}},{path:"/blog/2022/12/23/community-spotlight-december-2022/index.html",redirect:"/blog/2022/12/23/community-spotlight-december-2022/"},{path:"/_posts/2022-12-23-community-spotlight-december-2022.html",redirect:"/blog/2022/12/23/community-spotlight-december-2022/"},{name:"v-55690947",path:"/blog/2023/02/28/community-spotlight-february/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-55690947").then(n)}},{path:"/blog/2023/02/28/community-spotlight-february/index.html",redirect:"/blog/2023/02/28/community-spotlight-february/"},{path:"/_posts/2023-02-28-community-spotlight-february.html",redirect:"/blog/2023/02/28/community-spotlight-february/"},{name:"v-2315d60a",path:"/blog/2023/06/08/survey-results/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-2315d60a").then(n)}},{path:"/blog/2023/06/08/survey-results/index.html",redirect:"/blog/2023/06/08/survey-results/"},{path:"/_posts/2023-06-08-survey-results.html",redirect:"/blog/2023/06/08/survey-results/"},{name:"v-9e2dfeb2",path:"/blog/2023/03/31/community-spotlight-march-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-9e2dfeb2").then(n)}},{path:"/blog/2023/03/31/community-spotlight-march-2023/index.html",redirect:"/blog/2023/03/31/community-spotlight-march-2023/"},{path:"/_posts/2023-03-31-community-spotlight-march-2023.html",redirect:"/blog/2023/03/31/community-spotlight-march-2023/"},{name:"v-1ea4d8b9",path:"/blog/2023/01/31/community-spotlight-january-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-1ea4d8b9").then(n)}},{path:"/blog/2023/01/31/community-spotlight-january-2023/index.html",redirect:"/blog/2023/01/31/community-spotlight-january-2023/"},{path:"/_posts/2023-01-31-community-spotlight-january-2023.html",redirect:"/blog/2023/01/31/community-spotlight-january-2023/"},{name:"v-4ff003f7",path:"/blog/2023/07/01/components-of-cadence-application-setup/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-4ff003f7").then(n)}},{path:"/blog/2023/07/01/components-of-cadence-application-setup/index.html",redirect:"/blog/2023/07/01/components-of-cadence-application-setup/"},{path:"/_posts/2023-06-28-components-of-cadence-application-setup.html",redirect:"/blog/2023/07/01/components-of-cadence-application-setup/"},{name:"v-7ca21f57",path:"/blog/2023/06/30/community-spotlight-june-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-7ca21f57").then(n)}},{path:"/blog/2023/06/30/community-spotlight-june-2023/index.html",redirect:"/blog/2023/06/30/community-spotlight-june-2023/"},{path:"/_posts/2023-06-30-community-spotlight-june-2023.html",redirect:"/blog/2023/06/30/community-spotlight-june-2023/"},{name:"v-6df5dc97",path:"/blog/2023/07/05/implement-cadence-worker-from-scratch/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-6df5dc97").then(n)}},{path:"/blog/2023/07/05/implement-cadence-worker-from-scratch/index.html",redirect:"/blog/2023/07/05/implement-cadence-worker-from-scratch/"},{path:"/_posts/2023-07-05-implement-cadence-worker-from-scratch.html",redirect:"/blog/2023/07/05/implement-cadence-worker-from-scratch/"},{name:"v-45466bdb",path:"/blog/2023/07/16/write-your-first-workflow-with-cadence/",component:vt.d
efault,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-45466bdb").then(n)}},{path:"/blog/2023/07/16/write-your-first-workflow-with-cadence/index.html",redirect:"/blog/2023/07/16/write-your-first-workflow-with-cadence/"},{path:"/_posts/2023-07-16-write-your-first-workflow-with-cadence.html",redirect:"/blog/2023/07/16/write-your-first-workflow-with-cadence/"},{name:"v-bed2d0d2",path:"/blog/2023/07/31/community-spotlight-july-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-bed2d0d2").then(n)}},{path:"/blog/2023/07/31/community-spotlight-july-2023/index.html",redirect:"/blog/2023/07/31/community-spotlight-july-2023/"},{path:"/_posts/2023-07-31-community-spotlight-july-2023.html",redirect:"/blog/2023/07/31/community-spotlight-july-2023/"},{name:"v-54c8d717",path:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-54c8d717").then(n)}},{path:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/index.html",redirect:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/"},{path:"/_posts/2023-08-28-nondeterministic-errors-replayers-shadowers.html",redirect:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/"},{name:"v-6e3f5451",path:"/blog/2023/11/30/community-spotlight-update-november-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-6e3f5451").then(n)}},{path:"/blog/2023/11/30/community-spotlight-update-november-2023/index.html",redirect:"/blog/2023/11/30/community-spotlight-update-november-2023/"},{path:"/_posts/2023-11-30-community-spotlight-update-november-2023.html",redirect:"/blog/2023/11/30/community-spotlight-update-november-2023/"},{name:"v-32adf8e6",path:"/blog/2023/07/10/cadence-bad-practices-part-1/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-32adf8e6").then(n)}},{path:"/blog/2023/07/10/cadence-bad-practices-part-1/index.html",redirect:"/blog/2023/07/10/cadence-bad-practices-part-1/"},{path:"/_posts/2023-07-10-cadence-bad-practices-part-1.html",redirect:"/blog/2023/07/10/cadence-bad-practices-part-1/"},{name:"v-0b00b852",path:"/blog/2023/08/31/community-spotlight-august-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-0b00b852").then(n)}},{path:"/blog/2023/08/31/community-spotlight-august-2023/index.html",redirect:"/blog/2023/08/31/community-spotlight-august-2023/"},{path:"/_posts/2023-08-31-community-spotlight-august-2023.html",redirect:"/blog/2023/08/31/community-spotlight-august-2023/"},{name:"v-39909852",path:"/blog/2024/03/10/cadence-non-deterministic-common-qa/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-39909852").then(n)}},{path:"/blog/2024/03/10/cadence-non-deterministic-common-qa/index.html",redirect:"/blog/2024/03/10/cadence-non-deterministic-common-qa/"},{path:"/_posts/2024-02-15-cadence-non-deterministic-common-qa.html",redirect:"/blog/2024/03/10/cadence-non-deterministic-common-qa/"},{name:"v-44d49837",path:"/blog/2024/07/11/yearly-roadmap-update/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-44d49837").then(n)}},{path:"/blog/2024/07/11/yearly-roadmap-update/index.html",redirect:"/blog/2024/07/11/yearly-roadmap-update/"},{path:"/_posts/2024-07-11-yearly-roadmap-update.html",redirect:"/blog/2024/07/11/yearly-roadmap-update/"},{name:"v-15401a12",path:"/blog/2024/09/05/workflow-specific-rate-limits/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-15401a12").then(n)}},{path:"/blog/2024/09/05/workflow-specific-rate-limits/
index.html",redirect:"/blog/2024/09/05/workflow-specific-rate-limits/"},{path:"/_posts/2024-09-05-workflow-specific-rate-limits.html",redirect:"/blog/2024/09/05/workflow-specific-rate-limits/"},{name:"v-480f0a7a",path:"/blog/2023/03/11/community-spotlight-update-march-2024/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-480f0a7a").then(n)}},{path:"/blog/2023/03/11/community-spotlight-update-march-2024/index.html",redirect:"/blog/2023/03/11/community-spotlight-update-march-2024/"},{path:"/_posts/2024-3-11-community-spotlight-update-march-2024.html",redirect:"/blog/2023/03/11/community-spotlight-update-march-2024/"},{name:"v-424df898",path:"/blog/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-424df898").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/index.html",redirect:"/blog/"},{name:"v-b1564aac",path:"/tag/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("FrontmatterKey","v-b1564aac").then(n)},meta:{pid:"tag",id:"tag"}},{path:"/tag/index.html",redirect:"/tag/"},{name:"v-c3507bb6",path:"/blog/page/2/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507bb6").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/2/index.html",redirect:"/blog/page/2/"},{name:"v-c3507b78",path:"/blog/page/3/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507b78").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/3/index.html",redirect:"/blog/page/3/"},{name:"v-c3507b3a",path:"/blog/page/4/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507b3a").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/4/index.html",redirect:"/blog/page/4/"},{name:"v-c3507afc",path:"/blog/page/5/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507afc").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/5/index.html",redirect:"/blog/page/5/"},{name:"v-c3507abe",path:"/blog/page/6/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507abe").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/6/index.html",redirect:"/blog/page/6/"},{name:"v-c3507a80",path:"/blog/page/7/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507a80").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/7/index.html",redirect:"/blog/page/7/"},{path:"*",component:vt.default}],bt={title:"",description:"",base:"/",headTags:[["link",{rel:"alternate",type:"application/rss+xml",href:"/rss.xml",title:" RSS Feed"}],["link",{rel:"alternate",type:"application/json",href:"/feed.json",title:" JSON Feed"}]],pages:[{title:"Long-term commitment and support for the Cadence project, and its community",frontmatter:{title:"Long-term commitment and support for the Cadence project, and its community",date:"2021-09-30T00:00:00.000Z",author:"Liang Mei",authorlink:"https://www.linkedin.com/in/meiliang86/",description:"Dear valued Cadence users and developers,\n\nSome of you might have read Temporal’s recent announcement about their decision to drop the support for the Cadence project. This message caused some confusion in the community, so we would like to take this opportunity to clear things out.\n\nFirst of all, Uber is committed to the long-term success of the Cadence project. Since its inception 5 years ago, use cases built on Cadence and their scale have grown significantly at Uber. Today, Cadence powers a variety of our most business-critical use cases (some public stories are available here and here). 
At the same time, the Cadence development team at Uber has enjoyed rapid growth with the product and has been driving innovations of workflow technology across the board, from new features (e.g. graceful failover, [workflow shadowing] ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2021-09-30-long-term-commitment-and-support-for-the-cadence-project-and-its-community.html",relativePath:"_posts/2021-09-30-long-term-commitment-and-support-for-the-cadence-project-and-its-community.md",key:"v-0dc9b01d",path:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/",summary:"Dear valued Cadence users and developers,\n\nSome of you might have read Temporal’s recent announcement about their decision to drop the support for the Cadence project. This message caused some confusion in the community, so we would like to take this opportunity to clear things out.\n\nFirst of all, Uber is committed to the long-term success of the Cadence project. Since its inception 5 years ago, use cases built on Cadence and their scale have grown significantly at Uber. Today, Cadence powers a variety of our most business-critical use cases (some public stories are available here and here). At the same time, the Cadence development team at Uber has enjoyed rapid growth with the product and has been driving innovations of workflow technology across the board, from new features (e.g. graceful failover, [workflow shadowing] ...",id:"post",pid:"post"},{title:"Announcing Cadence OSS office hours and community sync up",frontmatter:{title:"Announcing Cadence OSS office hours and community sync up",date:"2021-10-13T00:00:00.000Z",author:"Liang Mei",authorlink:"https://www.linkedin.com/in/meiliang86/",description:"Are you a current Cadence user, do you operate Cadence services, or are you interested in learning about workflow technologies and wonder what problems Cadence could solve for you? We would like to talk to you!\n\nOur team has spent a significant amount of time working with users and partner teams at Uber to design, scale and operate their workflows. This helps our users understand the technology better, smooth their learning curve and ramp up experience, and at the same time allows us to get fast and direct feedback so we can improve the developer experience and close feature gaps. As our product and community grow, we would like to expand this practice to our users in the OSS community. For the first time ever, members of the Cadence team along with core contributors from the community will host bi-weekly office hours to answer any questions you have about Cadence, or workflow technology in general. We can also dedicate future sessions to specific topics that have a common intere ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2021-10-13-announcing-cadence-oss-office-hours-and-community-sync-up.html",relativePath:"_posts/2021-10-13-announcing-cadence-oss-office-hours-and-community-sync-up.md",key:"v-dd6fb5d2",path:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/",summary:"Are you a current Cadence user, do you operate Cadence services, or are you interested in learning about workflow technologies and wonder what problems Cadence could solve for you? We would like to talk to you!\n\nOur team has spent a significant amount of time working with users and partner teams at Uber to design, scale and operate their workflows. 
This helps our users understand the technology better, smooth their learning curve and ramp up experience, and at the same time allows us to get fast and direct feedback so we can improve the developer experience and close feature gaps. As our product and community grow, we would like to expand this practice to our users in the OSS community. For the first time ever, members of the Cadence team along with core contributors from the community will host bi-weekly office hours to answer any questions you have about Cadence, or workflow technology in general. We can also dedicate future sessions to specific topics that have a common intere ...",id:"post",pid:"post"},{title:"Moving to gRPC",frontmatter:{title:"Moving to gRPC",date:"2021-10-19T00:00:00.000Z",author:"Vytautas Karpavicius",authorlink:"https://www.linkedin.com/in/vytautas-karpavicius",description:"\nCadence historically has been using TChannel transport with Thrift encoding for both internal RPC calls and communication with client SDKs. gRPC is becoming a de-facto industry standard with much better adoption and community support. It offers features such as authentication and streaming that are very relevant for Cadence. Moreover, TChannel is being deprecated within Uber itself, pushing an effort for this migration. During the last year we’ve implemented multiple changes in server and SDK that allow users to use gRPC in Cadence, as well as to upgrade their existing Cadence cluster in a backward compatible way. This post tracks the completed work items and our future plans.\n\nOur Approach\nWith ~500 services using Cadence at Uber and many more open source customers around the world, we had to think about the gRPC transition in a backwards compatible way. We couldn’t simply flip transport and encoding everywhere. Instead we needed to support both protocols as an intermediate step ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2021-10-19-moving-to-grpc.html",relativePath:"_posts/2021-10-19-moving-to-grpc.md",key:"v-4100b969",path:"/blog/2021/10/19/moving-to-grpc/",headers:[{level:2,title:"Background",slug:"background"},{level:2,title:"Our Approach",slug:"our-approach"},{level:2,title:"System overview",slug:"system-overview"},{level:2,title:"Migration steps",slug:"migration-steps"},{level:3,title:"Upgrading Cadence server",slug:"upgrading-cadence-server"},{level:3,title:"Upgrading clients",slug:"upgrading-clients"},{level:3,title:"Status at Uber",slug:"status-at-uber"}],summary:"\nCadence historically has been using TChannel transport with Thrift encoding for both internal RPC calls and communication with client SDKs. gRPC is becoming a de-facto industry standard with much better adoption and community support. It offers features such as authentication and streaming that are very relevant for Cadence. Moreover, TChannel is being deprecated within Uber itself, pushing an effort for this migration. During the last year we’ve implemented multiple changes in server and SDK that allow users to use gRPC in Cadence, as well as to upgrade their existing Cadence cluster in a backward compatible way. This post tracks the completed work items and our future plans.\n\nOur Approach\nWith ~500 services using Cadence at Uber and many more open source customers around the world, we had to think about the gRPC transition in a backwards compatible way. We couldn’t simply flip transport and encoding everywhere. 
Instead we needed to support both protocols as an intermediate step ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - January 2022",frontmatter:{title:"Cadence Community Spotlight Update - January 2022",date:"2022-01-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to our very first Cadence Community Spotlight update!\n\nThis monthly update focuses on news from the wider Cadence community and is all about what you have been doing with Cadence. Do you have an interesting project that uses Cadence? If so then we want to hear from you. Also if you have any news items, blogs, articles, videos or events where Cadence has been mentioned then that is good too. We want to showcase that our community is active and is doing exciting and interesting things.\n\nPlease see below for a short round up of things that have happened recently in the community.\n\nCommunity Related Office Hours\n\nOn the 12th January 2022 we held our first Cadence Community Related Office Hours. This session was focused on discussing how we plan and organise things for the community. This includes things such as Code of Conduct, managing social media and making sure we regularly communicate project news and events.\n\nAnd you can see that this monthly update is the result of the fe ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-01-31-community-spotlight-january-2022.html",relativePath:"_posts/2022-01-31-community-spotlight-january-2022.md",key:"v-5d913a79",path:"/blog/2022/01/31/community-spotlight-january-2022/",headers:[{level:2,title:"Community Related Office Hours",slug:"community-related-office-hours"},{level:2,title:"Adopting a Cadence Community Code of Conduct",slug:"adopting-a-cadence-community-code-of-conduct"},{level:2,title:"Recording from Cadence Meetup Available",slug:"recording-from-cadence-meetup-available"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to our very first Cadence Community Spotlight update!\n\nThis monthly update focuses on news from the wider Cadence community and is all about what you have been doing with Cadence. Do you have an interesting project that uses Cadence? If so then we want to hear from you. Also if you have any news items, blogs, articles, videos or events where Cadence has been mentioned then that is good too. We want to showcase that our community is active and is doing exciting and interesting things.\n\nPlease see below for a short round up of things that have happened recently in the community.\n\nCommunity Related Office Hours\n\nOn the 12th January 2022 we held our first Cadence Community Related Office Hours. This session was focused on discussing how we plan and organise things for the community. This includes things such as Code of Conduct, managing social media and making sure we regularly communicate project news and events.\n\nAnd you can see that this monthly update is the result of the fe ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - February 2022",frontmatter:{title:"Cadence Community Spotlight Update - February 2022",date:"2022-02-28T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the Cadence Community Spotlight update!\n\nThis is the second in our series of monthly updates focused on the Cadence community and news about what you have been doing with Cadence. 
We hope that you enjoyed last month's update and are keen to find out what has been happening.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nAnnouncements\n\nJust in case you missed it, the alpha version of the Cadence notification service has been released. Details can be found at the following link:\nCadence Notification Service\n\nThanks very much to everyone that worked on this!\n\nCommunity Supporting the Community\n\nDuring February, 16 questions were posted in the Cadence #support Slack channel from new Cadence users and existing community members looking for help and guidance. A very big thank you to the following community members who to ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-02-28-community-spotlight-february-2022.html",relativePath:"_posts/2022-02-28-community-spotlight-february-2022.md",key:"v-5bc86237",path:"/blog/2022/02/28/community-spotlight-february-2022/",headers:[{level:2,title:"Announcements",slug:"announcements"},{level:2,title:"Community Supporting the Community",slug:"community-supporting-the-community"},{level:2,title:"Please Subscribe to our Youtube Channel",slug:"please-subscribe-to-our-youtube-channel"},{level:2,title:"Help us to Make Cadence even better",slug:"help-us-to-make-cadence-even-better"},{level:2,title:"Cadence Calendar",slug:"cadence-calendar"},{level:2,title:"Cadence Technical Office Hours",slug:"cadence-technical-office-hours"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the Cadence Community Spotlight update!\n\nThis is the second in our series of monthly updates focused on the Cadence community and news about what you have been doing with Cadence. We hope that you enjoyed last month's update and are keen to find out what has been happening.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nAnnouncements\n\nJust in case you missed it, the alpha version of the Cadence notification service has been released. Details can be found at the following link:\nCadence Notification Service\n\nThanks very much to everyone that worked on this!\n\nCommunity Supporting the Community\n\nDuring February, 16 questions were posted in the Cadence #support Slack channel from new Cadence users and existing community members looking for help and guidance. A very big thank you to the following community members who to ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - March 2022",frontmatter:{title:"Cadence Community Spotlight Update - March 2022",date:"2022-03-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to our Cadence Community Spotlight update!\n\nThis is the latest in our series of monthly blog posts focused on the Cadence community and news about what you have been doing with Cadence.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nUpdated Cadence Topology Diagram\n\nDid you know that we have an updated Cadence Service diagram on the website? Well we do - and you can find it on our Deployment Topology page. 
We are always looking for information that helps make it easier for people to understand how Cadence works.\n\nSpecial thanks to Ben Slater for updating the diagram and also to Ender, Emrah and Long for helping review it.\n\nMonthly Cadence Technical Office Hours\n\nEvery month we hold a Technical Office Hours session via Zoom where you can speak directly with some of our Cadence experts. If you have a question about Cadence or are facing a particular issue getting ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-03-31-community-spotlight-update-march-2022.html",relativePath:"_posts/2022-03-31-community-spotlight-update-march-2022.md",key:"v-52ad8f77",path:"/blog/2022/03/31/community-spotlight-update-march-2022/",headers:[{level:2,title:"Updated Cadence Topology Diagram",slug:"updated-cadence-topology-diagram"},{level:2,title:"Monthly Cadence Technical Office Hours",slug:"monthly-cadence-technical-office-hours"},{level:2,title:"Some Cadence Statistics",slug:"some-cadence-statistics"},{level:2,title:"Using StackOverflow to Respond to Support Questions",slug:"using-stackoverflow-to-respond-to-support-questions"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to our Cadence Community Spotlight update!\n\nThis is the latest in our series of monthly blog posts focused on the Cadence community and news about what you have been doing with Cadence.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nUpdated Cadence Topology Diagram\n\nDid you know that we have an updated Cadence Service diagram on the website? Well we do - and you can find it on our Deployment Topology page. We are always looking for information that helps make it easier for people to understand how Cadence works.\n\nSpecial thanks to Ben Slater for updating the diagram and also to Ender, Emrah and Long for helping review it.\n\nMonthly Cadence Technical Office Hours\n\nEvery month we hold a Technical Office Hours session via Zoom where you can speak directly with some of our Cadence experts. If you have a question about Cadence or are facing a particular issue getting ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - April 2022",frontmatter:{title:"Cadence Community Spotlight Update - April 2022",date:"2022-04-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to our Cadence Community Spotlight update!\n\nThis is our monthly blog post series focused on news from in and around the Cadence community.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nSD Times Names Cadence Open Source Project of the Week\n\nIn April Cadence was named as open source project of the week by the SD Times. Being named gives the project some great publicity and means the project is getting noticed. You can find a link to the article in the Cadence in the News section below.\n\nFollow Us on LinkedIn and Twitter!\n\nWe have now set up Cadence accounts on LinkedIn and Twitter where you can keep up to date with what is happening in the community. We will be using these social media accounts to share news, articles, stories and links related to Cadence - so please follow us!\n\nAnd don’t forget to share your news with us. 
We are l ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-04-30-community-spotlight-update-april-2022.html",relativePath:"_posts/2022-04-30-community-spotlight-update-april-2022.md",key:"v-59a2ac57",path:"/blog/2022/04/30/community-spotlight-update-april-2022/",headers:[{level:2,title:"SD Times Names Cadence Open Source Project of the Week",slug:"sd-times-names-cadence-open-source-project-of-the-week"},{level:2,title:"Follow Us on LinkedIn and Twitter!",slug:"follow-us-on-linkedin-and-twitter"},{level:2,title:"Proposal to Change the Way We Write Workflows",slug:"proposal-to-change-the-way-we-write-workflows"},{level:2,title:"Help Us Improve Cadence",slug:"help-us-improve-cadence"},{level:2,title:"Next Cadence Technical Office Hours: 30th May 2022",slug:"next-cadence-technical-office-hours-30th-may-2022"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to our Cadence Community Spotlight update!\n\nThis is our monthly blog post series focused on news from in and around the Cadence community.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nSD Times Names Cadence Open Source Project of the Week\n\nIn April Cadence was named as open source project of the week by the SD Times. Being named gives the project some great publicity and means the project is getting noticed. You can find a link to the article in the Cadence in the News section below.\n\nFollow Us on LinkedIn and Twitter!\n\nWe have now set up Cadence accounts on LinkedIn and Twitter where you can keep up to date with what is happening in the community. We will be using these social media accounts to share news, articles, stories and links related to Cadence - so please follow us!\n\nAnd don’t forget to share your news with us. We are l ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - May 2022",frontmatter:{title:"Cadence Community Spotlight Update - May 2022",date:"2022-05-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to our regular Cadence Community Spotlight update!\n\nThis is our monthly blog post series focused on news from in and around the Cadence community.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nCadence Polling Cookbook\n\nDo you want to understand polling work and have an example of how to set it up in Cadence? Well a brand new Cadence Polling cookbook is now available that gives you all the details you need. The cookbook was created by several members of the Instaclustr team and they are keen to share it with the community. 
The pdf version of the cookbook can be found on the Cadence website under the Polling an external API for a specific resource to become available section of the Polling Use cases.\n\nA [Github repository](https://github.com/instaclustr/cadence-cookbook ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-05-31-community-spotlight-update-may-2022.html",relativePath:"_posts/2022-05-31-community-spotlight-update-may-2022.md",key:"v-586fa1f7",path:"/blog/2022/05/31/community-spotlight-update-may-2022/",headers:[{level:2,title:"Cadence Polling Cookbook",slug:"cadence-polling-cookbook"},{level:2,title:"Congratulations to a First Time Contributor",slug:"congratulations-to-a-first-time-contributor"},{level:2,title:"Share Your News!",slug:"share-your-news"},{level:2,title:"Next Cadence Technical Office Hours: 3rd and 27th June 2022",slug:"next-cadence-technical-office-hours-3rd-and-27th-june-2022"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to our regular Cadence Community Spotlight update!\n\nThis is our monthly blog post series focused on news from in and around the Cadence community.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nCadence Polling Cookbook\n\nDo you want to understand polling work and have an example of how to set it up in Cadence? Well a brand new Cadence Polling cookbook is now available that gives you all the details you need. The cookbook was created by several members of the Instaclustr team and they are keen to share it with the community. The pdf version of the cookbook can be found on the Cadence website under the Polling an external API for a specific resource to become available section of the Polling Use cases.\n\nA [Github repository](https://github.com/instaclustr/cadence-cookbook ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - July 2022",frontmatter:{title:"Cadence Community Spotlight Update - July 2022",date:"2022-07-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s our monthly Community Spotlight update that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nFlying Drones with Cadence\n\nCommunity member Paul Brebner has released another blog in the series of using Cadence to manage a drone delivery service. You can see a simulated view of it in action\n\nDon’t forget to try out the code yourself and remember if you have used Cadence to do something interesting then please let us know so we can feature it in our next update.\n\nGitHub Statistics\n\nDuring July the main Cadence branch had 28 pull requests (PRs) merged. There were 214 files changed by 11 different authors. 
You can find more details here\n\nThe Cadence documentati ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-07-31-community-spotlight-update-july-2022.html",relativePath:"_posts/2022-07-31-community-spotlight-update-july-2022.md",key:"v-46e2ddd1",path:"/blog/2022/07/31/community-spotlight-update-july-2022/",headers:[{level:2,title:"Flying Drones with Cadence",slug:"flying-drones-with-cadence"},{level:2,title:"GitHub Statistics",slug:"github-statistics"},{level:2,title:"Cadence Roadmap",slug:"cadence-roadmap"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s our monthly Community Spotlight update that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nFlying Drones with Cadence\n\nCommunity member Paul Brebner has released another blog in the series of using Cadence to manage a drone delivery service. You can see a simulated view of it in action\n\nDon’t forget to try out the code yourself and remember if you have used Cadence to do something interesting then please let us know so we can feature it in our next update.\n\nGitHub Statistics\n\nDuring July the main Cadence branch had 28 pull requests (PRs) merged. There were 214 files changed by 11 different authors. You can find more details here\n\nThe Cadence documentati ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - June 2022",frontmatter:{title:"Cadence Community Spotlight Update - June 2022",date:"2022-06-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"It’s time for our monthly Cadence Community Spotlight update with news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nKnowledge Sharing and Support\n\nOur Slack #support channel has been busy this month with 13 questions asked by 12 different community members. Six community members took time to respond to those questions, which clearly shows our community is growing, collaborating and keen to share knowledge.\n\nPlease don’t forget that we encourage everyone to post questions on StackOverflow using the cadence-workflow and uber-cadence tags so that others with similar questions or issues can easily search for and find an answer.\n\nImproving Technical Office Hours\n\nOver the last few months we have been holding regular monthly Office Hours meetings but they have not attracted as many participants as we would like. 
We would like to understand if there is something preventing people from attending (e.g. perhaps the timing or ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-06-30-community-spotlight-update-june-2022.html",relativePath:"_posts/2022-06-30-community-spotlight-update-june-2022.md",key:"v-2a9dfbe5",path:"/blog/2022/06/30/community-spotlight-update-june-2022/",headers:[{level:2,title:"Knowledge Sharing and Support",slug:"knowledge-sharing-and-support"},{level:2,title:"Improving Technical Office Hours",slug:"improving-technical-office-hours"},{level:2,title:"Cadence Stability Improvements",slug:"cadence-stability-improvements"},{level:2,title:"Sprechen Sie Deutsch?",slug:"sprechen-sie-deutsch"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"It’s time for our monthly Cadence Community Spotlight update with news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nKnowledge Sharing and Support\n\nOur Slack #support channel has been busy this month with 13 questions asked by 12 different community members. Six community members took time to respond to those questions, which clearly shows our community is growing, collaborating and keen to share knowledge.\n\nPlease don’t forget that we encourage everyone to post questions on StackOverflow using the cadence-workflow and uber-cadence tags so that others with similar questions or issues can easily search for and find an answer.\n\nImproving Technical Office Hours\n\nOver the last few months we have been holding regular monthly Office Hours meetings but they have not attracted as many participants as we would like. We would like to understand if there is something preventing people from attending (e.g. perhaps the timing or ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - August 2022",frontmatter:{title:"Cadence Community Spotlight Update - August 2022",date:"2022-08-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCommunity Survey\n\nWe are working on putting together our first community survey to find out a bit more about our community. We would like to get your feedback on a few things such as:\n\nhow you are using Cadence\nany specific experiences you have had where you'd like to see new features\nany special use cases not yet covered\nand of course whatever other feedback you'd like to give us\n\nSo please watch out for the survey which will be coming out to you via the Slack channel soon!\n\nSupport Activity\n\nWe have noticed that community activity is increasing and that we are continuing to respond to questions in our Slack #support channel. Eight questions have been posted in the channel this month and another seven questions have been posted on StackOverflow. 
We encourage people to post their questi ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-08-31-community-spotlight-august-2022.html",relativePath:"_posts/2022-08-31-community-spotlight-august-2022.md",key:"v-151d3dd2",path:"/blog/2022/08/31/community-spotlight-august-2022/",headers:[{level:2,title:"Community Survey",slug:"community-survey"},{level:2,title:"Support Activity",slug:"support-activity"},{level:2,title:"GitHub Activity",slug:"github-activity"},{level:2,title:"Come Along to Our Next Cadence Meetup!",slug:"come-along-to-our-next-cadence-meetup"},{level:2,title:"Looking for a Cadence Role?",slug:"looking-for-a-cadence-role"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCommunity Survey\n\nWe are working on putting together our first community survey to find out a bit more about our community. We would like to get your feedback on a few things such as:\n\nhow you are using Cadence\nany specific experiences you have had where you'd like to see new features\nany special use cases not yet covered\nand of course whatever other feedback you'd like to give us\n\nSo please watch out for the survey which will be coming out to you via the Slack channel soon!\n\nSupport Activity\n\nWe have noticed that community activity is increasing and that we are continuing to respond to questions in our Slack #support channel. Eight questions have been posted in the channel this month and another seven questions have been posted on StackOverflow. We encourage people to post their questi ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - September 2022",frontmatter:{title:"Cadence Community Spotlight Update - September 2022",date:"2022-10-11T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence at Developer Week\n\nA Cadence talk by Ender Demirkaya and Ben Slater has been accepted for Developer Week Enterprise.\n\nThe talk is scheduled for 16th November so please make a note in your calendars.\n\nSharing Knowledge\n\nOver the last few months we have had a continual stream of Cadence questions in our Slack #support channel or on StackOverflow. 
As a result of the increased interest, some members from the Cadence core team have decided to spend some time each day responding to your questions.\n\nRemember that if you have received a response ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-09-30-community-spotlight-september-2022.html",relativePath:"_posts/2022-09-30-community-spotlight-september-2022.md",key:"v-793e7375",path:"/blog/2022/10/11/community-spotlight-september-2022/",headers:[{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence at Developer Week\n\nA Cadence talk by Ender Demirkaya and Ben Slater has been accepted for Developer Week Enterprise.\n\nThe talk is scheduled for 16th November so please make a note in your calendars.\n\nSharing Knowledge\n\nOver the last few months we have had a continual stream of Cadence questions in our Slack #support channel or on StackOverflow. As a result of the increased interest, some members from the Cadence core team have decided to spend some time each day responding to your questions.\n\nRemember that if you have received a response ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - October 2022",frontmatter:{title:"Cadence Community Spotlight Update - October 2022",date:"2022-10-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence Meetup Postponed\n\nIt's always great to get the community together and we had planned to run another Cadence Meetup in early November. Unfortunately we didn't have enough time to get things organised so we've decided to postpone it. So please watch out for an announcement for the new Cadence meetup date.\n\nDoordash Technical Showcase Featuring Cadence\n\nWe have had some great feedback from people who attended the Technical Showcase that was run this month by Doordash. 
It featured their financial products but also highlighted some of the key technologies they use...and guess what Cadence is one of them!\n\nIf you missed the session then you will be happy to know that it was recorded and we've included a link to the recording on Youtube.\n\nThanks to ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-10-31-community-spotlight-october-2022.html",relativePath:"_posts/2022-10-31-community-spotlight-october-2022.md",key:"v-5f5271a9",path:"/blog/2022/10/31/community-spotlight-october-2022/",headers:[{level:2,title:"Cadence Meetup Postponed",slug:"cadence-meetup-postponed"},{level:2,title:"Doordash Technical Showcase Featuring Cadence",slug:"doordash-technnical-showcase-featuring-cadence"},{level:2,title:"iWF Support for Cadence",slug:"iwf-support-for-cadence"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence Meetup Postponed\n\nIt's always great to get the community together and we had planned to run another Cadence Meetup in early November. Unfortunately we didn't have enough time to get things organised so we've decided to postpone it. So please watch out for an announcement for the new Cadence meetup date.\n\nDoordash Technical Showcase Featuring Cadence\n\nWe have had some great feedback from people who attended the Technical Showcase that was run this month by Doordash. It featured their financial products but also highlighted some of the key technologies they use...and guess what Cadence is one of them!\n\nIf you missed the session then you will be happy to know that it was recorded and we've included a link to the recording on Youtube.\n\nThanks to ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - November 2022",frontmatter:{title:"Cadence Community Spotlight Update - November 2022",date:"2022-11-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence @ Uber\n\nThis month Uber Engineering published a really nice article on one of the ways they are using Cadence. The article is called How Uber Optimizes the Timing of Push Notifications using ML and Linear Programming.\n\nThe Uber team take you through the details of the problem that they are looking to solve, so you can understand the scope limitations and dependencies - so please take a look.\n\nCadence @ DeveloperWeek Enterprise\n\nDevNetwork run a series of conferences and during November Cadence was featured at DeveloperWeek Enterprise. 
Ender Demirkaya and [Ben Slater](https://www.linkedin.com/in/ ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-11-30-community-spotlight-november-2022.html",relativePath:"_posts/2022-11-30-community-spotlight-november-2022.md",key:"v-185e9f52",path:"/blog/2022/11/30/community-spotlight-november-2022/",headers:[{level:2,title:"Cadence @ Uber",slug:"cadence-uber"},{level:2,title:"Cadence @ DeveloperWeek Enterprise",slug:"cadence-developerweek-enterprise"},{level:2,title:"Cadence at W-JAX",slug:"cadence-at-w-jax"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence @ Uber\n\nThis month Uber Engineering published a really nice article on one of the ways they are using Cadence. The article is called How Uber Optimizes the Timing of Push Notifications using ML and Linear Programming.\n\nThe Uber team takes you through the details of the problem that they are looking to solve, so you can understand the scope, limitations and dependencies - so please take a look.\n\nCadence @ DeveloperWeek Enterprise\n\nDevNetwork runs a series of conferences and during November Cadence was featured at DeveloperWeek Enterprise. Ender Demirkaya and [Ben Slater](https://www.linkedin.com/in/ ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - December 2022",frontmatter:{title:"Cadence Community Spotlight Update - December 2022",date:"2022-12-23T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"I know we are a little early this month as many people will be taking some time out for holidays.\n\nHappy Holidays\n\nWe'd like to wish everyone happy holidays and to thank you for being part of the Cadence community. It's been a busy year for Cadence as we have continued to build a strong, active community that works together to solve issues and generally support each other.\n\nLet's keep going!...This is a great way to build a sustainable community.\n\nWe are sure that 2023 will be even more exciting as we continue to develop Cadence.\n\nCadence in the News!\n\nBelow are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.\n\nCadence iWF\n\nChild Workflow Cookbook\n\n[Cadence Connection Examples Using TLS](https://www.instaclus ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-12-23-community-spotlight-december-2022.html",relativePath:"_posts/2022-12-23-community-spotlight-december-2022.md",key:"v-6582ae57",path:"/blog/2022/12/23/community-spotlight-december-2022/",headers:[{level:2,title:"Happy Holidays",slug:"happy-holidays"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"I know we are a little early this month as many people will be taking some time out for holidays.\n\nHappy Holidays\n\nWe'd like to wish everyone happy holidays and to thank you for being part of the Cadence community. 
It's been a busy year for Cadence as we have continued to build a strong, active community that works together to solve issues and generally support each other.\n\nLet's keep going!...This is a great way to build a sustainable community.\n\nWe are sure that 2023 will be even more exciting as we continue to develop Cadence.\n\nCadence in the News!\n\nBelow are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.\n\nCadence iWF\n\nChild Workflow Cookbook\n\n[Cadence Connection Examples Using TLS](https://www.instaclus ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - February 2023",frontmatter:{title:"Cadence Community Spotlight Update - February 2023",date:"2023-02-28T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCommunity Survey\nWe've been talking about doing a community survey for a while and during February we sent it out. We are still collating the results so it's not too late to send in your response.\n\nThe survey takes 5 minutes and is your opportunity to provide feedback to the project and highlight areas you think we need to focus on.\n\nUse this Survey Link\n\nPlease take a few minutes to give us your opinion.\n\nCadence and Temporal\nDuring user surveys we've had a few queries about whether Cadence and Temporal are the same project. The answer is No - they are not the same project but they do share the same origin. At a high level Temporal is a fork of the Cadence project. Both Temporal and Cadence are now being developed by different ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-02-28-community-spotlight-february.html",relativePath:"_posts/2023-02-28-community-spotlight-february.md",key:"v-55690947",path:"/blog/2023/02/28/community-spotlight-february/",headers:[{level:2,title:"Community Survey",slug:"community-survey"},{level:2,title:"Cadence and Temporal",slug:"cadence-and-temporal"},{level:2,title:"Cadence at DoorDash",slug:"cadence-at-doordash"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCommunity Survey\nWe've been talking about doing a community survey for a while and during February we sent it out. We are still collating the results so it's not too late to send in your response.\n\nThe survey takes 5 minutes and is your opportunity to provide feedback to the project and highlight areas you think we need to focus on.\n\nUse this Survey Link\n\nPlease take a few minutes to give us your opinion.\n\nCadence and Temporal\nDuring user surveys we've had a few queries about whether Cadence and Temporal are the same project. The answer is No - they are not the same project but they do share the same origin. At a high level Temporal is a fork of the Cadence project. 
Both Temporal and Cadence are now being developed by different ...",id:"post",pid:"post"},{title:"2023 Cadence Community Survey Results",frontmatter:{title:"2023 Cadence Community Survey Results",date:"2023-06-08T00:00:00.000Z",author:"Ender Demirkaya",authorlink:"https://www.linkedin.com/in/enderdemirkaya/",description:"We released a user survey earlier this year to learn about who our users are, how they use Cadence, and how we can help them. It was shared from our Slack workspace, cadenceworkflow.io Blog and LinkedIn. After collecting the feedback, we wanted to share the results with our community. Thank you everyone for filling it out! Your feedback is invaluable and it helps us shape our roadmap for the future.\n\nHere are some highlights in text and you can check out the visuals to get more details:\n\nusing.png\n\njob_role.png\n\nMost of the people who replied to our survey were engineers who were already using Cadence, actively evaluating, or migrating from a similar technology. This was exciting to hear! Some of you have contacted us to learn more about benchmarks, scale, and ideal ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-06-08-survey-results.html",relativePath:"_posts/2023-06-08-survey-results.md",key:"v-2315d60a",path:"/blog/2023/06/08/survey-results/",summary:"We released a user survey earlier this year to learn about who our users are, how they use Cadence, and how we can help them. It was shared from our Slack workspace, cadenceworkflow.io Blog and LinkedIn. After collecting the feedback, we wanted to share the results with our community. Thank you everyone for filling it out! Your feedback is invaluable and it helps us shape our roadmap for the future.\n\nHere are some highlights in text and you can check out the visuals to get more details:\n\nusing.png\n\njob_role.png\n\nMost of the people who replied to our survey were engineers who were already using Cadence, actively evaluating, or migrating from a similar technology. This was exciting to hear! 
Some of you have contacted us to learn more about benchmarks, scale, and ideal ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - March 2023",frontmatter:{title:"Cadence Community Spotlight Update - March 2023",date:"2023-03-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence at Open Source Summit, North America\nWe are very pleased to let you know that a talk on Cadence has been accepted for the Linux Foundation's Open Source Summit, North America in Vancouver on 10th - 12th May 2023.\n\nThe talk called Cadence: The New Open Source Project for Building Complex Distributed Applications will be given by Ender Demirkaya and Emrah Seker. If you are planning to attend the Open Source Summit then please don't forget to attend the talk and take time to catch up with Ender and Emrah!\n\nCommunity Activity\nOur Slack #support channel has been very active over the last fe ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-03-31-community-spotlight-march-2023.html",relativePath:"_posts/2023-03-31-community-spotlight-march-2023.md",key:"v-9e2dfeb2",path:"/blog/2023/03/31/community-spotlight-march-2023/",headers:[{level:2,title:"Cadence at Open Source Summit, North America",slug:"cadence-at-open-source-summit-north-america"},{level:2,title:"Community Activity",slug:"community-activity"},{level:2,title:"Cadence Developer Advocate",slug:"cadence-developer-advocate"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence at Open Source Summit, North America\nWe are very pleased to let you know that a talk on Cadence has been accepted for the Linux Foundation's Open Source Summit, North America in Vancouver on 10th - 12th May 2023.\n\nThe talk called Cadence: The New Open Source Project for Building Complex Distributed Applications will be given by Ender Demirkaya and Emrah Seker. If you are planning to attend the Open Source Summit then please don't forget to attend the talk and take time to catch up with Ender and Emrah!\n\nCommunity Activity\nOur Slack #support channel has been very active over the last fe ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - January 2023",frontmatter:{title:"Cadence Community Spotlight Update - January 2023",date:"2023-01-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Happy New Year everyone! Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nClosing Down Cadence Office Hours\nWe have been running Office Hours sessions every month since May last year. The aim was to give the community an opportunity to speak directly with some of the Cadence core developers and experts to answer questions on particular issues you may be having. 
We have found that the most preferred method for community questions has been the support Slack channel so we have decided to stop this monthly call.\n\nThanks very much to Ender Demirkaya and the Uber team for making themselves available for these sessions.\n\nPlease remember that if you have a question about Cadence or are facing a specific issue then you can post your question in our #support Slack channel. If you al ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-01-31-community-spotlight-january-2023.html",relativePath:"_posts/2023-01-31-community-spotlight-january-2023.md",key:"v-1ea4d8b9",path:"/blog/2023/01/31/community-spotlight-january-2023/",headers:[{level:2,title:"Closing Down Cadence Office Hours",slug:"closing-down-cadence-office-hours"},{level:2,title:"Update on iWF Support for Cadence",slug:"update-on-iwf-support-for-cadence"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Happy New Year everyone! Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nClosing Down Cadence Office Hours\nWe have been running Office Hours sessions every month since May last year. The aim was to give the community an opportunity to speak directly with some of the Cadence core developers and experts to answer questions on particular issues you may be having. We have found that the most preferred method for community questions has been the support Slack channel so we have decided to stop this monthly call.\n\nThanks very much to Ender Demirkaya and the Uber team for making themselves available for these sessions.\n\nPlease remember that if you have a question about Cadence or are facing a specific issue then you can post your question in our #support Slack channel. If you al ...",id:"post",pid:"post"},{title:"Understanding components of Cadence application",frontmatter:{title:"Understanding components of Cadence application",date:"2023-07-01T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:"Cadence is a powerful, scalable, and fault-tolerant workflow orchestration framework that helps developers implement and manage complex workflow tasks. In most cases, developers contribute activities and workflows directly to their codebases, and they may not have a full understanding of the components behind a running Cadence application. We receive numerous inquiries about setting up Cadence in a local environment from scratch for testing. Therefore, in this article, we will explore the components that power a Cadence cluster.\n\nThere are three critical components that are essential for any Cadence application:\nA running Cadence backend server.\nA registered Cadence domain.\nA running Cadence worker that registers all workflows and activities.\n\nLet's go over these components in more detail.\n\nThe Cadence backend serves as the heart of your Cadence application. It is responsible for processing and scheduling your workflows and activities. 
While the backend relies on various dep ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-06-28-components-of-cadence-application-setup.html",relativePath:"_posts/2023-06-28-components-of-cadence-application-setup.md",key:"v-4ff003f7",path:"/blog/2023/07/01/components-of-cadence-application-setup/",summary:"Cadence is a powerful, scalable, and fault-tolerant workflow orchestration framework that helps developers implement and manage complex workflow tasks. In most cases, developers contribute activities and workflows directly to their codebases, and they may not have a full understanding of the components behind a running Cadence application. We receive numerous inquiries about setting up Cadence in a local environment from scratch for testing. Therefore, in this article, we will explore the components that power a Cadence cluster.\n\nThere are three critical components that are essential for any Cadence application:\nA running Cadence backend server.\nA registered Cadence domain.\nA running Cadence worker that registers all workflows and activities.\n\nLet's go over these components in more detail.\n\nThe Cadence backend serves as the heart of your Cadence application. It is responsible for processing and scheduling your workflows and activities. While the backend relies on various dep ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - June 2023",frontmatter:{title:"Cadence Community Spotlight Update - June 2023",date:"2023-06-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"We've had a short break but now we are back. Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence Release 1.0\n\nJust in case you missed it - at the end of April Cadence v1.0 was officially released. This release is a significant milestone for the project and the community. It indicates that we are confident enough in the stability of the code to recommend it and promote it widely to more users. Kudos to everyone that worked together to make this release happen.\n\nAnd the Uber team also gave Cadence a writeup on the Uber Engineering Blog so please take a look.\n\nCommunity Survey Results\n\nThe results of our Community Survey have been published and you can find [the details right here on our blog](https://cadenceworkflow.io/blog/2 ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-06-30-community-spotlight-june-2023.html",relativePath:"_posts/2023-06-30-community-spotlight-june-2023.md",key:"v-7ca21f57",path:"/blog/2023/06/30/community-spotlight-june-2023/",headers:[{level:2,title:"Cadence Release 1.0",slug:"cadence-release-1-0"},{level:2,title:"Community Survey Results",slug:"community-survey-results"},{level:2,title:"Cadence Video Open Source Summit, North America",slug:"cadence-video-open-source-summit-north-america"},{level:2,title:"Overcoming Potential Workflow Versioning Maintenance Challenges",slug:"overcoming-potential-workflow-versioning-maintenance-challenges"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"We've had a short break. 
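The three components described in the "Understanding components of Cadence application" entry above typically meet in a small piece of bootstrap code. The following is a minimal sketch in Go of how a worker process could wire them together, loosely following the conventions of the cadence-samples repository; the domain name, task list name, dispatcher name, and frontend address (127.0.0.1:7933 is the default for a locally running server) are illustrative assumptions, not values prescribed by the post.

	package main

	import (
		"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
		"go.uber.org/cadence/worker"
		"go.uber.org/yarpc"
		"go.uber.org/yarpc/transport/tchannel"
		"go.uber.org/zap"
	)

	const (
		cadenceService = "cadence-frontend" // service name the Cadence frontend registers under
		domain         = "test-domain"      // assumed: a domain registered beforehand
		taskList       = "sample-task-list" // assumed: any task list name your workflows use
		hostPort       = "127.0.0.1:7933"   // default frontend address for a local server
	)

	func main() {
		// Component 1: dial the running Cadence backend server.
		ch, err := tchannel.NewChannelTransport(tchannel.ServiceName("components-demo"))
		if err != nil {
			panic(err)
		}
		dispatcher := yarpc.NewDispatcher(yarpc.Config{
			Name: "components-demo",
			Outbounds: yarpc.Outbounds{
				cadenceService: {Unary: ch.NewSingleOutbound(hostPort)},
			},
		})
		if err := dispatcher.Start(); err != nil {
			panic(err)
		}
		service := workflowserviceclient.New(dispatcher.ClientConfig(cadenceService))

		// Components 2 and 3: start a worker polling the registered domain.
		// Workflows and activities are assumed to be registered elsewhere in the package.
		logger, _ := zap.NewDevelopment()
		w := worker.New(service, domain, taskList, worker.Options{Logger: logger})
		if err := w.Run(); err != nil { // blocks while the worker polls its task list
			panic(err)
		}
	}

A worker built this way will fail fast if the backend is unreachable or the domain is not registered, which makes it a convenient smoke test for the local setup the article describes.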
Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence Release 1.0\n\nJust in case you missed it - at the end of April Cadence v1.0 was officially released. This release is a significant milestone for the project and the community. It indicates that we are confident enough in the stability of the code to recommend it and promote it widely to more users. Kudos to everyone that worked together to make this release happen.\n\nAnd the Uber team also gave Cadence a writeup on the Uber Engineering Blog so please take a look.\n\nCommunity Survey Results\n\nThe results of our Community Survey have been published and you can find [the details right here on our blog](https://cadenceworkflow.io/blog/2 ...",id:"post",pid:"post"},{title:"Implement a Cadence worker service from scratch",frontmatter:{title:"Implement a Cadence worker service from scratch",date:"2023-07-05T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:'In the previous blog, we have introduced three critical components for a Cadence application: the Cadence backend, domain, and worker. Among these, the worker service is the most crucial focus for developers as it hosts the activities and workflows of a Cadence application. In this blog, I will provide a short tutorial on how to implement a simple worker service from scratch in Go.\n\nTo finish this tutorial, there are two prerequisites you need to complete first:\nRegister a Cadence domain for your worker. For this tutorial, I\'ve already registered a domain named test-domain.\nStart the Cadence backend server in the background.\n\nTo get started, let\'s simply use the native HTTP package built into Go to start a process listening on port 3000. You may customize the port for your worker, but the port you choose should not conflict with the existing port for your Cadence backend.\n\npackage main\n\nimport (\n\t"fmt"\n\t"net/http"\n)\n\nfunc main( ...',layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-07-05-implement-cadence-worker-from-scratch.html",relativePath:"_posts/2023-07-05-implement-cadence-worker-from-scratch.md",key:"v-6df5dc97",path:"/blog/2023/07/05/implement-cadence-worker-from-scratch/",summary:'In the previous blog, we have introduced three critical components for a Cadence application: the Cadence backend, domain, and worker. Among these, the worker service is the most crucial focus for developers as it hosts the activities and workflows of a Cadence application. In this blog, I will provide a short tutorial on how to implement a simple worker service from scratch in Go.\n\nTo finish this tutorial, there are two prerequisites you need to complete first:\nRegister a Cadence domain for your worker. For this tutorial, I\'ve already registered a domain named test-domain.\nStart the Cadence backend server in the background.\n\nTo get started, let\'s simply use the native HTTP package built into Go to start a process listening on port 3000. 
You may customize the port for your worker, but the port you choose should not conflict with the existing port for your Cadence backend.\n\npackage main\n\nimport (\n\t"fmt"\n\t"net/http"\n)\n\nfunc main( ...',id:"post",pid:"post"},{title:"Write your first workflow with Cadence",frontmatter:{title:"Write your first workflow with Cadence",date:"2023-07-16T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:'We have covered basic components of Cadence and how to implement a Cadence worker in a local environment in previous blogs. In this blog, let\'s write your very first HelloWorld workflow with Cadence. I\'ve started the Cadence backend server in the background and registered a domain named test-domain. You may use the code snippet for the worker service in this blog. Let\'s first write an activity, which takes a single string argument and prints a log in the console.\n\nfunc helloWorldActivity(ctx context.Context, name string) (string, error) {\n\tlogger := activity.GetLogger(ctx)\n\tlogger.Info("helloworld activity started")\n\treturn "Hello " + name + "!", nil\n}\n\nThen let\'s write a workflow that invokes this activity:\nfunc helloWorldWorkflow(ctx workflow.Context, name string) error {\n\tao := workflow.ActivityOptions{\n ...',layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-07-16-write-your-first-workflow-with-cadence.html",relativePath:"_posts/2023-07-16-write-your-first-workflow-with-cadence.md",key:"v-45466bdb",path:"/blog/2023/07/16/write-your-first-workflow-with-cadence/",summary:'We have covered basic components of Cadence and how to implement a Cadence worker in a local environment in previous blogs. In this blog, let\'s write your very first HelloWorld workflow with Cadence. I\'ve started the Cadence backend server in the background and registered a domain named test-domain. You may use the code snippet for the worker service in this blog. Let\'s first write an activity, which takes a single string argument and prints a log in the console.\n\nfunc helloWorldActivity(ctx context.Context, name string) (string, error) {\n\tlogger := activity.GetLogger(ctx)\n\tlogger.Info("helloworld activity started")\n\treturn "Hello " + name + "!", nil\n}\n\nThen let\'s write a workflow that invokes this activity:\nfunc helloWorldWorkflow(ctx workflow.Context, name string) error {\n\tao := workflow.ActivityOptions{\n ...',id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - July 2023",frontmatter:{title:"Cadence Community Spotlight Update - July 2023",date:"2023-07-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nGetting Started with Cadence\n\nAre you new to Cadence and want to understand the basic concepts and architecture? Well we have some great information for you!\n\nCommunity member Chris Qin has written a short blog post that takes you through the three main components that make up a Cadence application. Please take a look and feel free to give us your comments and feedback.\n\nThanks Chris for sharing your knowledge and helping others to get started.\n\nCadence Go Client v1.0 Released\n\nThis month saw the release of v1.0 of the Cadence Go Client. 
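The helloWorldWorkflow excerpt in the "Write your first workflow with Cadence" entry above stops at the ActivityOptions literal. A hedged completion of the workflow body, assuming the helloWorldActivity shown in the excerpt and the standard go.uber.org/cadence/workflow, go.uber.org/cadence/activity, go.uber.org/zap, and time imports, could look like this:

	func helloWorldWorkflow(ctx workflow.Context, name string) error {
		ao := workflow.ActivityOptions{
			ScheduleToStartTimeout: time.Minute,
			StartToCloseTimeout:    time.Minute,
		}
		ctx = workflow.WithActivityOptions(ctx, ao)

		var result string
		// Blocks until helloWorldActivity completes or one of its timeouts fires.
		if err := workflow.ExecuteActivity(ctx, helloWorldActivity, name).Get(ctx, &result); err != nil {
			return err
		}
		workflow.GetLogger(ctx).Info("helloworld workflow completed", zap.String("result", result))
		return nil
	}

	func init() {
		// Both functions must be registered before the worker starts polling.
		workflow.Register(helloWorldWorkflow)
		activity.Register(helloWorldActivity)
	}

The activity result is read back through the returned Future's Get call, which is also where activity errors (including timeouts) surface inside the workflow.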
Note that the work done on this release was as a result ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-07-31-community-spotlight-july-2023.html",relativePath:"_posts/2023-07-31-community-spotlight-july-2023.md",key:"v-bed2d0d2",path:"/blog/2023/07/31/community-spotlight-july-2023/",headers:[{level:2,title:"Getting Started with Cadence",slug:"getting-started-with-cadence"},{level:2,title:"Cadence Go Client v1.0 Released",slug:"cadence-go-client-v1-0-released"},{level:2,title:"Cadence Release Strategy",slug:"cadence-release-strategy"},{level:2,title:"Cadence Helm Charts",slug:"cadence-helm-charts"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nGetting Started with Cadence\n\nAre you new to Cadence and want to understand the basic concepts and architecture? Well we have some great information for you!\n\nCommunity member Chris Qin has written a short blog post that takes you through the three main components that make up a Cadence application. Please take a look and feel free to give us your comments and feedback.\n\nThanks Chris for sharing your knowledge and helping others to get started.\n\nCadence Go Client v1.0 Released\n\nThis month saw the release of v1.0 of the Cadence Go Client. Note that the work done on this release was as a result ...",id:"post",pid:"post"},{title:"Non-deterministic errors, replayers and shadowers",frontmatter:{title:"Non-deterministic errors, replayers and shadowers",date:"2023-08-27T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:'It is conceivable that developers constantly update their Cadence workflow code based upon new business use cases and needs. However,\nthe definition of a Cadence workflow must be deterministic because behind the scenes Cadence uses event sourcing to construct\nthe workflow state by replaying the historical events stored for this specific workflow. Introducing components that are not compatible\nwith an existing running workflow will lead to non-deterministic errors, which developers sometimes find tricky to debug. Consider the\nfollowing workflow that executes two activities.\n\nfunc SampleWorkflow(ctx workflow.Context, data string) (string, error) {\n ao := workflow.ActivityOptions{\n ScheduleToStartTimeout: time.Minute,\n StartToCloseTimeout: time.Minute,\n }\n ctx = workflow.WithActivityOptions(ctx, ao)\n var result1 string\n err := workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)\n if err != nil {\n return "", err\n }\n v ...',layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-08-28-nondeterministic-errors-replayers-shadowers.html",relativePath:"_posts/2023-08-28-nondeterministic-errors-replayers-shadowers.md",key:"v-54c8d717",path:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/",summary:'It is conceivable that developers constantly update their Cadence workflow code based upon new business use cases and needs. However,\nthe definition of a Cadence workflow must be deterministic because behind the scenes Cadence uses event sourcing to construct\nthe workflow state by replaying the historical events stored for this specific workflow. 
Introducing components that are not compatible\nwith an existing running workflow will lead to non-deterministic errors, which developers sometimes find tricky to debug. Consider the\nfollowing workflow that executes two activities.\n\nfunc SampleWorkflow(ctx workflow.Context, data string) (string, error) {\n ao := workflow.ActivityOptions{\n ScheduleToStartTimeout: time.Minute,\n StartToCloseTimeout: time.Minute,\n }\n ctx = workflow.WithActivityOptions(ctx, ao)\n var result1 string\n err := workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)\n if err != nil {\n return "", err\n }\n v ...',id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - November 2023",frontmatter:{title:"Cadence Community Spotlight Update - November 2023",date:"2023-11-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nIt's been a couple of months since our last update so we have a lot of updates to share with you.\n\nPlease see below for a roundup of the highlights:\n\nProposal for Cadence Native Authentication\n\nCommunity member Mantas Sidlauskas has drafted a proposal around Cadence native authentication and is asking for community feedback. If you are interested in reviewing the current proposal and providing comments or feedback then please find the proposal details at the link below:\n\nCadence Native Authentication Proposal\n\n This is a great example of how we can focus on collaborating to find a collective solution. A big thank you to Mantas for initiating this work and we hope to see the result ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-11-30-community-spotlight-update-november-2023.html",relativePath:"_posts/2023-11-30-community-spotlight-update-november-2023.md",key:"v-6e3f5451",path:"/blog/2023/11/30/community-spotlight-update-november-2023/",headers:[{level:2,title:"Proposal for Cadence Native Authentication",slug:"proposal-for-cadence-native-authentication"},{level:2,title:"iWF Deep Dive and More!",slug:"iwf-deep-dive-and-more"},{level:2,title:"New Go Samples for Cadence",slug:"new-go-samples-for-cadence"},{level:2,title:"Cadence Retrospective",slug:"cadence-retrospective"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nIt's been a couple of months since our last update so we have a lot of updates to share with you.\n\nPlease see below for a roundup of the highlights:\n\nProposal for Cadence Native Authentication\n\nCommunity member Mantas Sidlauskas has drafted a proposal around Cadence native authentication and is asking for community feedback. If you are interested in reviewing the current proposal and providing comments or feedback then please find the proposal details at the link below:\n\nCadence Native Authentication Proposal\n\n This is a great example of how we can focus on collaborating to find a collective solution. 
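The replayer named in the title of the "Non-deterministic errors, replayers and shadowers" entry above can catch such incompatibilities in a unit test before deployment. A minimal sketch, assuming the SampleWorkflow from the excerpt, the go.uber.org/cadence/worker and go.uber.org/zap packages, and a hypothetical history.json file containing the exported event history of a previous run (for example exported with the Cadence CLI):

	func TestSampleWorkflowReplay(t *testing.T) {
		logger, _ := zap.NewDevelopment()
		replayer := worker.NewWorkflowReplayer()
		replayer.RegisterWorkflow(SampleWorkflow)
		// Replays the recorded events against the current code; an incompatible
		// change (e.g. reordering or removing an ExecuteActivity call) fails the test.
		if err := replayer.ReplayWorkflowHistoryFromJSONFile(logger, "history.json"); err != nil {
			t.Fatal(err)
		}
	}

Running a replay test like this in CI gives early warning that a code change would break workflows that are already in flight.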
A big thank you to Mantas for initiating this work and we hope to see the result ...",id:"post",pid:"post"},{title:"Bad practices and Anti-patterns with Cadence (Part 1)",frontmatter:{title:"Bad practices and Anti-patterns with Cadence (Part 1)",date:"2023-07-10T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:'In the upcoming blog series, we will delve into a discussion about common bad practices and anti-patterns related to Cadence. As diverse teams often encounter distinct business use cases, it becomes imperative to address the most frequently reported issues in Cadence workflows. To provide valuable insights and guidance, the Cadence team has meticulously compiled these common challenges based on customer feedback.\n\nReusing the same workflow ID for very active/continuous running workflows\n\nCadence organizes workflows based on their unique IDs, using a process called partitioning. If a workflow receives a large number of updates in a short period of time or frequently starts new runs using the continueAsNew function, all these updates will be directed to the same shard. Unfortunately, the Cadence backend is not equipped to handle this concentrated workload efficiently. As a result, a situation known as a "hot shard" arises, overloading the Cadence backend and worsening the prob ...',layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-07-10-cadence-bad-practices-part-1.html",relativePath:"_posts/2023-07-10-cadence-bad-practices-part-1.md",key:"v-32adf8e6",path:"/blog/2023/07/10/cadence-bad-practices-part-1/",summary:'In the upcoming blog series, we will delve into a discussion about common bad practices and anti-patterns related to Cadence. As diverse teams often encounter distinct business use cases, it becomes imperative to address the most frequently reported issues in Cadence workflows. To provide valuable insights and guidance, the Cadence team has meticulously compiled these common challenges based on customer feedback.\n\nReusing the same workflow ID for very active/continuous running workflows\n\nCadence organizes workflows based on their unique IDs, using a process called partitioning. If a workflow receives a large number of updates in a short period of time or frequently starts new runs using the continueAsNew function, all these updates will be directed to the same shard. Unfortunately, the Cadence backend is not equipped to handle this concentrated workload efficiently. As a result, a situation known as a "hot shard" arises, overloading the Cadence backend and worsening the prob ...',id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - August 2023",frontmatter:{title:"Cadence Community Spotlight Update - August 2023",date:"2023-08-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nMore Cadence How To's\n\nYou might have noticed that we have had a few more contributions to our blog from Chris Qin. Chris has been busy sharing insights and tips on a few important Cadence topics. 
The objective is to help the community with any potential problems.\n\nHere are the latest topics:\n\nBad Practices and Anti-Patterns with Cadence - Part 1\n\nNon-Deterministic Errors, Replayers and Shadowers\n\nEven if you have not encountered these use cases - it is good to be prepared and have a solution ready. Please take a look and let us have your feedback.\n\nChris is also going to take a look at ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-08-31-community-spotlight-august-2023.html",relativePath:"_posts/2023-08-31-community-spotlight-august-2023.md",key:"v-0b00b852",path:"/blog/2023/08/31/community-spotlight-august-2023/",headers:[{level:2,title:"More Cadence How To's",slug:"more-cadence-how-to-s"},{level:2,title:"More iWF Examples",slug:"more-iwf-examaples"},{level:2,title:"Cadence At the Helm!",slug:"cadence-at-the-helm"},{level:2,title:"Community Support!",slug:"community-support"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nMore Cadence How To's\n\nYou might have noticed that we have had a few more contributions to our blog from Chris Qin. Chris has been busy sharing insights and tips on a few important Cadence topics. The objective is to help the community with any potential problems.\n\nHere are the latest topics:\n\nBad Practices and Anti-Patterns with Cadence - Part 1\n\nNon-Deterministic Errors, Replayers and Shadowers\n\nEven if you have not encountered these use cases - it is good to be prepared and have a solution ready. Please take a look and let us have your feedback.\n\nChris is also going to take a look at ...",id:"post",pid:"post"},{title:"Cadence non-deterministic errors common question Q&A (part 1)",frontmatter:{title:"Cadence non-deterministic errors common question Q&A (part 1)",date:"2024-03-10T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:"\n\nNO. This change will not trigger a non-deterministic error.\n\nAn Activity is the smallest unit of execution for Cadence and what happens inside activities is not recorded as historical events and therefore will not be replayed. In short, this change is deterministic and it is fine to modify logic inside activities.\n\nDoes changing the workflow definition trigger non-deterministic errors?\n\nYES. This is a very typical non-deterministic error.\n\nWhen a new workflow code change is deployed, Cadence will check whether it is compatible with\nthe existing history. 
Changes to the workflow definition will fail the replay process of Cadence,\nas it finds the new workflow definition incompatible with previous historical events.\n\nHere is a list of common workflow definition changes:\nChanging workflow parameter counts\nChanging workflow parameter types\nChanging workflow return types\n\nThe following changes are not categorized as definition changes and therefore will not\ntrigger non-deterministic e ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2024-02-15-cadence-non-deterministic-common-qa.html",relativePath:"_posts/2024-02-15-cadence-non-deterministic-common-qa.md",key:"v-39909852",path:"/blog/2024/03/10/cadence-non-deterministic-common-qa/",headers:[{level:3,title:"If I change code logic inside a Cadence activity (for example, my activity is calling database A but now I want it to call database B), will it trigger a non-deterministic error?",slug:"if-i-change-code-logic-inside-an-cadence-activity-for-example-my-activity-is-calling-database-a-but-now-i-want-it-to-call-database-b-will-it-trigger-an-non-deterministic-error"},{level:3,title:"Does changing the workflow definition trigger non-deterministic errors?",slug:"does-changing-the-workflow-definition-trigger-non-determinstic-errors"},{level:3,title:"Does changing activity definitions trigger non-deterministic errors?",slug:"does-changing-activity-definitions-trigger-non-determinstic-errors"},{level:3,title:"What changes inside workflows may potentially trigger non-deterministic errors?",slug:"what-changes-inside-workflows-may-potentially-trigger-non-deterministic-errors"},{level:3,title:"Are Cadence signals replayed? If the definition of a signal is changed, will it trigger non-deterministic errors?",slug:"are-cadence-signals-replayed-if-definition-of-signal-is-changed-will-it-trigger-non-deterministic-errors"},{level:3,title:"If I have a new business requirement and really need to change the definition of a workflow, what should I do?",slug:"if-i-have-new-business-requirement-and-really-need-to-change-the-definition-of-a-workflow-what-should-i-do"},{level:3,title:"Do changes to local activities' definitions trigger non-deterministic errors?",slug:"does-changes-to-local-activities-definition-trigger-non-deterministic-errors"}],summary:"\n\nNO. This change will not trigger a non-deterministic error.\n\nAn Activity is the smallest unit of execution for Cadence and what happens inside activities is not recorded as historical events and therefore will not be replayed. In short, this change is deterministic and it is fine to modify logic inside activities.\n\nDoes changing the workflow definition trigger non-deterministic errors?\n\nYES. This is a very typical non-deterministic error.\n\nWhen a new workflow code change is deployed, Cadence will check whether it is compatible with\nthe existing history. 
Changes to the workflow definition will fail the replay process of Cadence,\nas it finds the new workflow definition incompatible with previous historical events.\n\nHere is a list of common workflow definition changes:\nChanging workflow parameter counts\nChanging workflow parameter types\nChanging workflow return types\n\nThe following changes are not categorized as definition changes and therefore will not\ntrigger non-deterministic e ...",id:"post",pid:"post"},{title:"2024 Cadence Yearly Roadmap Update",frontmatter:{title:"2024 Cadence Yearly Roadmap Update",date:"2024-07-11T00:00:00.000Z",author:"Ender Demirkaya",authorlink:"https://www.linkedin.com/in/enderdemirkaya/",description:"\n\nIf you haven’t heard about Cadence, this section is for you. In a short description, Cadence is a code-driven workflow orchestration engine. The definition itself may not tell enough, so it helps to split it into three parts:\n\nWhat’s a workflow? (everyone has a different definition)\nWhy does it matter to be code-driven?\nBenefits of Cadence\n\nWhat is a Workflow?\n\nworkflow.png\n\nIn the simplest definition, it is “a multi-step execution”. Step here represents individual operations that are a little heavier than small in-process function calls. Although they are not limited to those: it could be a separate service call, processing a large dataset, map-reduce, thread sleep, scheduling next run, waiting for an external input, starting a sub workflow etc. It’s anything a user thinks of as a single unit of logic in their code. Those steps often have dependencies among themselves. Some steps, including the very first step, might ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2024-07-11-yearly-roadmap-update.html",relativePath:"_posts/2024-07-11-yearly-roadmap-update.md",key:"v-44d49837",path:"/blog/2024/07/11/yearly-roadmap-update/",headers:[{level:2,title:"Introduction",slug:"introduction"},{level:3,title:"What is a Workflow?",slug:"what-is-a-workflow"},{level:3,title:"Code-Driven Workflows",slug:"code-driven-workflows"},{level:3,title:"Benefits",slug:"benefits"},{level:2,title:"Project Support",slug:"project-support"},{level:3,title:"Team",slug:"team"},{level:3,title:"Community",slug:"community"},{level:3,title:"Scale",slug:"scale"},{level:3,title:"Managed Solutions",slug:"managed-solutions"},{level:2,title:"After V1 Release",slug:"after-v1-release"},{level:3,title:"Frequent Releases",slug:"frequent-releases"},{level:3,title:"Zonal Isolation",slug:"zonal-isolation"},{level:3,title:"Narrowing Blast Radius",slug:"narrowing-blast-radius"},{level:3,title:"Async APIs",slug:"async-apis"},{level:3,title:"Pinot as Visibility Store",slug:"pinot-as-visibility-store"},{level:3,title:"Code Coverage",slug:"code-coverage"},{level:3,title:"Replayer Improvements",slug:"replayer-improvements"},{level:3,title:"Global Rate Limiters",slug:"global-rate-limiters"},{level:3,title:"Regular Failover Drills",slug:"regular-failover-drills"},{level:3,title:"Cadence Web v4",slug:"cadence-web-v4"},{level:3,title:"Code Review Time Non-determinism Checks",slug:"code-review-time-non-determinism-checks"},{level:3,title:"Domain Reports",slug:"domain-reports"},{level:3,title:"Client Based Migrations",slug:"client-based-migrations"},{level:2,title:"Roadmap (Next Year)",slug:"roadmap-next-year"},{level:3,title:"Database efficiency",slug:"database-efficiency"},{level:3,title:"Helm Charts",slug:"helm-charts"},{level:3,title:"Dashboard Templates",slug:"dashboard-templates"},{level:3,title:"Client V2 
Modernization",slug:"client-v2-modernization"},{level:3,title:"Higher Parallelization and Prioritization in Task Processing",slug:"higher-parallelization-and-prioritization-in-task-processing"},{level:3,title:"Timer and Cron Burst Handling",slug:"timer-and-cron-burst-handling"},{level:3,title:"High zonal skew handling",slug:"high-zonal-skew-handling"},{level:3,title:"Tasklist Improvements",slug:"tasklist-improvements"},{level:3,title:"Shard Movement/Assignment Improvements",slug:"shard-movement-assignment-improvements"},{level:3,title:"Worker Heartbeats",slug:"worker-heartbeats"},{level:3,title:"Domain and Workflow Diagnostics",slug:"domain-and-workflow-diagnostics"},{level:3,title:"Self Serve Operations",slug:"self-serve-operations"},{level:3,title:"Cost Estimation",slug:"cost-estimation"},{level:3,title:"Domain Reports (continue)",slug:"domain-reports-continue"},{level:3,title:"Non-determinism Detection Improvements (continue)",slug:"non-determinism-detection-improvements-continue"},{level:3,title:"Domain Migrations (continue)",slug:"domain-migrations-continue"},{level:2,title:"Community",slug:"community-2"}],summary:"\n\nIf you haven’t heard about Cadence, this section is for you. In a short description, Cadence is a code-driven workflow orchestration engine. The definition itself may not tell enough, so it would help splitting it into three parts:\n\nWhat’s a workflow? (everyone has a different definition)\nWhy does it matter to be code-driven?\nBenefits of Cadence\n\nWhat is a Workflow?\n\nworkflow.png\n\nIn the simplest definition, it is “a multi-step execution”. Step here represents individual operations that are a little heavier than small in-process function calls. Although they are not limited to those: it could be a separate service call, processing a large dataset, map-reduce, thread sleep, scheduling next run, waiting for an external input, starting a sub workflow etc. It’s anything a user thinks as a single unit of logic in their code. Those steps often have dependencies among themselves. Some steps, including the very first step, might ...",id:"post",pid:"post"},{title:"Minimizing blast radius in Cadence: Introducing Workflow ID-based Rate Limits",frontmatter:{title:"Minimizing blast radius in Cadence: Introducing Workflow ID-based Rate Limits",subtitle:"test",date:"2024-09-05T00:00:00.000Z",author:"Jakob Haahr Taankvist",authorlink:"https://www.linkedin.com/in/jakob-taankvist/",description:"At Uber, we run several big multitenant Cadence clusters with hundreds of domains in each. The clusters being multi-tenant means potential noisy neighbor effects between domains.\n\nAn essential aspect of avoiding this is managing how workflows interact with our infrastructure to prevent any single workflow from causing instability for the whole cluster. To this end, we are excited to introduce Workflow ID-based rate limits — a new feature designed to protect our clusters from problematic workflows and ensure stability across the board.\n\nWhy Workflow ID-based Rate Limits?\nWe already have rate limits for how many requests can be sent to a domain. However, since Cadence is sharded on the workflow ID, a user-provided input, an overused workflow with a particular id might overwhelm a shard by making too many requests. 
There are two main ways this happens:\n\nA user starts, or signals the ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2024-09-05-workflow-specific-rate-limits.html",relativePath:"_posts/2024-09-05-workflow-specific-rate-limits.md",key:"v-15401a12",path:"/blog/2024/09/05/workflow-specific-rate-limits/",headers:[{level:2,title:"Why Workflow ID-based Rate Limits?",slug:"why-workflow-id-based-rate-limits"},{level:2,title:"Why not Shard Rate Limits?",slug:"why-not-shard-rate-limits"},{level:2,title:"How Does It Work?",slug:"how-does-it-work"},{level:3,title:"How do I Enable It?",slug:"how-do-i-enable-it"},{level:2,title:"Monitoring and Troubleshooting",slug:"monitoring-and-troubleshooting"},{level:2,title:"Conclusion",slug:"conclusion"}],summary:"At Uber, we run several big multitenant Cadence clusters with hundreds of domains in each. The clusters being multi-tenant means potential noisy neighbor effects between domains.\n\nAn essential aspect of avoiding this is managing how workflows interact with our infrastructure to prevent any single workflow from causing instability for the whole cluster. To this end, we are excited to introduce Workflow ID-based rate limits — a new feature designed to protect our clusters from problematic workflows and ensure stability across the board.\n\nWhy Workflow ID-based Rate Limits?\nWe already have rate limits for how many requests can be sent to a domain. However, since Cadence is sharded on the workflow ID, a user-provided input, an overused workflow with a particular id might overwhelm a shard by making too many requests. There are two main ways this happens:\n\nA user starts, or signals the ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - March 2024",frontmatter:{title:"Cadence Community Spotlight Update - March 2024",date:"2023-03-11T00:00:00.000Z",author:"Kevin Corbett",authorlink:"https://github.com/kcorbett-netapp",description:"Welcome back to the latest in our regular Cadence community spotlight updates where we aim to deliver you news from in and around the Cadence community!\nIt’s been a few months since our last update, so I have a bunch of exciting updates to share.\n\nLet’s get started!\n\nProposal for Cadence Plugin System\nCommunity member Mantas Sidlauskas drafted a thorough proposal around putting together a plugin system in Cadence. Aimed at enhancing the flexibility of integrating various components like storage, document search, and archival, this system encourages the use of external plugins, promoting innovation and reducing dependency complications. 
Your insights and feedback are crucial; learn more and contribute your thoughts at the link below:\n\nCadence Plugin System Proposal\n\nA huge thank you to Mantas for i ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2024-3-11-community-spotlight-update-march-2024.html",relativePath:"_posts/2024-3-11-community-spotlight-update-march-2024.md",key:"v-480f0a7a",path:"/blog/2023/03/11/community-spotlight-update-march-2024/",headers:[{level:2,title:"Proposal for Cadence Plugin System",slug:"proposal-for-cadence-plugin-system"},{level:2,title:"Admin API Permissions Rethinking",slug:"admin-api-permissions-rethinking"},{level:2,title:"New Java Samples for Cadence: Signal Workflow Interactions",slug:"new-java-samples-for-cadence-signal-workflow-interactions"},{level:2,title:"New GoLang client & Cadence Web Enhancements",slug:"new-golang-client-cadence-web-enhancements"},{level:2,title:"Release Updates: v1.2.6 & v1.2.7",slug:"release-updates-v1-2-6-v1-2-7"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Recent Events",slug:"recent-events"}],summary:"Welcome back to the latest in our regular Cadence community spotlight updates where we aim to deliver you news from in and around the Cadence community!\nIt’s been a few months since our last update, so I have a bunch of exciting updates to share.\n\nLet’s get started!\n\nProposal for Cadence Plugin System\nCommunity member Mantas Sidlauskas drafted a thorough proposal around putting together a plugin system in Cadence. Aimed at enhancing the flexibility of integrating various components like storage, document search, and archival, this system encourages the use of external plugins, promoting innovation and reducing dependency complications. Your insights and feedback are crucial; learn more and contribute your thoughts at the link below:\n\nCadence Plugin System Proposal\n\nA huge thank you to Mantas for i ...",id:"post",pid:"post"},{frontmatter:{layout:"Layout",title:"Post"},regularPath:"/blog/",key:"v-424df898",path:"/blog/"},{frontmatter:{layout:"FrontmatterKey",title:"Tag"},regularPath:"/tag/",key:"v-b1564aac",path:"/tag/"},{frontmatter:{layout:"Layout",title:"Page 2 | Post"},regularPath:"/blog/page/2/",key:"v-c3507bb6",path:"/blog/page/2/"},{frontmatter:{layout:"Layout",title:"Page 3 | Post"},regularPath:"/blog/page/3/",key:"v-c3507b78",path:"/blog/page/3/"},{frontmatter:{layout:"Layout",title:"Page 4 | Post"},regularPath:"/blog/page/4/",key:"v-c3507b3a",path:"/blog/page/4/"},{frontmatter:{layout:"Layout",title:"Page 5 | Post"},regularPath:"/blog/page/5/",key:"v-c3507afc",path:"/blog/page/5/"},{frontmatter:{layout:"Layout",title:"Page 6 | Post"},regularPath:"/blog/page/6/",key:"v-c3507abe",path:"/blog/page/6/"},{frontmatter:{layout:"Layout",title:"Page 7 | Post"},regularPath:"/blog/page/7/",key:"v-c3507a80",path:"/blog/page/7/"}],themeConfig:{logo:"/img/logo-white.svg",nav:[{text:"Docs",items:[{text:"Get Started",link:"/docs/get-started/"},{text:"Use cases",link:"/docs/use-cases/"},{text:"Concepts",link:"/docs/concepts/"},{text:"Java client",link:"/docs/java-client/"},{text:"Go client",link:"/docs/go-client/"},{text:"Command line interface",link:"/docs/cli/"},{text:"Operation Guide",link:"/docs/operation-guide/"},{text:"Glossary",link:"/GLOSSARY"},{text:"About",link:"/docs/about/"}]},{text:"Blog",link:"/blog/"},{text:"Client",items:[{text:"Java Docs",link:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client"},{text:"Java 
Client",link:"https://mvnrepository.com/artifact/com.uber.cadence/cadence-client"},{text:"Go Docs",link:"https://godoc.org/go.uber.org/cadence"},{text:"Go Client",link:"https://github.com/uber-go/cadence-client/releases/latest"}]},{text:"Community",items:[{text:"Github Discussion",link:"https://github.com/uber/cadence/discussions"},{text:"StackOverflow",link:"https://stackoverflow.com/questions/tagged/cadence-workflow"},{text:"Github Issues",link:"https://github.com/uber/cadence/issues"},{text:"Slack",link:"http://t.uber.com/cadence-slack"},{text:"Office Hours Calendar",link:"https://calendar.google.com/event?action=TEMPLATE&tmeid=MjFwOW01NWhlZ3MyZWJkcmo2djVsMjNkNzNfMjAyMjA3MjVUMTYwMDAwWiBlNnI0MGdwM2MycjAxMDU0aWQ3ZTk5ZGxhY0Bn&tmsrc=e6r40gp3c2r01054id7e99dlac%40group.calendar.google.com&scp=ALL"}]},{text:"GitHub",items:[{text:"Cadence Service and CLI",link:"https://github.com/uber/cadence"},{text:"Cadence Go Client",link:"https://github.com/uber-go/cadence-client"},{text:"Cadence Go Client Samples",link:"https://github.com/uber-common/cadence-samples"},{text:"Cadence Java Client",link:"https://github.com/uber-java/cadence-client"},{text:"Cadence Java Client Samples",link:"https://github.com/uber/cadence-java-samples"},{text:"Cadence Web UI",link:"https://github.com/uber/cadence-web"},{text:"Cadence Docs",link:"https://github.com/uber/cadence-docs"}]},{text:"Docker",items:[{text:"Cadence Service",link:"https://hub.docker.com/r/ubercadence/server/tags"},{text:"Cadence CLI",link:"https://hub.docker.com/r/ubercadence/cli/tags"},{text:"Cadence Web UI",link:"https://hub.docker.com/r/ubercadence/web/tags"}]}],directories:[{dirname:"_posts",id:"post",itemPermalink:"/blog/:year/:month/:day/:slug",path:"/blog/"}],feed:{canonical_base:"/",count:5,json:!0},footer:{copyright:[{text:"© 2024 Uber Technologies, Inc."}]},summaryLength:1e3,summary:!0,pwa:!1}};n(273);o.a.component("BaseListLayout",()=>Promise.all([n.e(0),n.e(2)]).then(n.bind(null,356))),o.a.component("BlogTag",()=>Promise.all([n.e(0),n.e(6)]).then(n.bind(null,357))),o.a.component("BlogTags",()=>Promise.all([n.e(0),n.e(7)]).then(n.bind(null,358))),o.a.component("NavLink",()=>Promise.all([n.e(0),n.e(5)]).then(n.bind(null,359)));n(274),n(33);var wt={tag:{}};class kt{constructor(e,t){this._metaMap=Object.assign({},e),Object.keys(this._metaMap).forEach(e=>{const{pageKeys:n}=this._metaMap[e];this._metaMap[e].pages=n.map(e=>Object(Ye.b)(t,e))})}get length(){return Object.keys(this._metaMap).length}get map(){return this._metaMap}get pages(){return this.list}get list(){return this.toArray()}toArray(){const e=[];return Object.keys(this._metaMap).forEach(t=>{const{pages:n,path:o}=this._metaMap[t];e.push({name:t,pages:n,path:o})}),e}getItemByName(e){return this._metaMap[e]}}var Ct=[{pid:"post",id:"post",filter:function(e,t,n){return e.pid===n&&e.id===t},sorter:{post:(e,t)=>{const o=n(119);return o(e.frontmatter.date)-o(t.frontmatter.date)>0?-1:1}}.post,pages:[{path:"/blog/",interval:[0,4]},{path:"/blog/page/2/",interval:[5,9]},{path:"/blog/page/3/",interval:[10,14]},{path:"/blog/page/4/",interval:[15,19]},{path:"/blog/page/5/",interval:[20,24]},{path:"/blog/page/6/",interval:[25,29]},{path:"/blog/page/7/",interval:[30,32]}],prevText:"Prev",nextText:"Next"}],_t=n(63);const xt=n.n(_t)()("plugin-blog:pagination");class St{constructor(e,t,n){xt("pagination",e);const{pages:o,prevText:r,nextText:i}=e,{path:a}=n;this._prevText=r,this._nextText=i;for(let e=0,t=o.length;ee.filter(t,e.id,e.pid)).sort(e.sorter)}setIndexPage(e){this._indexPage=e}get length(){return 
this._paginationPages.length}get pages(){const[e,t]=this._currentPage.interval;return this._matchedPages.slice(e,t+1)}get hasPrev(){return 0!==this.paginationIndex}get prevLink(){return this.hasPrev?this.paginationIndex-1==0&&this._indexPage?this._indexPage:this._paginationPages[this.paginationIndex-1].path:null}get hasNext(){return this.paginationIndex!==this.length-1}get nextLink(){return this.hasNext?this._paginationPages[this.paginationIndex+1].path:null}get prevText(){return this._prevText}get nextText(){return this._nextText}getSpecificPageLink(e){return this._paginationPages[e].path}}const Ot=new class{constructor(e){this.paginations=e}get pages(){return o.a.$vuepress.$get("siteData").pages}getPagination(e,t,n){xt("id",t),xt("pid",e);const o=this.paginations.filter(n=>n.id===t&&n.pid===e)[0];return new St(o,this.pages,n)}}(Ct);var jt={comment:{enabled:!1,service:""},email:{enabled:!1},feed:{rss:!0,atom:!1,json:!0}},Pt=[({Vue:e,options:t,router:n,siteData:o})=>{n.beforeResolve((e,t,n)=>{const o="undefined"!=typeof window?window:null;!o||"/"===t.path||e.path.startsWith("/blog")?n():o.location.href=e.fullPath})},{},({Vue:e})=>{e.mixin({computed:{$dataBlock(){return this.$options.__data__block__}}})},{},{},({Vue:e})=>{const t=Object.keys(wt).map(e=>{const t=wt[e],n="$"+e;return{[n](){const{pages:e}=this.$site;return new kt(t,e)},["$current"+(e.charAt(0).toUpperCase()+e.slice(1))](){const e=this.$route.meta.id;return this[n].getItemByName(e)}}}).reduce((e,t)=>(Object.assign(e,t),e),{});t.$frontmatterKey=function(){const e=this["$"+this.$route.meta.id];return e||null},e.mixin({computed:t})},({Vue:e})=>{e.mixin({computed:{$pagination(){return this.$route.meta.pid&&this.$route.meta.id?this.$getPagination(this.$route.meta.pid,this.$route.meta.id):null}},methods:{$getPagination(e,t){return t=t||e,Ot.getPagination(e,t,this.$route)}}})},({Vue:e})=>{const t={$service:()=>jt};e.mixin({computed:t})}],$t=[];class Tt extends class{constructor(){this.store=new o.a({data:{state:{}}})}$get(e){return this.store.state[e]}$set(e,t){o.a.set(this.store.state,e,t)}$emit(...e){this.store.$emit(...e)}$on(...e){this.store.$on(...e)}}{}Object.assign(Tt.prototype,{getPageAsyncComponent:Ye.e,getLayoutAsyncComponent:Ye.d,getAsyncComponent:Ye.c,getVueComponent:Ye.f});var At={install(e){const t=new Tt;e.$vuepress=t,e.prototype.$vuepress=t}};function Et(e,t){const n=t.toLowerCase();return e.options.routes.some(e=>e.path.toLowerCase()===n)}var It={props:{pageKey:String,slotKey:{type:String,default:"default"}},render(e){const t=this.pageKey||this.$parent.$page.key;return Object(Ye.h)("pageKey",t),o.a.component(t)||o.a.component(t,Object(Ye.e)(t)),o.a.component(t)?e(t):e("")}},Lt={functional:!0,props:{slotKey:String,required:!0},render:(e,{props:t,slots:n})=>e("div",{class:["content__"+t.slotKey]},n()[t.slotKey])},Mt={computed:{openInNewWindowTitle(){return this.$themeLocaleConfig.openNewWindowText||"(opens new window)"}}},Dt=(n(277),n(278),n(4)),Nt=Object(Dt.a)(Mt,(function(){var e=this._self._c;return e("span",[e("svg",{staticClass:"icon outbound",attrs:{xmlns:"http://www.w3.org/2000/svg","aria-hidden":"true",focusable:"false",x:"0px",y:"0px",viewBox:"0 0 100 100",width:"15",height:"15"}},[e("path",{attrs:{fill:"currentColor",d:"M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"}}),this._v(" "),e("polygon",{attrs:{fill:"currentColor",points:"45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"}})]),this._v(" 
"),e("span",{staticClass:"sr-only"},[this._v(this._s(this.openInNewWindowTitle))])])}),[],!1,null,null,null).exports,Ft={functional:!0,render(e,{parent:t,children:n}){if(t._isMounted)return n;t.$once("hook:mounted",()=>{t.$forceUpdate()})}};o.a.config.productionTip=!1,o.a.use(Ve),o.a.use(At),o.a.mixin(function(e,t,n=o.a){!function(e){e.locales&&Object.keys(e.locales).forEach(t=>{e.locales[t].path=t});Object.freeze(e)}(t),n.$vuepress.$set("siteData",t);const r=new(e(n.$vuepress.$get("siteData"))),i=Object.getOwnPropertyDescriptors(Object.getPrototypeOf(r)),a={};return Object.keys(i).reduce((e,t)=>(t.startsWith("$")&&(e[t]=i[t].get),e),a),{computed:a}}(e=>class{setPage(e){this.__page=e}get $site(){return e}get $themeConfig(){return this.$site.themeConfig}get $frontmatter(){return this.$page.frontmatter}get $localeConfig(){const{locales:e={}}=this.$site;let t,n;for(const o in e)"/"===o?n=e[o]:0===this.$page.path.indexOf(o)&&(t=e[o]);return t||n||{}}get $siteTitle(){return this.$localeConfig.title||this.$site.title||""}get $canonicalUrl(){const{canonicalUrl:e}=this.$page.frontmatter;return"string"==typeof e&&e}get $title(){const e=this.$page,{metaTitle:t}=this.$page.frontmatter;if("string"==typeof t)return t;const n=this.$siteTitle,o=e.frontmatter.home?null:e.frontmatter.title||e.title;return n?o?o+" | "+n:n:o||"VuePress"}get $description(){const e=function(e){if(e){const t=e.filter(e=>"description"===e.name)[0];if(t)return t.content}}(this.$page.frontmatter.meta);return e||(this.$page.frontmatter.description||this.$localeConfig.description||this.$site.description||"")}get $lang(){return this.$page.frontmatter.lang||this.$localeConfig.lang||"en-US"}get $localePath(){return this.$localeConfig.path||"/"}get $themeLocaleConfig(){return(this.$site.themeConfig.locales||{})[this.$localePath]||{}}get $page(){return this.__page?this.__page:function(e,t){for(let n=0;nn||(e.hash?!o.a.$vuepress.$get("disableScrollBehavior")&&{selector:decodeURIComponent(e.hash)}:{x:0,y:0})});!function(e){e.beforeEach((t,n,o)=>{if(Et(e,t.path))o();else if(/(\/|\.html)$/.test(t.path))if(/\/$/.test(t.path)){const n=t.path.replace(/\/$/,"")+".html";Et(e,n)?o(n):o()}else o();else{const n=t.path+"/",r=t.path+".html";Et(e,r)?o(r):Et(e,n)?o(n):o()}})}(n);const r={};try{await Promise.all(Pt.filter(e=>"function"==typeof e).map(t=>t({Vue:o.a,options:r,router:n,siteData:bt,isServer:e})))}catch(e){console.error(e)}return{app:new o.a(Object.assign(r,{router:n,render:e=>e("div",{attrs:{id:"app"}},[e("RouterView",{ref:"layout"}),e("div",{class:"global-ui"},$t.map(t=>e(t)))])})),router:n}}(!1).then(({app:e,router:t})=>{t.onReady(()=>{e.$mount("#app")})})}]); \ No newline at end of file + */function r(e,t){for(var n in t)e[n]=t[n];return e}var i=/[!'()*]/g,a=function(e){return"%"+e.charCodeAt(0).toString(16)},s=/%2C/g,c=function(e){return encodeURIComponent(e).replace(i,a).replace(s,",")};function u(e){try{return decodeURIComponent(e)}catch(e){0}return e}var l=function(e){return null==e||"object"==typeof e?e:String(e)};function d(e){var t={};return(e=e.trim().replace(/^(\?|#|&)/,""))?(e.split("&").forEach((function(e){var n=e.replace(/\+/g," ").split("="),o=u(n.shift()),r=n.length>0?u(n.join("=")):null;void 0===t[o]?t[o]=r:Array.isArray(t[o])?t[o].push(r):t[o]=[t[o],r]})),t):t}function h(e){var t=e?Object.keys(e).map((function(t){var n=e[t];if(void 0===n)return"";if(null===n)return c(t);if(Array.isArray(n)){var o=[];return n.forEach((function(e){void 0!==e&&(null===e?o.push(c(t)):o.push(c(t)+"="+c(e)))})),o.join("&")}return 
c(t)+"="+c(n)})).filter((function(e){return e.length>0})).join("&"):null;return t?"?"+t:""}var p=/\/?$/;function f(e,t,n,o){var r=o&&o.options.stringifyQuery,i=t.query||{};try{i=m(i)}catch(e){}var a={name:t.name||e&&e.name,meta:e&&e.meta||{},path:t.path||"/",hash:t.hash||"",query:i,params:t.params||{},fullPath:y(t,r),matched:e?v(e):[]};return n&&(a.redirectedFrom=y(n,r)),Object.freeze(a)}function m(e){if(Array.isArray(e))return e.map(m);if(e&&"object"==typeof e){var t={};for(var n in e)t[n]=m(e[n]);return t}return e}var g=f(null,{path:"/"});function v(e){for(var t=[];e;)t.unshift(e),e=e.parent;return t}function y(e,t){var n=e.path,o=e.query;void 0===o&&(o={});var r=e.hash;return void 0===r&&(r=""),(n||"/")+(t||h)(o)+r}function b(e,t,n){return t===g?e===t:!!t&&(e.path&&t.path?e.path.replace(p,"")===t.path.replace(p,"")&&(n||e.hash===t.hash&&w(e.query,t.query)):!(!e.name||!t.name)&&(e.name===t.name&&(n||e.hash===t.hash&&w(e.query,t.query)&&w(e.params,t.params))))}function w(e,t){if(void 0===e&&(e={}),void 0===t&&(t={}),!e||!t)return e===t;var n=Object.keys(e).sort(),o=Object.keys(t).sort();return n.length===o.length&&n.every((function(n,r){var i=e[n];if(o[r]!==n)return!1;var a=t[n];return null==i||null==a?i===a:"object"==typeof i&&"object"==typeof a?w(i,a):String(i)===String(a)}))}function k(e){for(var t=0;t=0&&(t=e.slice(o),e=e.slice(0,o));var r=e.indexOf("?");return r>=0&&(n=e.slice(r+1),e=e.slice(0,r)),{path:e,query:n,hash:t}}(i.path||""),h=t&&t.path||"/",p=u.path?x(u.path,h,n||i.append):h,f=function(e,t,n){void 0===t&&(t={});var o,r=n||d;try{o=r(e||"")}catch(e){o={}}for(var i in t){var a=t[i];o[i]=Array.isArray(a)?a.map(l):l(a)}return o}(u.query,i.query,o&&o.options.parseQuery),m=i.hash||u.hash;return m&&"#"!==m.charAt(0)&&(m="#"+m),{_normalized:!0,path:p,query:f,hash:m}}var q,V=function(){},G={name:"RouterLink",props:{to:{type:[String,Object],required:!0},tag:{type:String,default:"a"},custom:Boolean,exact:Boolean,exactPath:Boolean,append:Boolean,replace:Boolean,activeClass:String,exactActiveClass:String,ariaCurrentValue:{type:String,default:"page"},event:{type:[String,Array],default:"click"}},render:function(e){var t=this,n=this.$router,o=this.$route,i=n.resolve(this.to,o,this.append),a=i.location,s=i.route,c=i.href,u={},l=n.options.linkActiveClass,d=n.options.linkExactActiveClass,h=null==l?"router-link-active":l,m=null==d?"router-link-exact-active":d,g=null==this.activeClass?h:this.activeClass,v=null==this.exactActiveClass?m:this.exactActiveClass,y=s.redirectedFrom?f(null,B(s.redirectedFrom),null,n):s;u[v]=b(o,y,this.exactPath),u[g]=this.exact||this.exactPath?u[v]:function(e,t){return 0===e.path.replace(p,"/").indexOf(t.path.replace(p,"/"))&&(!t.hash||e.hash===t.hash)&&function(e,t){for(var n in t)if(!(n in e))return!1;return!0}(e.query,t.query)}(o,y);var w=u[v]?this.ariaCurrentValue:null,k=function(e){Y(e)&&(t.replace?n.replace(a,V):n.push(a,V))},C={click:Y};Array.isArray(this.event)?this.event.forEach((function(e){C[e]=k})):C[this.event]=k;var _={class:u},x=!this.$scopedSlots.$hasNormal&&this.$scopedSlots.default&&this.$scopedSlots.default({href:c,route:s,navigate:k,isActive:u[g],isExactActive:u[v]});if(x){if(1===x.length)return x[0];if(x.length>1||!x.length)return 0===x.length?e():e("span",{},x)}if("a"===this.tag)_.on=C,_.attrs={href:c,"aria-current":w};else{var S=function e(t){var n;if(t)for(var o=0;o-1&&(s.params[h]=n.params[h]);return s.path=H(l.path,s.params),c(l,s,a)}if(s.path){s.params={};for(var p=0;p-1}function Se(e,t){return 
xe(e)&&e._isRouter&&(null==t||e.type===t)}function Oe(e,t,n){var o=function(r){r>=e.length?n():e[r]?t(e[r],(function(){o(r+1)})):o(r+1)};o(0)}function je(e){return function(t,n,o){var r=!1,i=0,a=null;Pe(e,(function(e,t,n,s){if("function"==typeof e&&void 0===e.cid){r=!0,i++;var c,u=Ae((function(t){var r;((r=t).__esModule||Te&&"Module"===r[Symbol.toStringTag])&&(t=t.default),e.resolved="function"==typeof t?t:q.extend(t),n.components[s]=t,--i<=0&&o()})),l=Ae((function(e){var t="Failed to resolve async component "+s+": "+e;a||(a=xe(e)?e:new Error(t),o(a))}));try{c=e(u,l)}catch(e){l(e)}if(c)if("function"==typeof c.then)c.then(u,l);else{var d=c.component;d&&"function"==typeof d.then&&d.then(u,l)}}})),r||o()}}function Pe(e,t){return $e(e.map((function(e){return Object.keys(e.components).map((function(n){return t(e.components[n],e.instances[n],e,n)}))})))}function $e(e){return Array.prototype.concat.apply([],e)}var Te="function"==typeof Symbol&&"symbol"==typeof Symbol.toStringTag;function Ae(e){var t=!1;return function(){for(var n=[],o=arguments.length;o--;)n[o]=arguments[o];if(!t)return t=!0,e.apply(this,n)}}var Ee=function(e,t){this.router=e,this.base=function(e){if(!e)if(Z){var t=document.querySelector("base");e=(e=t&&t.getAttribute("href")||"/").replace(/^https?:\/\/[^\/]+/,"")}else e="/";"/"!==e.charAt(0)&&(e="/"+e);return e.replace(/\/$/,"")}(t),this.current=g,this.pending=null,this.ready=!1,this.readyCbs=[],this.readyErrorCbs=[],this.errorCbs=[],this.listeners=[]};function Ie(e,t,n,o){var r=Pe(e,(function(e,o,r,i){var a=function(e,t){"function"!=typeof e&&(e=q.extend(e));return e.options[t]}(e,t);if(a)return Array.isArray(a)?a.map((function(e){return n(e,o,r,i)})):n(a,o,r,i)}));return $e(o?r.reverse():r)}function Le(e,t){if(t)return function(){return e.apply(t,arguments)}}Ee.prototype.listen=function(e){this.cb=e},Ee.prototype.onReady=function(e,t){this.ready?e():(this.readyCbs.push(e),t&&this.readyErrorCbs.push(t))},Ee.prototype.onError=function(e){this.errorCbs.push(e)},Ee.prototype.transitionTo=function(e,t,n){var o,r=this;try{o=this.router.match(e,this.current)}catch(e){throw this.errorCbs.forEach((function(t){t(e)})),e}var i=this.current;this.confirmTransition(o,(function(){r.updateRoute(o),t&&t(o),r.ensureURL(),r.router.afterHooks.forEach((function(e){e&&e(o,i)})),r.ready||(r.ready=!0,r.readyCbs.forEach((function(e){e(o)})))}),(function(e){n&&n(e),e&&!r.ready&&(Se(e,be.redirected)&&i===g||(r.ready=!0,r.readyErrorCbs.forEach((function(t){t(e)}))))}))},Ee.prototype.confirmTransition=function(e,t,n){var o=this,r=this.current;this.pending=e;var i,a,s=function(e){!Se(e)&&xe(e)&&(o.errorCbs.length?o.errorCbs.forEach((function(t){t(e)})):console.error(e)),n&&n(e)},c=e.matched.length-1,u=r.matched.length-1;if(b(e,r)&&c===u&&e.matched[c]===r.matched[u])return this.ensureURL(),e.hash&&se(this.router,r,e,!1),s(((a=Ce(i=r,e,be.duplicated,'Avoided redundant navigation to current location: "'+i.fullPath+'".')).name="NavigationDuplicated",a));var l=function(e,t){var n,o=Math.max(e.length,t.length);for(n=0;n0)){var t=this.router,n=t.options.scrollBehavior,o=ge&&n;o&&this.listeners.push(ae());var r=function(){var n=e.current,r=De(e.base);e.current===g&&r===e._startLocation||e.transitionTo(r,(function(e){o&&se(t,e,n,!0)}))};window.addEventListener("popstate",r),this.listeners.push((function(){window.removeEventListener("popstate",r)}))}},t.prototype.go=function(e){window.history.go(e)},t.prototype.push=function(e,t,n){var 
o=this,r=this.current;this.transitionTo(e,(function(e){ve(S(o.base+e.fullPath)),se(o.router,e,r,!1),t&&t(e)}),n)},t.prototype.replace=function(e,t,n){var o=this,r=this.current;this.transitionTo(e,(function(e){ye(S(o.base+e.fullPath)),se(o.router,e,r,!1),t&&t(e)}),n)},t.prototype.ensureURL=function(e){if(De(this.base)!==this.current.fullPath){var t=S(this.base+this.current.fullPath);e?ve(t):ye(t)}},t.prototype.getCurrentLocation=function(){return De(this.base)},t}(Ee);function De(e){var t=window.location.pathname,n=t.toLowerCase(),o=e.toLowerCase();return!e||n!==o&&0!==n.indexOf(S(o+"/"))||(t=t.slice(e.length)),(t||"/")+window.location.search+window.location.hash}var Ne=function(e){function t(t,n,o){e.call(this,t,n),o&&function(e){var t=De(e);if(!/^\/#/.test(t))return window.location.replace(S(e+"/#"+t)),!0}(this.base)||Fe()}return e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t,t.prototype.setupListeners=function(){var e=this;if(!(this.listeners.length>0)){var t=this.router.options.scrollBehavior,n=ge&&t;n&&this.listeners.push(ae());var o=function(){var t=e.current;Fe()&&e.transitionTo(We(),(function(o){n&&se(e.router,o,t,!0),ge||ze(o.fullPath)}))},r=ge?"popstate":"hashchange";window.addEventListener(r,o),this.listeners.push((function(){window.removeEventListener(r,o)}))}},t.prototype.push=function(e,t,n){var o=this,r=this.current;this.transitionTo(e,(function(e){Ue(e.fullPath),se(o.router,e,r,!1),t&&t(e)}),n)},t.prototype.replace=function(e,t,n){var o=this,r=this.current;this.transitionTo(e,(function(e){ze(e.fullPath),se(o.router,e,r,!1),t&&t(e)}),n)},t.prototype.go=function(e){window.history.go(e)},t.prototype.ensureURL=function(e){var t=this.current.fullPath;We()!==t&&(e?Ue(t):ze(t))},t.prototype.getCurrentLocation=function(){return We()},t}(Ee);function Fe(){var e=We();return"/"===e.charAt(0)||(ze("/"+e),!1)}function We(){var e=window.location.href,t=e.indexOf("#");return t<0?"":e=e.slice(t+1)}function Re(e){var t=window.location.href,n=t.indexOf("#");return(n>=0?t.slice(0,n):t)+"#"+e}function Ue(e){ge?ve(Re(e)):window.location.hash=e}function ze(e){ge?ye(Re(e)):window.location.replace(Re(e))}var He=function(e){function t(t,n){e.call(this,t,n),this.stack=[],this.index=-1}return e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t,t.prototype.push=function(e,t,n){var o=this;this.transitionTo(e,(function(e){o.stack=o.stack.slice(0,o.index+1).concat(e),o.index++,t&&t(e)}),n)},t.prototype.replace=function(e,t,n){var o=this;this.transitionTo(e,(function(e){o.stack=o.stack.slice(0,o.index).concat(e),t&&t(e)}),n)},t.prototype.go=function(e){var t=this,n=this.index+e;if(!(n<0||n>=this.stack.length)){var o=this.stack[n];this.confirmTransition(o,(function(){var e=t.current;t.index=n,t.updateRoute(o),t.router.afterHooks.forEach((function(t){t&&t(o,e)}))}),(function(e){Se(e,be.duplicated)&&(t.index=n)}))}},t.prototype.getCurrentLocation=function(){var e=this.stack[this.stack.length-1];return e?e.fullPath:"/"},t.prototype.ensureURL=function(){},t}(Ee),Be=function(e){void 0===e&&(e={}),this.app=null,this.apps=[],this.options=e,this.beforeHooks=[],this.resolveHooks=[],this.afterHooks=[],this.matcher=Q(e.routes||[],this);var t=e.mode||"hash";switch(this.fallback="history"===t&&!ge&&!1!==e.fallback,this.fallback&&(t="hash"),Z||(t="abstract"),this.mode=t,t){case"history":this.history=new Me(this,e.base);break;case"hash":this.history=new Ne(this,e.base,this.fallback);break;case"abstract":this.history=new 
He(this,e.base);break;default:0}},qe={currentRoute:{configurable:!0}};Be.prototype.match=function(e,t,n){return this.matcher.match(e,t,n)},qe.currentRoute.get=function(){return this.history&&this.history.current},Be.prototype.init=function(e){var t=this;if(this.apps.push(e),e.$once("hook:destroyed",(function(){var n=t.apps.indexOf(e);n>-1&&t.apps.splice(n,1),t.app===e&&(t.app=t.apps[0]||null),t.app||t.history.teardown()})),!this.app){this.app=e;var n=this.history;if(n instanceof Me||n instanceof Ne){var o=function(e){n.setupListeners(),function(e){var o=n.current,r=t.options.scrollBehavior;ge&&r&&"fullPath"in e&&se(t,e,o,!1)}(e)};n.transitionTo(n.getCurrentLocation(),o,o)}n.listen((function(e){t.apps.forEach((function(t){t._route=e}))}))}},Be.prototype.beforeEach=function(e){return Ge(this.beforeHooks,e)},Be.prototype.beforeResolve=function(e){return Ge(this.resolveHooks,e)},Be.prototype.afterEach=function(e){return Ge(this.afterHooks,e)},Be.prototype.onReady=function(e,t){this.history.onReady(e,t)},Be.prototype.onError=function(e){this.history.onError(e)},Be.prototype.push=function(e,t,n){var o=this;if(!t&&!n&&"undefined"!=typeof Promise)return new Promise((function(t,n){o.history.push(e,t,n)}));this.history.push(e,t,n)},Be.prototype.replace=function(e,t,n){var o=this;if(!t&&!n&&"undefined"!=typeof Promise)return new Promise((function(t,n){o.history.replace(e,t,n)}));this.history.replace(e,t,n)},Be.prototype.go=function(e){this.history.go(e)},Be.prototype.back=function(){this.go(-1)},Be.prototype.forward=function(){this.go(1)},Be.prototype.getMatchedComponents=function(e){var t=e?e.matched?e:this.resolve(e).route:this.currentRoute;return t?[].concat.apply([],t.matched.map((function(e){return Object.keys(e.components).map((function(t){return e.components[t]}))}))):[]},Be.prototype.resolve=function(e,t,n){var o=B(e,t=t||this.history.current,n,this),r=this.match(o,t),i=r.redirectedFrom||r.fullPath;return{location:o,route:r,href:function(e,t,n){var o="hash"===n?"#"+t:t;return e?S(e+"/"+o):o}(this.history.base,i,this.mode),normalizedTo:o,resolved:r}},Be.prototype.getRoutes=function(){return this.matcher.getRoutes()},Be.prototype.addRoute=function(e,t){this.matcher.addRoute(e,t),this.history.current!==g&&this.history.transitionTo(this.history.getCurrentLocation())},Be.prototype.addRoutes=function(e){this.matcher.addRoutes(e),this.history.current!==g&&this.history.transitionTo(this.history.getCurrentLocation())},Object.defineProperties(Be.prototype,qe);var Ve=Be;function Ge(e,t){return e.push(t),function(){var n=e.indexOf(t);n>-1&&e.splice(n,1)}}Be.install=function e(t){if(!e.installed||q!==t){e.installed=!0,q=t;var n=function(e){return void 0!==e},o=function(e,t){var o=e.$options._parentVnode;n(o)&&n(o=o.data)&&n(o=o.registerRouteInstance)&&o(e,t)};t.mixin({beforeCreate:function(){n(this.$options.router)?(this._routerRoot=this,this._router=this.$options.router,this._router.init(this),t.util.defineReactive(this,"_route",this._router.history.current)):this._routerRoot=this.$parent&&this.$parent._routerRoot||this,o(this,this)},destroyed:function(){o(this)}}),Object.defineProperty(t.prototype,"$router",{get:function(){return this._routerRoot._router}}),Object.defineProperty(t.prototype,"$route",{get:function(){return this._routerRoot._route}}),t.component("RouterView",C),t.component("RouterLink",G);var 
r=t.config.optionMergeStrategies;r.beforeRouteEnter=r.beforeRouteLeave=r.beforeRouteUpdate=r.created}},Be.version="3.6.5",Be.isNavigationFailure=Se,Be.NavigationFailureType=be,Be.START_LOCATION=g,Z&&window.Vue&&window.Vue.use(Be);n(64);var Ye=n(1),Ze=n(110),Je=n.n(Ze),Ke=n(111),Qe=n.n(Ke),Xe={created(){if(this.siteMeta=this.$site.headTags.filter(([e])=>"meta"===e).map(([e,t])=>t),this.$ssrContext){const t=this.getMergedMetaTags();this.$ssrContext.title=this.$title,this.$ssrContext.lang=this.$lang,this.$ssrContext.pageMeta=(e=t)?e.map(e=>{let t="<meta";return Object.keys(e).forEach(n=>{t+=` ${n}="${Qe()(e[n])}"`}),t+">"}).join("\n "):"",this.$ssrContext.canonicalLink=tt(this.$canonicalUrl)}var e},mounted(){this.currentMetaTags=[...document.querySelectorAll("meta")],this.updateMeta(),this.updateCanonicalLink()},methods:{updateMeta(){document.title=this.$title,document.documentElement.lang=this.$lang;const e=this.getMergedMetaTags();this.currentMetaTags=nt(e,this.currentMetaTags)},getMergedMetaTags(){const e=this.$page.frontmatter.meta||[];return Je()([{name:"description",content:this.$description}],e,this.siteMeta,ot)},updateCanonicalLink(){et(),this.$canonicalUrl&&document.head.insertAdjacentHTML("beforeend",tt(this.$canonicalUrl))}},watch:{$page(){this.updateMeta(),this.updateCanonicalLink()}},beforeDestroy(){nt(null,this.currentMetaTags),et()}};function et(){const e=document.querySelector("link[rel='canonical']");e&&e.remove()}function tt(e=""){return e?`<link rel="canonical" href="${e}" />`:""}function nt(e,t){if(t&&[...t].filter(e=>e.parentNode===document.head).forEach(e=>document.head.removeChild(e)),e)return e.map(e=>{const t=document.createElement("meta");return Object.keys(e).forEach(n=>{t.setAttribute(n,e[n])}),document.head.appendChild(t),t})}function ot(e){for(const t of["name","property","itemprop"])if(e.hasOwnProperty(t))return e[t]+t;return JSON.stringify(e)}var rt=n(31),it=n.n(rt),at={mounted(){it.a.configure({showSpinner:!1}),this.$router.beforeEach((e,t,n)=>{e.path===t.path||o.a.component(e.name)||it.a.start(),n()}),this.$router.afterEach(()=>{it.a.done(),this.isSidebarOpen=!1})}},st=(n(262),Object.assign||function(e){for(var t=1;t1&&void 0!==arguments[1]?arguments[1]:{},o=window.Promise||function(e){function t(){}e(t,t)},r=function(e){var t=e.target;t!==S?-1!==b.indexOf(t)&&m({target:t}):f()},i=function(){if(!k&&x.original){var e=window.pageYOffset||document.documentElement.scrollTop||document.body.scrollTop||0;Math.abs(C-e)>_.scrollOffset&&setTimeout(f,150)}},a=function(e){var t=e.key||e.keyCode;"Escape"!==t&&"Esc"!==t&&27!==t||f()},s=function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e;if(e.background&&(S.style.background=e.background),e.container&&e.container instanceof Object&&(t.container=st({},_.container,e.container)),e.template){var n=ut(e.template)?e.template:document.querySelector(e.template);t.template=n}return _=st({},_,t),b.forEach((function(e){e.dispatchEvent(ft("medium-zoom:update",{detail:{zoom:O}}))})),O},c=function(){var t=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{};return e(st({},_,t))},u=function(){for(var e=arguments.length,t=Array(e),n=0;n0?t.reduce((function(e,t){return[].concat(e,dt(t))}),[]):b;return o.forEach((function(e){e.classList.remove("medium-zoom-image"),e.dispatchEvent(ft("medium-zoom:detach",{detail:{zoom:O}}))})),b=b.filter((function(e){return-1===o.indexOf(e)})),O},d=function(e,t){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:{};return
b.forEach((function(o){o.addEventListener("medium-zoom:"+e,t,n)})),w.push({type:"medium-zoom:"+e,listener:t,options:n}),O},h=function(e,t){var n=arguments.length>2&&void 0!==arguments[2]?arguments[2]:{};return b.forEach((function(o){o.removeEventListener("medium-zoom:"+e,t,n)})),w=w.filter((function(n){return!(n.type==="medium-zoom:"+e&&n.listener.toString()===t.toString())})),O},p=function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e.target,n=function(){var e={width:document.documentElement.clientWidth,height:document.documentElement.clientHeight,left:0,top:0,right:0,bottom:0},t=void 0,n=void 0;if(_.container)if(_.container instanceof Object)t=(e=st({},e,_.container)).width-e.left-e.right-2*_.margin,n=e.height-e.top-e.bottom-2*_.margin;else{var o=(ut(_.container)?_.container:document.querySelector(_.container)).getBoundingClientRect(),r=o.width,i=o.height,a=o.left,s=o.top;e=st({},e,{width:r,height:i,left:a,top:s})}t=t||e.width-2*_.margin,n=n||e.height-2*_.margin;var c=x.zoomedHd||x.original,u=lt(c)?t:c.naturalWidth||t,l=lt(c)?n:c.naturalHeight||n,d=c.getBoundingClientRect(),h=d.top,p=d.left,f=d.width,m=d.height,g=Math.min(Math.max(f,u),t)/f,v=Math.min(Math.max(m,l),n)/m,y=Math.min(g,v),b="scale("+y+") translate3d("+((t-f)/2-p+_.margin+e.left)/y+"px, "+((n-m)/2-h+_.margin+e.top)/y+"px, 0)";x.zoomed.style.transform=b,x.zoomedHd&&(x.zoomedHd.style.transform=b)};return new o((function(e){if(t&&-1===b.indexOf(t))e(O);else{if(x.zoomed)e(O);else{if(t)x.original=t;else{if(!(b.length>0))return void e(O);var o=b;x.original=o[0]}if(x.original.dispatchEvent(ft("medium-zoom:open",{detail:{zoom:O}})),C=window.pageYOffset||document.documentElement.scrollTop||document.body.scrollTop||0,k=!0,x.zoomed=pt(x.original),document.body.appendChild(S),_.template){var r=ut(_.template)?_.template:document.querySelector(_.template);x.template=document.createElement("div"),x.template.appendChild(r.content.cloneNode(!0)),document.body.appendChild(x.template)}if(x.original.parentElement&&"PICTURE"===x.original.parentElement.tagName&&x.original.currentSrc&&(x.zoomed.src=x.original.currentSrc),document.body.appendChild(x.zoomed),window.requestAnimationFrame((function(){document.body.classList.add("medium-zoom--opened")})),x.original.classList.add("medium-zoom-image--hidden"),x.zoomed.classList.add("medium-zoom-image--opened"),x.zoomed.addEventListener("click",f),x.zoomed.addEventListener("transitionend",(function t(){k=!1,x.zoomed.removeEventListener("transitionend",t),x.original.dispatchEvent(ft("medium-zoom:opened",{detail:{zoom:O}})),e(O)})),x.original.getAttribute("data-zoom-src")){x.zoomedHd=x.zoomed.cloneNode(),x.zoomedHd.removeAttribute("srcset"),x.zoomedHd.removeAttribute("sizes"),x.zoomedHd.removeAttribute("loading"),x.zoomedHd.src=x.zoomed.getAttribute("data-zoom-src"),x.zoomedHd.onerror=function(){clearInterval(i),console.warn("Unable to reach the zoom image target "+x.zoomedHd.src),x.zoomedHd=null,n()};var i=setInterval((function(){x.zoomedHd.complete&&(clearInterval(i),x.zoomedHd.classList.add("medium-zoom-image--opened"),x.zoomedHd.addEventListener("click",f),document.body.appendChild(x.zoomedHd),n())}),10)}else if(x.original.hasAttribute("srcset")){x.zoomedHd=x.zoomed.cloneNode(),x.zoomedHd.removeAttribute("sizes"),x.zoomedHd.removeAttribute("loading");var 
a=x.zoomedHd.addEventListener("load",(function(){x.zoomedHd.removeEventListener("load",a),x.zoomedHd.classList.add("medium-zoom-image--opened"),x.zoomedHd.addEventListener("click",f),document.body.appendChild(x.zoomedHd),n()}))}else n()}}}))},f=function(){return new o((function(e){if(!k&&x.original){k=!0,document.body.classList.remove("medium-zoom--opened"),x.zoomed.style.transform="",x.zoomedHd&&(x.zoomedHd.style.transform=""),x.template&&(x.template.style.transition="opacity 150ms",x.template.style.opacity=0),x.original.dispatchEvent(ft("medium-zoom:close",{detail:{zoom:O}})),x.zoomed.addEventListener("transitionend",(function t(){x.original.classList.remove("medium-zoom-image--hidden"),document.body.removeChild(x.zoomed),x.zoomedHd&&document.body.removeChild(x.zoomedHd),document.body.removeChild(S),x.zoomed.classList.remove("medium-zoom-image--opened"),x.template&&document.body.removeChild(x.template),k=!1,x.zoomed.removeEventListener("transitionend",t),x.original.dispatchEvent(ft("medium-zoom:closed",{detail:{zoom:O}})),x.original=null,x.zoomed=null,x.zoomedHd=null,x.template=null,e(O)}))}else e(O)}))},m=function(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{},t=e.target;return x.original?f():p({target:t})},g=function(){return _},v=function(){return b},y=function(){return x.original},b=[],w=[],k=!1,C=0,_=n,x={original:null,zoomed:null,zoomedHd:null,template:null};"[object Object]"===Object.prototype.toString.call(t)?_=t:(t||"string"==typeof t)&&u(t),_=st({margin:0,background:"#fff",scrollOffset:40,container:null,template:null},_);var S=ht(_.background);document.addEventListener("click",r),document.addEventListener("keyup",a),document.addEventListener("scroll",i),window.addEventListener("resize",f);var O={open:p,close:f,toggle:m,update:s,clone:c,attach:u,detach:l,on:d,off:h,getOptions:g,getImages:v,getZoomedImage:y};return O},gt=[Xe,at,{data:()=>({zoom:null}),mounted(){this.updateZoom()},updated(){this.updateZoom()},methods:{updateZoom(){setTimeout(()=>{this.zoom&&this.zoom.detach(),this.zoom=mt(".theme-default-content :not(a) > img",void 0)},1e3)}}}],vt=n(2);Object(Ye.g)(vt.default,"mixins",gt);const 
yt=[{name:"v-0dc9b01d",path:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-0dc9b01d").then(n)}},{path:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/index.html",redirect:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/"},{path:"/_posts/2021-09-30-long-term-commitment-and-support-for-the-cadence-project-and-its-community.html",redirect:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/"},{name:"v-dd6fb5d2",path:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-dd6fb5d2").then(n)}},{path:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/index.html",redirect:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/"},{path:"/_posts/2021-10-13-announcing-cadence-oss-office-hours-and-community-sync-up.html",redirect:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/"},{name:"v-5d913a79",path:"/blog/2022/01/31/community-spotlight-january-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-5d913a79").then(n)}},{path:"/blog/2022/01/31/community-spotlight-january-2022/index.html",redirect:"/blog/2022/01/31/community-spotlight-january-2022/"},{path:"/_posts/2022-01-31-community-spotlight-january-2022.html",redirect:"/blog/2022/01/31/community-spotlight-january-2022/"},{name:"v-5bc86237",path:"/blog/2022/02/28/community-spotlight-february-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-5bc86237").then(n)}},{path:"/blog/2022/02/28/community-spotlight-february-2022/index.html",redirect:"/blog/2022/02/28/community-spotlight-february-2022/"},{path:"/_posts/2022-02-28-community-spotlight-february-2022.html",redirect:"/blog/2022/02/28/community-spotlight-february-2022/"},{name:"v-4100b969",path:"/blog/2021/10/19/moving-to-grpc/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-4100b969").then(n)}},{path:"/blog/2021/10/19/moving-to-grpc/index.html",redirect:"/blog/2021/10/19/moving-to-grpc/"},{path:"/_posts/2021-10-19-moving-to-grpc.html",redirect:"/blog/2021/10/19/moving-to-grpc/"},{name:"v-52ad8f77",path:"/blog/2022/03/31/community-spotlight-update-march-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-52ad8f77").then(n)}},{path:"/blog/2022/03/31/community-spotlight-update-march-2022/index.html",redirect:"/blog/2022/03/31/community-spotlight-update-march-2022/"},{path:"/_posts/2022-03-31-community-spotlight-update-march-2022.html",redirect:"/blog/2022/03/31/community-spotlight-update-march-2022/"},{name:"v-586fa1f7",path:"/blog/2022/05/31/community-spotlight-update-may-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-586fa1f7").then(n)}},{path:"/blog/2022/05/31/community-spotlight-update-may-2022/index.html",redirect:"/blog/2022/05/31/community-spotlight-update-may-2022/"},{path:"/_posts/2022-05-31-community-spotlight-update-may-2022.html",redirect:"/blog/2022/05/31/community-spotlight-update-may-2022/"},{name:"v-59a2ac57",path:"/blog/2022/04/30/community-spotlight-update-april-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-59a2ac57").then(n)}},{path:"/blog/2022/04/30/community-spotlight-update-april-2022/index.html",redirect:"/blog/2022/04/30/community-spotlight-upd
ate-april-2022/"},{path:"/_posts/2022-04-30-community-spotlight-update-april-2022.html",redirect:"/blog/2022/04/30/community-spotlight-update-april-2022/"},{name:"v-2a9dfbe5",path:"/blog/2022/06/30/community-spotlight-update-june-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-2a9dfbe5").then(n)}},{path:"/blog/2022/06/30/community-spotlight-update-june-2022/index.html",redirect:"/blog/2022/06/30/community-spotlight-update-june-2022/"},{path:"/_posts/2022-06-30-community-spotlight-update-june-2022.html",redirect:"/blog/2022/06/30/community-spotlight-update-june-2022/"},{name:"v-46e2ddd1",path:"/blog/2022/07/31/community-spotlight-update-july-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-46e2ddd1").then(n)}},{path:"/blog/2022/07/31/community-spotlight-update-july-2022/index.html",redirect:"/blog/2022/07/31/community-spotlight-update-july-2022/"},{path:"/_posts/2022-07-31-community-spotlight-update-july-2022.html",redirect:"/blog/2022/07/31/community-spotlight-update-july-2022/"},{name:"v-151d3dd2",path:"/blog/2022/08/31/community-spotlight-august-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-151d3dd2").then(n)}},{path:"/blog/2022/08/31/community-spotlight-august-2022/index.html",redirect:"/blog/2022/08/31/community-spotlight-august-2022/"},{path:"/_posts/2022-08-31-community-spotlight-august-2022.html",redirect:"/blog/2022/08/31/community-spotlight-august-2022/"},{name:"v-793e7375",path:"/blog/2022/10/11/community-spotlight-september-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-793e7375").then(n)}},{path:"/blog/2022/10/11/community-spotlight-september-2022/index.html",redirect:"/blog/2022/10/11/community-spotlight-september-2022/"},{path:"/_posts/2022-09-30-community-spotlight-september-2022.html",redirect:"/blog/2022/10/11/community-spotlight-september-2022/"},{name:"v-5f5271a9",path:"/blog/2022/10/31/community-spotlight-october-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-5f5271a9").then(n)}},{path:"/blog/2022/10/31/community-spotlight-october-2022/index.html",redirect:"/blog/2022/10/31/community-spotlight-october-2022/"},{path:"/_posts/2022-10-31-community-spotlight-october-2022.html",redirect:"/blog/2022/10/31/community-spotlight-october-2022/"},{name:"v-185e9f52",path:"/blog/2022/11/30/community-spotlight-november-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-185e9f52").then(n)}},{path:"/blog/2022/11/30/community-spotlight-november-2022/index.html",redirect:"/blog/2022/11/30/community-spotlight-november-2022/"},{path:"/_posts/2022-11-30-community-spotlight-november-2022.html",redirect:"/blog/2022/11/30/community-spotlight-november-2022/"},{name:"v-55690947",path:"/blog/2023/02/28/community-spotlight-february/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-55690947").then(n)}},{path:"/blog/2023/02/28/community-spotlight-february/index.html",redirect:"/blog/2023/02/28/community-spotlight-february/"},{path:"/_posts/2023-02-28-community-spotlight-february.html",redirect:"/blog/2023/02/28/community-spotlight-february/"},{name:"v-9e2dfeb2",path:"/blog/2023/03/31/community-spotlight-march-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-9e2dfeb2").then(n)}},{path:"/blog/2023/03/31/community-spotlight-march-2023/index.html",redirect:"/blog/2023/03/31/community-spotlight-march-2023/"},{path:"/_posts/2023-03-31-community-spotlight-march-2023.html",redirect:"/blog/2023/03/31/community-spotlight-m
arch-2023/"},{name:"v-1ea4d8b9",path:"/blog/2023/01/31/community-spotlight-january-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-1ea4d8b9").then(n)}},{path:"/blog/2023/01/31/community-spotlight-january-2023/index.html",redirect:"/blog/2023/01/31/community-spotlight-january-2023/"},{path:"/_posts/2023-01-31-community-spotlight-january-2023.html",redirect:"/blog/2023/01/31/community-spotlight-january-2023/"},{name:"v-2315d60a",path:"/blog/2023/06/08/survey-results/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-2315d60a").then(n)}},{path:"/blog/2023/06/08/survey-results/index.html",redirect:"/blog/2023/06/08/survey-results/"},{path:"/_posts/2023-06-08-survey-results.html",redirect:"/blog/2023/06/08/survey-results/"},{name:"v-6582ae57",path:"/blog/2022/12/23/community-spotlight-december-2022/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-6582ae57").then(n)}},{path:"/blog/2022/12/23/community-spotlight-december-2022/index.html",redirect:"/blog/2022/12/23/community-spotlight-december-2022/"},{path:"/_posts/2022-12-23-community-spotlight-december-2022.html",redirect:"/blog/2022/12/23/community-spotlight-december-2022/"},{name:"v-4ff003f7",path:"/blog/2023/07/01/components-of-cadence-application-setup/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-4ff003f7").then(n)}},{path:"/blog/2023/07/01/components-of-cadence-application-setup/index.html",redirect:"/blog/2023/07/01/components-of-cadence-application-setup/"},{path:"/_posts/2023-06-28-components-of-cadence-application-setup.html",redirect:"/blog/2023/07/01/components-of-cadence-application-setup/"},{name:"v-7ca21f57",path:"/blog/2023/06/30/community-spotlight-june-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-7ca21f57").then(n)}},{path:"/blog/2023/06/30/community-spotlight-june-2023/index.html",redirect:"/blog/2023/06/30/community-spotlight-june-2023/"},{path:"/_posts/2023-06-30-community-spotlight-june-2023.html",redirect:"/blog/2023/06/30/community-spotlight-june-2023/"},{name:"v-6df5dc97",path:"/blog/2023/07/05/implement-cadence-worker-from-scratch/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-6df5dc97").then(n)}},{path:"/blog/2023/07/05/implement-cadence-worker-from-scratch/index.html",redirect:"/blog/2023/07/05/implement-cadence-worker-from-scratch/"},{path:"/_posts/2023-07-05-implement-cadence-worker-from-scratch.html",redirect:"/blog/2023/07/05/implement-cadence-worker-from-scratch/"},{name:"v-45466bdb",path:"/blog/2023/07/16/write-your-first-workflow-with-cadence/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-45466bdb").then(n)}},{path:"/blog/2023/07/16/write-your-first-workflow-with-cadence/index.html",redirect:"/blog/2023/07/16/write-your-first-workflow-with-cadence/"},{path:"/_posts/2023-07-16-write-your-first-workflow-with-cadence.html",redirect:"/blog/2023/07/16/write-your-first-workflow-with-cadence/"},{name:"v-bed2d0d2",path:"/blog/2023/07/31/community-spotlight-july-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-bed2d0d2").then(n)}},{path:"/blog/2023/07/31/community-spotlight-july-2023/index.html",redirect:"/blog/2023/07/31/community-spotlight-july-2023/"},{path:"/_posts/2023-07-31-community-spotlight-july-2023.html",redirect:"/blog/2023/07/31/community-spotlight-july-2023/"},{name:"v-32adf8e6",path:"/blog/2023/07/10/cadence-bad-practices-part-1/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-32adf8e6").then(n)}},{path:"/blog/20
23/07/10/cadence-bad-practices-part-1/index.html",redirect:"/blog/2023/07/10/cadence-bad-practices-part-1/"},{path:"/_posts/2023-07-10-cadence-bad-practices-part-1.html",redirect:"/blog/2023/07/10/cadence-bad-practices-part-1/"},{name:"v-54c8d717",path:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-54c8d717").then(n)}},{path:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/index.html",redirect:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/"},{path:"/_posts/2023-08-28-nondeterministic-errors-replayers-shadowers.html",redirect:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/"},{name:"v-0b00b852",path:"/blog/2023/08/31/community-spotlight-august-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-0b00b852").then(n)}},{path:"/blog/2023/08/31/community-spotlight-august-2023/index.html",redirect:"/blog/2023/08/31/community-spotlight-august-2023/"},{path:"/_posts/2023-08-31-community-spotlight-august-2023.html",redirect:"/blog/2023/08/31/community-spotlight-august-2023/"},{name:"v-6e3f5451",path:"/blog/2023/11/30/community-spotlight-update-november-2023/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-6e3f5451").then(n)}},{path:"/blog/2023/11/30/community-spotlight-update-november-2023/index.html",redirect:"/blog/2023/11/30/community-spotlight-update-november-2023/"},{path:"/_posts/2023-11-30-community-spotlight-update-november-2023.html",redirect:"/blog/2023/11/30/community-spotlight-update-november-2023/"},{name:"v-44d49837",path:"/blog/2024/07/11/yearly-roadmap-update/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-44d49837").then(n)}},{path:"/blog/2024/07/11/yearly-roadmap-update/index.html",redirect:"/blog/2024/07/11/yearly-roadmap-update/"},{path:"/_posts/2024-07-11-yearly-roadmap-update.html",redirect:"/blog/2024/07/11/yearly-roadmap-update/"},{name:"v-39909852",path:"/blog/2024/03/10/cadence-non-deterministic-common-qa/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-39909852").then(n)}},{path:"/blog/2024/03/10/cadence-non-deterministic-common-qa/index.html",redirect:"/blog/2024/03/10/cadence-non-deterministic-common-qa/"},{path:"/_posts/2024-02-15-cadence-non-deterministic-common-qa.html",redirect:"/blog/2024/03/10/cadence-non-deterministic-common-qa/"},{name:"v-15401a12",path:"/blog/2024/09/05/workflow-specific-rate-limits/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-15401a12").then(n)}},{path:"/blog/2024/09/05/workflow-specific-rate-limits/index.html",redirect:"/blog/2024/09/05/workflow-specific-rate-limits/"},{path:"/_posts/2024-09-05-workflow-specific-rate-limits.html",redirect:"/blog/2024/09/05/workflow-specific-rate-limits/"},{name:"v-480f0a7a",path:"/blog/2023/03/11/community-spotlight-update-march-2024/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Post","v-480f0a7a").then(n)}},{path:"/blog/2023/03/11/community-spotlight-update-march-2024/index.html",redirect:"/blog/2023/03/11/community-spotlight-update-march-2024/"},{path:"/_posts/2024-3-11-community-spotlight-update-march-2024.html",redirect:"/blog/2023/03/11/community-spotlight-update-march-2024/"},{name:"v-424df898",path:"/blog/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-424df898").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/index.html",redirect:"/blog/"},{name:"v-b1564aac",path:"/tag/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("FrontmatterKey",
"v-b1564aac").then(n)},meta:{pid:"tag",id:"tag"}},{path:"/tag/index.html",redirect:"/tag/"},{name:"v-c3507bb6",path:"/blog/page/2/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507bb6").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/2/index.html",redirect:"/blog/page/2/"},{name:"v-c3507b78",path:"/blog/page/3/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507b78").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/3/index.html",redirect:"/blog/page/3/"},{name:"v-c3507b3a",path:"/blog/page/4/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507b3a").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/4/index.html",redirect:"/blog/page/4/"},{name:"v-c3507afc",path:"/blog/page/5/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507afc").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/5/index.html",redirect:"/blog/page/5/"},{name:"v-c3507abe",path:"/blog/page/6/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507abe").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/6/index.html",redirect:"/blog/page/6/"},{name:"v-c3507a80",path:"/blog/page/7/",component:vt.default,beforeEnter:(e,t,n)=>{Object(Ye.a)("Layout","v-c3507a80").then(n)},meta:{pid:"post",id:"post"}},{path:"/blog/page/7/index.html",redirect:"/blog/page/7/"},{path:"*",component:vt.default}],bt={title:"",description:"",base:"/",headTags:[["script",{async:!0,src:"https://www.googletagmanager.com/gtag/js?id=G-W63QD8QE6E"}],["script",{},"window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-W63QD8QE6E');"],["link",{rel:"alternate",type:"application/rss+xml",href:"/rss.xml",title:" RSS Feed"}],["link",{rel:"alternate",type:"application/json",href:"/feed.json",title:" JSON Feed"}]],pages:[{title:"Long-term commitment and support for the Cadence project, and its community",frontmatter:{title:"Long-term commitment and support for the Cadence project, and its community",date:"2021-09-30T00:00:00.000Z",author:"Liang Mei",authorlink:"https://www.linkedin.com/in/meiliang86/",description:"Dear valued Cadence users and developers,\n\nSome of you might have read Temporal’s recent announcement about their decision to drop the support for the Cadence project. This message caused some confusion in the community, so we would like to take this opportunity to clear things out.\n\nFirst of all, Uber is committed to the long-term success of the Cadence project. Since its inception 5 years ago, use cases built on Cadence and their scale have grown significantly at Uber. Today, Cadence powers a variety of our most business-critical use cases (some public stories are available here and here). At the same time, the Cadence development team at Uber has enjoyed rapid growth with the product and has been driving innovations of workflow technology across the board, from new features (e.g. 
graceful failover, [workflow shadowing] ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2021-09-30-long-term-commitment-and-support-for-the-cadence-project-and-its-community.html",relativePath:"_posts/2021-09-30-long-term-commitment-and-support-for-the-cadence-project-and-its-community.md",key:"v-0dc9b01d",path:"/blog/2021/09/30/long-term-commitment-and-support-for-the-cadence-project-and-its-community/",summary:"Dear valued Cadence users and developers,\n\nSome of you might have read Temporal’s recent announcement about their decision to drop the support for the Cadence project. This message caused some confusion in the community, so we would like to take this opportunity to clear things out.\n\nFirst of all, Uber is committed to the long-term success of the Cadence project. Since its inception 5 years ago, use cases built on Cadence and their scale have grown significantly at Uber. Today, Cadence powers a variety of our most business-critical use cases (some public stories are available here and here). At the same time, the Cadence development team at Uber has enjoyed rapid growth with the product and has been driving innovations of workflow technology across the board, from new features (e.g. graceful failover, [workflow shadowing] ...",id:"post",pid:"post"},{title:"Announcing Cadence OSS office hours and community sync up",frontmatter:{title:"Announcing Cadence OSS office hours and community sync up",date:"2021-10-13T00:00:00.000Z",author:"Liang Mei",authorlink:"https://www.linkedin.com/in/meiliang86/",description:"Are you a current Cadence user, do you operate Cadence services, or are you interested in learning about workflow technologies and wonder what problems Cadence could solve for you? We would like to talk to you!\n\nOur team has spent a significant amount of time working with users and partner teams at Uber to design, scale and operate their workflows. This helps our users understand the technology better, smooth their learning curve and ramp up experience, and at the same time allows us to get fast and direct feedback so we can improve the developer experience and close feature gaps. As our product and community grows, we would like to expand this practice to our users in the OSS community. For the first time ever, members of the Cadence team along with core contributors from the community will host bi-weekly office hours to answer any questions you have about Cadence, or workflow technology in general. We can also dedicate future sessions to specific topics that have a common intere ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2021-10-13-announcing-cadence-oss-office-hours-and-community-sync-up.html",relativePath:"_posts/2021-10-13-announcing-cadence-oss-office-hours-and-community-sync-up.md",key:"v-dd6fb5d2",path:"/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/",summary:"Are you a current Cadence user, do you operate Cadence services, or are you interested in learning about workflow technologies and wonder what problems Cadence could solve for you? We would like to talk to you!\n\nOur team has spent a significant amount of time working with users and partner teams at Uber to design, scale and operate their workflows. This helps our users understand the technology better, smooth their learning curve and ramp up experience, and at the same time allows us to get fast and direct feedback so we can improve the developer experience and close feature gaps. 
As our product and community grows, we would like to expand this practice to our users in the OSS community. For the first time ever, members of the Cadence team along with core contributors from the community will host bi-weekly office hours to answer any questions you have about Cadence, or workflow technology in general. We can also dedicate future sessions to specific topics that have a common intere ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - January 2022",frontmatter:{title:"Cadence Community Spotlight Update - January 2022",date:"2022-01-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to our very first Cadence Community Spotlight update!\n\nThis monthly update focuses on news from the wider Cadence community and is all about what you have been doing with Cadence. Do you have an interesting project that uses Cadence? If so then we want to hear from you. Also if you have any news items, blogs, articles, videos or events where Cadence has been mentioned then that is good too. We want to showcase that our community is active and is doing exciting and interesting things.\n\nPlease see below for a short round up of things that have happened recently in the community.\n\nCommunity Related Office Hours\n\nOn the 12th January 2022 we held our first Cadence Community Related Office Hours. This session was focused on discussing how we plan and organise things for the community. This includes things such as Code of Conduct, managing social media and making sure we regularly communicate project news and events.\n\nAnd you can see that this monthly update is the result of the fe ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-01-31-community-spotlight-january-2022.html",relativePath:"_posts/2022-01-31-community-spotlight-january-2022.md",key:"v-5d913a79",path:"/blog/2022/01/31/community-spotlight-january-2022/",headers:[{level:2,title:"Community Related Office Hours",slug:"community-related-office-hours"},{level:2,title:"Adopting a Cadence Community Code of Conduct",slug:"adopting-a-cadence-community-code-of-conduct"},{level:2,title:"Recording from Cadence Meetup Available",slug:"recording-from-cadence-meetup-available"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to our very first Cadence Community Spotlight update!\n\nThis monthly update focuses on news from the wider Cadence community and is all about what you have been doing with Cadence. Do you have an interesting project that uses Cadence? If so then we want to hear from you. Also if you have any news items, blogs, articles, videos or events where Cadence has been mentioned then that is good too. We want to showcase that our community is active and is doing exciting and interesting things.\n\nPlease see below for a short round up of things that have happened recently in the community.\n\nCommunity Related Office Hours\n\nOn the 12th January 2022 we held our first Cadence Community Related Office Hours. This session was focused on discussing how we plan and organise things for the community. 
This includes things such as Code of Conduct, managing social media and making sure we regularly communicate project news and events.\n\nAnd you can see that this monthly update is the result of the fe ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - February 2022",frontmatter:{title:"Cadence Community Spotlight Update - February 2022",date:"2022-02-28T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the Cadence Community Spotlight update!\n\nThis is the second in our series of monthly updates focused on the Cadence community and news about what you have been doing with Cadence. We hope that you enjoyed last month's update and are keen to find out what has been happening.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nAnnouncements\n\nJust in case you missed it the alpha version of the Cadence notification service has been released. Details can be found at the following link:\nCadence Notification Service\n\nThanks very much to everyone that worked on this!\n\nCommunity Supporting the Community\n\nDuring February 16 questions were posted in the Cadence #support Slack channel from new Cadence users and existing community members looking for help and guidance. A very big thank you to the following community members who to ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-02-28-community-spotlight-february-2022.html",relativePath:"_posts/2022-02-28-community-spotlight-february-2022.md",key:"v-5bc86237",path:"/blog/2022/02/28/community-spotlight-february-2022/",headers:[{level:2,title:"Announcements",slug:"announcements"},{level:2,title:"Community Supporting the Community",slug:"community-supporting-the-community"},{level:2,title:"Please Subscribe to our Youtube Channel",slug:"please-subscribe-to-our-youtube-channel"},{level:2,title:"Help us to Make Cadence even better",slug:"help-us-to-make-cadence-even-better"},{level:2,title:"Cadence Calendar",slug:"cadence-calendar"},{level:2,title:"Cadence Technical Office Hours",slug:"cadence-technical-office-hours"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the Cadence Community Spotlight update!\n\nThis is the second in our series of monthly updates focused on the Cadence community and news about what you have been doing with Cadence. We hope that you enjoyed last month's update and are keen to find out what has been happening.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nAnnouncements\n\nJust in case you missed it the alpha version of the Cadence notification service has been released. Details can be found at the following link:\nCadence Notification Service\n\nThanks very much to everyone that worked on this!\n\nCommunity Supporting the Community\n\nDuring February 16 questions were posted in the Cadence #support Slack channel from new Cadence users and existing community members looking for help and guidance. 
A very big thank you to the following community members who to ...",id:"post",pid:"post"},{title:"Moving to gRPC",frontmatter:{title:"Moving to gRPC",date:"2021-10-19T00:00:00.000Z",author:"Vytautas Karpavicius",authorlink:"https://www.linkedin.com/in/vytautas-karpavicius",description:"\nCadence historically has been using TChannel transport with Thrift encoding for both internal RPC calls and communication with client SDKs. gRPC is becoming a de-facto industry standard with much better adoption and community support. It offers features such as authentication and streaming that are very relevant for Cadence. Moreover, TChannel is being deprecated within Uber itself, pushing an effort for this migration. During the last year we’ve implemented multiple changes in server and SDK that allow users to use gRPC in Cadence, as well as to upgrade their existing Cadence cluster in a backward compatible way. This post tracks the completed work items and our future plans.\n\nOur Approach\nWith ~500 services using Cadence at Uber and many more open source customers around the world, we had to think about the gRPC transition in a backwards compatible way. We couldn’t simply flip transport and encoding everywhere. Instead we needed to support both protocols as an intermediate step ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2021-10-19-moving-to-grpc.html",relativePath:"_posts/2021-10-19-moving-to-grpc.md",key:"v-4100b969",path:"/blog/2021/10/19/moving-to-grpc/",headers:[{level:2,title:"Background",slug:"background"},{level:2,title:"Our Approach",slug:"our-approach"},{level:2,title:"System overview",slug:"system-overview"},{level:2,title:"Migration steps",slug:"migration-steps"},{level:3,title:"Upgrading Cadence server",slug:"upgrading-cadence-server"},{level:3,title:"Upgrading clients",slug:"upgrading-clients"},{level:3,title:"Status at Uber",slug:"status-at-uber"}],summary:"\nCadence historically has been using TChannel transport with Thrift encoding for both internal RPC calls and communication with client SDKs. gRPC is becoming a de-facto industry standard with much better adoption and community support. It offers features such as authentication and streaming that are very relevant for Cadence. Moreover, TChannel is being deprecated within Uber itself, pushing an effort for this migration. During the last year we’ve implemented multiple changes in server and SDK that allow users to use gRPC in Cadence, as well as to upgrade their existing Cadence cluster in a backward compatible way. This post tracks the completed work items and our future plans.\n\nOur Approach\nWith ~500 services using Cadence at Uber and many more open source customers around the world, we had to think about the gRPC transition in a backwards compatible way. We couldn’t simply flip transport and encoding everywhere. 
Instead we needed to support both protocols as an intermediate step ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - March 2022",frontmatter:{title:"Cadence Community Spotlight Update - March 2022",date:"2022-03-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to our Cadence Community Spotlight update!\n\nThis is the latest in our series of monthly blog posts focused on the Cadence community and news about what you have been doing with Cadence.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nUpdated Cadence Topology Diagram\n\nDid you know that we have an updated Cadence Service diagram on the website? Well we do - and you can find it on our Deployment Topology page. We are always looking for information that helps make it easier for people to understand how Cadence works.\n\nSpecial thanks to Ben Slater for updating the diagram and also to Ender, Emrah and Long for helping review it.\n\nMonthly Cadence Technical Office Hours\n\nEvery month we hold a Technical Office Hours session via Zoom where you can speak directly with some of our Cadence experts. If you have a question about Cadence or are facing a particular issue getting ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-03-31-community-spotlight-update-march-2022.html",relativePath:"_posts/2022-03-31-community-spotlight-update-march-2022.md",key:"v-52ad8f77",path:"/blog/2022/03/31/community-spotlight-update-march-2022/",headers:[{level:2,title:"Updated Cadence Topology Diagram",slug:"updated-cadence-topology-diagram"},{level:2,title:"Monthly Cadence Technical Office Hours",slug:"monthly-cadence-technical-office-hours"},{level:2,title:"Some Cadence Statistics",slug:"some-cadence-statistics"},{level:2,title:"Using StackOverflow to Respond to Support Questions",slug:"using-stackoverflow-to-respond-to-support-questions"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to our Cadence Community Spotlight update!\n\nThis is the latest in our series of monthly blog posts focused on the Cadence community and news about what you have been doing with Cadence.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nUpdated Cadence Topology Diagram\n\nDid you know that we have an updated Cadence Service diagram on the website? Well we do - and you can find it on our Deployment Topology page. We are always looking for information that helps make it easier for people to understand how Cadence works.\n\nSpecial thanks to Ben Slater for updating the diagram and also to Ender, Emrah and Long for helping review it.\n\nMonthly Cadence Technical Office Hours\n\nEvery month we hold a Technical Office Hours session via Zoom where you can speak directly with some of our Cadence experts. 
If you have a question about Cadence or are facing a particular issue getting ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - May 2022",frontmatter:{title:"Cadence Community Spotlight Update - May 2022",date:"2022-05-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to our regular Cadence Community Spotlight update!\n\nThis is our monthly blog post series focused on news from in and around the Cadence community.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nCadence Polling Cookbook\n\nDo you want to understand how polling works and have an example of how to set it up in Cadence? Well a brand new Cadence Polling cookbook is now available that gives you all the details you need. The cookbook was created by several members of the Instaclustr team and they are keen to share it with the community. The PDF version of the cookbook can be found on the Cadence website under the Polling an external API for a specific resource to become available section of the Polling Use cases.\n\nA [Github repository](https://github.com/instaclustr/cadence-cookbook ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-05-31-community-spotlight-update-may-2022.html",relativePath:"_posts/2022-05-31-community-spotlight-update-may-2022.md",key:"v-586fa1f7",path:"/blog/2022/05/31/community-spotlight-update-may-2022/",headers:[{level:2,title:"Cadence Polling Cookbook",slug:"cadence-polling-cookbook"},{level:2,title:"Congratulations to a First Time Contributor",slug:"congratulations-to-a-first-time-contributor"},{level:2,title:"Share Your News!",slug:"share-your-news"},{level:2,title:"Next Cadence Technical Office Hours: 3rd and 27th June 2022",slug:"next-cadence-technical-office-hours-3rd-and-27th-june-2022"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to our regular Cadence Community Spotlight update!\n\nThis is our monthly blog post series focused on news from in and around the Cadence community.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nCadence Polling Cookbook\n\nDo you want to understand how polling works and have an example of how to set it up in Cadence? Well a brand new Cadence Polling cookbook is now available that gives you all the details you need. The cookbook was created by several members of the Instaclustr team and they are keen to share it with the community. The PDF version of the cookbook can be found on the Cadence website under the Polling an external API for a specific resource to become available section of the Polling Use cases.\n\nA [Github repository](https://github.com/instaclustr/cadence-cookbook ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - April 2022",frontmatter:{title:"Cadence Community Spotlight Update - April 2022",date:"2022-04-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to our Cadence Community Spotlight update!\n\nThis is our monthly blog post series focused on news from in and around the Cadence community.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nSD Times Names Cadence Open Source Project of the Week\n\nIn April Cadence was named as open source project of the week by the SD Times. 
Being named gives the project some great publicity and means the project is getting noticed. You can find a link to the article in the Cadence in the News section below.\n\nFollow Us on LinkedIn and Twitter!\n\nWe have now set up Cadence accounts on LinkedIn and Twitter where you can keep up to date with what is happening in the community. We will be using these social media accounts to share news, articles, stories and links related to Cadence - so please follow us!\n\nAnd don’t forget to share your news with us. We are l ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-04-30-community-spotlight-update-april-2022.html",relativePath:"_posts/2022-04-30-community-spotlight-update-april-2022.md",key:"v-59a2ac57",path:"/blog/2022/04/30/community-spotlight-update-april-2022/",headers:[{level:2,title:"SD Times Names Cadence Open Source Project of the Week",slug:"sd-times-names-cadence-open-source-project-of-the-week"},{level:2,title:"Follow Us on LinkedIn and Twitter!",slug:"follow-us-on-linkedin-and-twitter"},{level:2,title:"Proposal to Change the Way We Write Workflows",slug:"proposal-to-change-the-way-we-write-workflows"},{level:2,title:"Help Us Improve Cadence",slug:"help-us-improve-cadence"},{level:2,title:"Next Cadence Technical Office Hours: 30th May 2022",slug:"next-cadence-technical-office-hours-30th-may-2022"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to our Cadence Community Spotlight update!\n\nThis is our monthly blog post series focused on news from in and around the Cadence community.\n\nPlease see below for a short activity roundup of what has happened recently in the community.\n\nSD Times Names Cadence Open Source Project of the Week\n\nIn April Cadence was named as open source project of the week by the SD Times. Being named gives the project some great publicity and means the project is getting noticed. You can find a link to the article in the Cadence in the News section below.\n\nFollow Us on LinkedIn and Twitter!\n\nWe have now set up Cadence accounts on LinkedIn and Twitter where you can keep up to date with what is happening in the community. We will be using these social media accounts to share news, articles, stories and links related to Cadence - so please follow us!\n\nAnd don’t forget to share your news with us. We are l ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - June 2022",frontmatter:{title:"Cadence Community Spotlight Update - June 2022",date:"2022-06-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"It’s time for our monthly Cadence Community Spotlight update with news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nKnowledge Sharing and Support\n\nOur Slack #support channel has been busy this month with 13 questions asked by 12 different community members. 
Six community members took time to respond to those questions which clearly shows our community is growing, collaborating and keen to share knowledge.\n\nPlease don’t forget that we encourage everyone to post questions on StackOverflow using the cadence-workflow and uber-cadence tags so that others with similar questions or issues can easily search for and find an answer.\n\nImproving Technical Office Hours\n\nOver the last few months we have been holding regular monthly Office Hours meetings but they have not attracted as many participants as we would like. We would like to understand if there is something preventing people from attending (e.g. perhaps the timing or ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-06-30-community-spotlight-update-june-2022.html",relativePath:"_posts/2022-06-30-community-spotlight-update-june-2022.md",key:"v-2a9dfbe5",path:"/blog/2022/06/30/community-spotlight-update-june-2022/",headers:[{level:2,title:"Knowledge Sharing and Support",slug:"knowledge-sharing-and-support"},{level:2,title:"Improving Technical Office Hours",slug:"improving-technical-office-hours"},{level:2,title:"Cadence Stability Improvements",slug:"cadence-stability-improvements"},{level:2,title:"Sprechen Sie Deutsch?",slug:"sprechen-sie-deutsch"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"It’s time for our monthly Cadence Community Spotlight update with news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nKnowledge Sharing and Support\n\nOur Slack #support channel has been busy this month with 13 questions asked by 12 different community members. Six community members took time to respond to those questions which clearly shows our community is growing, collaborating and keen to share knowledge.\n\nPlease don’t forget that we encourage everyone to post questions on StackOverflow using the cadence-workflow and uber-cadence tags so that others with similar questions or issues can easily search for and find an answer.\n\nImproving Technical Office Hours\n\nOver the last few months we have been holding regular monthly Office Hours meetings but they have not attracted as many participants as we would like. We would like to understand if there is something preventing people from attending (e.g. perhaps the timing or ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - July 2022",frontmatter:{title:"Cadence Community Spotlight Update - July 2022",date:"2022-07-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s our monthly Community Spotlight update that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nFlying Drones with Cadence\n\nCommunity member Paul Brebner has released another blog in his series about using Cadence to manage a drone delivery service. You can see a simulated view of it in action.\n\nDon’t forget to try out the code yourself and remember if you have used Cadence to do something interesting then please let us know so we can feature it in our next update.\n\nGitHub Statistics\n\nDuring July the main Cadence branch had 28 pull requests (PRs) merged. There were 214 files changed by 11 different authors. 
You can find more details here.\n\nThe Cadence documentati ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-07-31-community-spotlight-update-july-2022.html",relativePath:"_posts/2022-07-31-community-spotlight-update-july-2022.md",key:"v-46e2ddd1",path:"/blog/2022/07/31/community-spotlight-update-july-2022/",headers:[{level:2,title:"Flying Drones with Cadence",slug:"flying-drones-with-cadence"},{level:2,title:"GitHub Statistics",slug:"github-statistics"},{level:2,title:"Cadence Roadmap",slug:"cadence-roadmap"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s our monthly Community Spotlight update that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nFlying Drones with Cadence\n\nCommunity member Paul Brebner has released another blog in his series about using Cadence to manage a drone delivery service. You can see a simulated view of it in action.\n\nDon’t forget to try out the code yourself and remember if you have used Cadence to do something interesting then please let us know so we can feature it in our next update.\n\nGitHub Statistics\n\nDuring July the main Cadence branch had 28 pull requests (PRs) merged. There were 214 files changed by 11 different authors. You can find more details here.\n\nThe Cadence documentati ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - August 2022",frontmatter:{title:"Cadence Community Spotlight Update - August 2022",date:"2022-08-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCommunity Survey\n\nWe are working on putting together our first community survey to find out a bit more about our community. We would like to get your feedback on a few things such as:\n\nhow you are using Cadence\nany specific experiences you have had where you'd like to see new features\nany special use cases not yet covered\nand of course whatever other feedback you'd like to give us\n\nSo please watch out for the survey which will be coming out to you via the Slack channel soon!\n\nSupport Activity\n\nWe have noticed that community activity is increasing and that we are continuing to respond to questions in our Slack #support channel. Eight questions have been posted in the channel this month and another seven questions have been posted on StackOverflow. 
We encourage people to post their questi ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-08-31-community-spotlight-august-2022.html",relativePath:"_posts/2022-08-31-community-spotlight-august-2022.md",key:"v-151d3dd2",path:"/blog/2022/08/31/community-spotlight-august-2022/",headers:[{level:2,title:"Community Survey",slug:"community-survey"},{level:2,title:"Support Activity",slug:"support-activity"},{level:2,title:"GitHub Activity",slug:"github-activity"},{level:2,title:"Come Along to Our Next Cadence Meetup!",slug:"come-along-to-our-next-cadence-meetup"},{level:2,title:"Looking for a Cadence Role?",slug:"looking-for-a-cadence-role"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCommunity Survey\n\nWe are working on putting together our first community survey to find out a bit more about our community. We would like to get your feedback on a few things such as:\n\nhow you are using Cadence\nany specific experiences you have had where you'd like to see new features\nany special use cases not yet covered\nand of course whatever other feedback you'd like to give us\n\nSo please watch out for the survey which will be coming out to you via the Slack channel soon!\n\nSupport Activity\n\nWe have noticed that community activity is increasing and that we are continuing to respond to questions in our Slack #support channel. Eight questions have been posted in the channel this month and another seven questions have been posted on StackOverflow. We encourage people to post their questi ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - September 2022",frontmatter:{title:"Cadence Community Spotlight Update - September 2022",date:"2022-10-11T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence at Developer Week\n\nA Cadence talk by Ender Demirkaya and Ben Slater has been accepted for Developer Week Enterprise.\n\nThe talk is scheduled for 16th November so please make a note in your calendars.\n\nSharing Knowledge\n\nOver the last few months we have had a continual stream of Cadence questions in our Slack #support channel or on StackOverflow. 
As a result of the increased interest, some members from the Cadence core team have decided to spend some time each day responding to your questions.\n\nRemember that if you have received a response ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-09-30-community-spotlight-september-2022.html",relativePath:"_posts/2022-09-30-community-spotlight-september-2022.md",key:"v-793e7375",path:"/blog/2022/10/11/community-spotlight-september-2022/",headers:[{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence at Developer Week\n\nA Cadence talk by Ender Demirkaya and Ben Slater has been accepted for Developer Week Enterprise.\n\nThe talk is scheduled for 16th November so please make a note in your calendars.\n\nSharing Knowledge\n\nOver the last few months we have had a continual stream of Cadence questions in our Slack #support channel or on StackOverflow. As a result of the increased interest, some members from the Cadence core team have decided to spend some time each day responding to your questions.\n\nRemember that if you have received a response ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - October 2022",frontmatter:{title:"Cadence Community Spotlight Update - October 2022",date:"2022-10-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence Meetup Postponed\n\nIt's always great to get the community together and we had planned to run another Cadence Meetup in early November. Unfortunately we didn't have enough time to get things organised so we've decided to postpone it. So please watch out for an announcement for the new Cadence meetup date.\n\nDoordash Technical Showcase Featuring Cadence\n\nWe have had some great feedback from people who attended the Technical Showcase that was run this month by Doordash. 
It featured their financial products but also highlighted some of the key technologies they use...and guess what? Cadence is one of them!\n\nIf you missed the session then you will be happy to know that it was recorded and we've included a link to the recording on YouTube.\n\nThanks to ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-10-31-community-spotlight-october-2022.html",relativePath:"_posts/2022-10-31-community-spotlight-october-2022.md",key:"v-5f5271a9",path:"/blog/2022/10/31/community-spotlight-october-2022/",headers:[{level:2,title:"Cadence Meetup Postponed",slug:"cadence-meetup-postponed"},{level:2,title:"Doordash Technical Showcase Featuring Cadence",slug:"doordash-technnical-showcase-featuring-cadence"},{level:2,title:"iWF Support for Cadence",slug:"iwf-support-for-cadence"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence Meetup Postponed\n\nIt's always great to get the community together and we had planned to run another Cadence Meetup in early November. Unfortunately we didn't have enough time to get things organised so we've decided to postpone it. So please watch out for an announcement for the new Cadence meetup date.\n\nDoordash Technical Showcase Featuring Cadence\n\nWe have had some great feedback from people who attended the Technical Showcase that was run this month by Doordash. It featured their financial products but also highlighted some of the key technologies they use...and guess what? Cadence is one of them!\n\nIf you missed the session then you will be happy to know that it was recorded and we've included a link to the recording on YouTube.\n\nThanks to ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - November 2022",frontmatter:{title:"Cadence Community Spotlight Update - November 2022",date:"2022-11-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence @ Uber\n\nThis month Uber Engineering published a really nice article on one of the ways they are using Cadence. The article is called How Uber Optimizes the Timing of Push Notifications using ML and Linear Programming.\n\nThe Uber team take you through the details of the problem that they are looking to solve, so you can understand the scope, limitations and dependencies - so please take a look.\n\nCadence @ DeveloperWeek Enterprise\n\nDevNetwork run a series of conferences and during November Cadence was featured at DeveloperWeek Enterprise. 
Ender Demirkaya and [Ben Slater](https://www.linkedin.com/in/ ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-11-30-community-spotlight-november-2022.html",relativePath:"_posts/2022-11-30-community-spotlight-november-2022.md",key:"v-185e9f52",path:"/blog/2022/11/30/community-spotlight-november-2022/",headers:[{level:2,title:"Cadence @ Uber",slug:"cadence-uber"},{level:2,title:"Cadence @ DeveloperWeek Enterprise",slug:"cadence-developerweek-enterprise"},{level:2,title:"Cadence at W-JAX",slug:"cadence-at-w-jax"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence @ Uber\n\nThis month Uber Engineering published a really nice article on one of the ways they are using Cadence. The article is called How Uber Optimizes the Timing of Push Notifications using ML and Linear Programming.\n\nThe Uber team take you through the details of the problem that they are looking to solve, so you can understand the scope, limitations and dependencies - so please take a look.\n\nCadence @ DeveloperWeek Enterprise\n\nDevNetwork run a series of conferences and during November Cadence was featured at DeveloperWeek Enterprise. Ender Demirkaya and [Ben Slater](https://www.linkedin.com/in/ ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - February 2023",frontmatter:{title:"Cadence Community Spotlight Update - February 2023",date:"2023-02-28T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCommunity Survey\nWe've been talking about doing a community survey for a while and during February we sent it out. We are still collating the results so it's not too late to send in your response.\n\nThe survey takes 5 minutes and is your opportunity to provide feedback to the project and highlight areas you think we need to focus on.\n\nUse this Survey Link\n\nPlease take a few minutes to give us your opinion.\n\nCadence and Temporal\nDuring user surveys we've had a few queries about whether Cadence and Temporal are the same project. The answer is No - they are not the same project but they do share the same origin. At a high level Temporal is a fork of the Cadence project. 
Both Temporal and Cadence are now being developed by different ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-02-28-community-spotlight-february.html",relativePath:"_posts/2023-02-28-community-spotlight-february.md",key:"v-55690947",path:"/blog/2023/02/28/community-spotlight-february/",headers:[{level:2,title:"Community Survey",slug:"community-survey"},{level:2,title:"Cadence and Temporal",slug:"cadence-and-temporal"},{level:2,title:"Cadence at DoorDash",slug:"cadence-at-doordash"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCommunity Survey\nWe've been talking about doing a community survey for a while and during February we sent it out. We are still collating the results so it's not too late to send in your response.\n\nThe survey takes 5 minutes and is your opportunity to provide feedback to the project and highlight areas you think we need to focus on.\n\nUse this Survey Link\n\nPlease take a few minutes to give us your opinion.\n\nCadence and Temporal\nDuring user surveys we've had a few queries about whether Cadence and Temporal are the same project. The answer is No - they are not the same project but they do share the same origin. At a high level Temporal is a fork of the Cadence project. Both Temporal and Cadence are now being developed by different ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - March 2023",frontmatter:{title:"Cadence Community Spotlight Update - March 2023",date:"2023-03-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence at Open Source Summit, North America\nWe are very pleased to let you know that a talk on Cadence has been accepted for the Linux Foundation's Open Source Summit, North America in Vancouver on 10th - 12th May 2023.\n\nThe talk called Cadence: The New Open Source Project for Building Complex Distributed Applications will be given by Ender Demirkaya and Emrah Seker. If you are planning to attend the Open Source Summit then please don't forget to attend the talk and take time to catch up with Ender and Emrah!\n\nCommunity Activity\nOur Slack #support channel has been very active over the last fe ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-03-31-community-spotlight-march-2023.html",relativePath:"_posts/2023-03-31-community-spotlight-march-2023.md",key:"v-9e2dfeb2",path:"/blog/2023/03/31/community-spotlight-march-2023/",headers:[{level:2,title:"Cadence at Open Source Summit, North America",slug:"cadence-at-open-source-summit-north-america"},{level:2,title:"Community Activity",slug:"community-activity"},{level:2,title:"Cadence Developer Advocate",slug:"cadence-developer-advocate"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence at Open Source Summit, North 
America\nWe are very pleased to let you know that a talk on Cadence has been accepted for the Linux Foundation's Open Source Summit, North America in Vancouver on 10th - 12th May 2023.\n\nThe talk called Cadence: The New Open Source Project for Building Complex Distributed Applications will be given by Ender Demirkaya and Emrah Seker. If you are planning to attend the Open Source Summit then please don't forget to attend the talk and take time to catch up with Ender and Emrah!\n\nCommunity Activity\nOur Slack #support channel has been very active over the last fe ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - January 2023",frontmatter:{title:"Cadence Community Spotlight Update - January 2023",date:"2023-01-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Happy New Year everyone! Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nClosing Down Cadence Office Hours\nWe have been running Office Hours sessions every month since May last year. The aim was to give the community an opportunity to speak directly with some of the Cadence core developers and experts to answer questions on particular issues you may be having. We have found that the most preferred method for community questions has been the support Slack channel so we have decided to stop this monthly call.\n\nThanks very much to Ender Demirkaya and the Uber team for making themselves available for these sessions.\n\nPlease remember that if you have a question about Cadence or are facing a specific issue then you can post your question in our #support Slack channel. If you al ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-01-31-community-spotlight-january-2023.html",relativePath:"_posts/2023-01-31-community-spotlight-january-2023.md",key:"v-1ea4d8b9",path:"/blog/2023/01/31/community-spotlight-january-2023/",headers:[{level:2,title:"Closing Down Cadence Office Hours",slug:"closing-down-cadence-office-hours"},{level:2,title:"Update on iWF Support for Cadence",slug:"update-on-iwf-support-for-cadence"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Happy New Year everyone! Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nClosing Down Cadence Office Hours\nWe have been running Office Hours sessions every month since May last year. The aim was to give the community an opportunity to speak directly with some of the Cadence core developers and experts to answer questions on particular issues you may be having. We have found that the most preferred method for community questions has been the support Slack channel so we have decided to stop this monthly call.\n\nThanks very much to Ender Demirkaya and the Uber team for making themselves available for these sessions.\n\nPlease remember that if you have a question about Cadence or are facing a specific issue then you can post your question in our #support Slack channel. 
If you al ...",id:"post",pid:"post"},{title:"2023 Cadence Community Survey Results",frontmatter:{title:"2023 Cadence Community Survey Results",date:"2023-06-08T00:00:00.000Z",author:"Ender Demirkaya",authorlink:"https://www.linkedin.com/in/enderdemirkaya/",description:"We released a user survey earlier this year to learn about who our users are, how they use Cadence, and how we can help them. It was shared via our Slack workspace, cadenceworkflow.io Blog and LinkedIn. After collecting the feedback, we wanted to share the results with our community. Thank you everyone for filling it out! Your feedback is invaluable and it helps us shape our roadmap for the future.\n\nHere are some highlights in text and you can check out the visuals to get more details:\n\nusing.png\n\njob_role.png\n\nMost of the people who replied to our survey were engineers who were already using Cadence, actively evaluating, or migrating from a similar technology. This was exciting to hear! Some of you have contacted us to learn more about benchmarks, scale, and ideal ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-06-08-survey-results.html",relativePath:"_posts/2023-06-08-survey-results.md",key:"v-2315d60a",path:"/blog/2023/06/08/survey-results/",summary:"We released a user survey earlier this year to learn about who our users are, how they use Cadence, and how we can help them. It was shared via our Slack workspace, cadenceworkflow.io Blog and LinkedIn. After collecting the feedback, we wanted to share the results with our community. Thank you everyone for filling it out! Your feedback is invaluable and it helps us shape our roadmap for the future.\n\nHere are some highlights in text and you can check out the visuals to get more details:\n\nusing.png\n\njob_role.png\n\nMost of the people who replied to our survey were engineers who were already using Cadence, actively evaluating, or migrating from a similar technology. This was exciting to hear! Some of you have contacted us to learn more about benchmarks, scale, and ideal ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - December 2022",frontmatter:{title:"Cadence Community Spotlight Update - December 2022",date:"2022-12-23T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"I know we are a little early this month as many people will be taking some time out for holidays.\n\nHappy Holidays\n\nWe'd like to wish everyone happy holidays and to thank you for being part of the Cadence community. It's been a busy year for Cadence as we have continued to build a strong, active community that works together to solve issues and generally support each other.\n\nLet's keep going!...This is a great way to build a sustainable community.\n\nWe are sure that 2023 will be even more exciting as we continue to develop Cadence.\n\nCadence in the News!\n\nBelow is a selection of Cadence-related articles, blogs and whitepapers. 
Please take a look and feel free to share via your own social media channels.\n\nCadence iWF\n\nChild Workflow Cookbook\n\n[Cadence Connection Examples Using TLS](https://www.instaclus ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2022-12-23-community-spotlight-december-2022.html",relativePath:"_posts/2022-12-23-community-spotlight-december-2022.md",key:"v-6582ae57",path:"/blog/2022/12/23/community-spotlight-december-2022/",headers:[{level:2,title:"Happy Holidays",slug:"happy-holidays"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"I know we are a little early this month as many people will be taking some time out for holidays.\n\nHappy Holidays\n\nWe'd like to wish everyone happy holidays and to thank you for being part of the Cadence community. It's been a busy year for Cadence as we have continued to build a strong, active community that works together to solve issues and generally support each other.\n\nLet's keep going!...This is a great way to build a sustainable community.\n\nWe are sure that 2023 will be even more exciting as we continue to develop Cadence.\n\nCadence in the News!\n\nBelow is a selection of Cadence-related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.\n\nCadence iWF\n\nChild Workflow Cookbook\n\n[Cadence Connection Examples Using TLS](https://www.instaclus ...",id:"post",pid:"post"},{title:"Understanding components of Cadence application",frontmatter:{title:"Understanding components of Cadence application",date:"2023-07-01T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:"Cadence is a powerful, scalable, and fault-tolerant workflow orchestration framework that helps developers implement and manage complex workflow tasks. In most cases, developers contribute activities and workflows directly to their codebases, and they may not have a full understanding of the components behind a running Cadence application. We receive numerous inquiries about setting up Cadence in a local environment from scratch for testing. Therefore, in this article, we will explore the components that power a Cadence cluster.\n\nThere are three critical components that are essential for any Cadence application:\nA running Cadence backend server.\nA registered Cadence domain.\nA running Cadence worker that registers all workflows and activities.\n\nLet's go over these components in more detail.\n\nThe Cadence backend serves as the heart of your Cadence application. It is responsible for processing and scheduling your workflows and activities. While the backend relies on various dep ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-06-28-components-of-cadence-application-setup.html",relativePath:"_posts/2023-06-28-components-of-cadence-application-setup.md",key:"v-4ff003f7",path:"/blog/2023/07/01/components-of-cadence-application-setup/",summary:"Cadence is a powerful, scalable, and fault-tolerant workflow orchestration framework that helps developers implement and manage complex workflow tasks. In most cases, developers contribute activities and workflows directly to their codebases, and they may not have a full understanding of the components behind a running Cadence application. We receive numerous inquiries about setting up Cadence in a local environment from scratch for testing. 
Therefore, in this article, we will explore the components that power a Cadence cluster.\n\nThere are three critical components that are essential for any Cadence application:\nA running Cadence backend server.\nA registered Cadence domain.\nA running Cadence worker that registers all workflows and activities.\n\nLet's go over these components in more detail.\n\nThe Cadence backend serves as the heart of your Cadence application. It is responsible for processing and scheduling your workflows and activities. While the backend relies on various dep ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - June 2023",frontmatter:{title:"Cadence Community Spotlight Update - June 2023",date:"2023-06-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"We've had a short break but now we are back. Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence Release 1.0\n\nJust in case you missed it - at the end of April Cadence v1.0 was officially released. This release is a significant milestone for the project and the community. It indicates that we are confident enough in the stability of the code to recommend it and promote it widely to more users. Kudos to everyone that worked together to make this release happen.\n\nAnd the Uber team also gave Cadence a writeup on the Uber Engineering Blog so please take a look.\n\nCommunity Survey Results\n\nThe results of our Community Survey have been published and you can find [the details right here on our blog](https://cadenceworkflow.io/blog/2 ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-06-30-community-spotlight-june-2023.html",relativePath:"_posts/2023-06-30-community-spotlight-june-2023.md",key:"v-7ca21f57",path:"/blog/2023/06/30/community-spotlight-june-2023/",headers:[{level:2,title:"Cadence Release 1.0",slug:"cadence-release-1-0"},{level:2,title:"Community Survey Results",slug:"community-survey-results"},{level:2,title:"Cadence Video Open Source Summit, North America",slug:"cadence-video-open-source-summit-north-america"},{level:2,title:"Overcoming Potential Workflow Versioning Maintenance Challenges",slug:"overcoming-potential-workflow-versioning-maintenance-challenges"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"We've had a short break but now we are back. Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nCadence Release 1.0\n\nJust in case you missed it - at the end of April Cadence v1.0 was officially released. This release is a significant milestone for the project and the community. It indicates that we are confident enough in the stability of the code to recommend it and promote it widely to more users. 
Kudos to everyone that worked together to make this release happen.\n\nAnd the Uber team also gave Cadence a writeup on the Uber Engineering Blog so please take a look.\n\nCommunity Survey Results\n\nThe results of our Community Survey have been published and you can find [the details right here on our blog](https://cadenceworkflow.io/blog/2 ...",id:"post",pid:"post"},{title:"Implement a Cadence worker service from scratch",frontmatter:{title:"Implement a Cadence worker service from scratch",date:"2023-07-05T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:'In the previous blog, we introduced three critical components for a Cadence application: the Cadence backend, domain, and worker. Among these, the worker service is the most crucial focus for developers as it hosts the activities and workflows of a Cadence application. In this blog, I will provide a short tutorial on how to implement a simple worker service from scratch in Go.\n\nTo finish this tutorial, there are two prerequisites you need to complete first:\nRegister a Cadence domain for your worker. For this tutorial, I\'ve already registered a domain named test-domain\nStart the Cadence backend server in the background.\n\nTo get started, let\'s simply use the native HTTP package built in Go to start a process listening to port 3000. You may customize the port for your worker, but the port you choose should not conflict with the existing port for your Cadence backend.\n\npackage main\n\nimport (\n\t"fmt"\n\t"net/http"\n)\n\nfunc main( ...',layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-07-05-implement-cadence-worker-from-scratch.html",relativePath:"_posts/2023-07-05-implement-cadence-worker-from-scratch.md",key:"v-6df5dc97",path:"/blog/2023/07/05/implement-cadence-worker-from-scratch/",summary:'In the previous blog, we introduced three critical components for a Cadence application: the Cadence backend, domain, and worker. Among these, the worker service is the most crucial focus for developers as it hosts the activities and workflows of a Cadence application. In this blog, I will provide a short tutorial on how to implement a simple worker service from scratch in Go.\n\nTo finish this tutorial, there are two prerequisites you need to complete first:\nRegister a Cadence domain for your worker. For this tutorial, I\'ve already registered a domain named test-domain\nStart the Cadence backend server in the background.\n\nTo get started, let\'s simply use the native HTTP package built in Go to start a process listening to port 3000. You may customize the port for your worker, but the port you choose should not conflict with the existing port for your Cadence backend.\n\npackage main\n\nimport (\n\t"fmt"\n\t"net/http"\n)\n\nfunc main( ...',id:"post",pid:"post"},{title:"Write your first workflow with Cadence",frontmatter:{title:"Write your first workflow with Cadence",date:"2023-07-16T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:'We have covered basic components of Cadence and how to implement a Cadence worker in a local environment in previous blogs. In this blog, let\'s write your very first HelloWorld workflow with Cadence. I\'ve started the Cadence backend server in the background and registered a domain named test-domain. 
You may use the code snippet for the worker service in this blog. Let\'s first write an activity, which takes a single string argument and prints a log in the console.\n\nfunc helloWorldActivity(ctx context.Context, name string) (string, error) {\n\tlogger := activity.GetLogger(ctx)\n\tlogger.Info("helloworld activity started")\n\treturn "Hello " + name + "!", nil\n}\n\nThen let\'s write a workflow that invokes this activity:\nfunc helloWorldWorkflow(ctx workflow.Context, name string) error {\n\tao := workflow.ActivityOptions{\n ...',layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-07-16-write-your-first-workflow-with-cadence.html",relativePath:"_posts/2023-07-16-write-your-first-workflow-with-cadence.md",key:"v-45466bdb",path:"/blog/2023/07/16/write-your-first-workflow-with-cadence/",summary:'We have covered basic components of Cadence and how to implement a Cadence worker in a local environment in previous blogs. In this blog, let\'s write your very first HelloWorld workflow with Cadence. I\'ve started the Cadence backend server in the background and registered a domain named test-domain. You may use the code snippet for the worker service in this blog. Let\'s first write an activity, which takes a single string argument and prints a log in the console.\n\nfunc helloWorldActivity(ctx context.Context, name string) (string, error) {\n\tlogger := activity.GetLogger(ctx)\n\tlogger.Info("helloworld activity started")\n\treturn "Hello " + name + "!", nil\n}\n\nThen let\'s write a workflow that invokes this activity:\nfunc helloWorldWorkflow(ctx workflow.Context, name string) error {\n\tao := workflow.ActivityOptions{\n ...',id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - July 2023",frontmatter:{title:"Cadence Community Spotlight Update - July 2023",date:"2023-07-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nGetting Started with Cadence\n\nAre you new to Cadence and want to understand the basic concepts and architecture? Well we have some great information for you!\n\nCommunity member Chris Qin has written a short blog post that takes you through the three main components that make up a Cadence application. Please take a look and feel free to give us your comments and feedback.\n\nThanks Chris for sharing your knowledge and helping others to get started.\n\nCadence Go Client v1.0 Released\n\nThis month saw the release of v1.0 of the Cadence Go Client. 
Note that the work done on this release was as a result ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-07-31-community-spotlight-july-2023.html",relativePath:"_posts/2023-07-31-community-spotlight-july-2023.md",key:"v-bed2d0d2",path:"/blog/2023/07/31/community-spotlight-july-2023/",headers:[{level:2,title:"Getting Started with Cadence",slug:"getting-started-with-cadence"},{level:2,title:"Cadence Go Client v1.0 Released",slug:"cadence-go-client-v1-0-released"},{level:2,title:"Cadence Release Strategy",slug:"cadence-release-strategy"},{level:2,title:"Cadence Helm Charts",slug:"cadence-helm-charts"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nGetting Started with Cadence\n\nAre you new to Cadence and want to understand the basic concepts and architecture? Well we have some great information for you!\n\nCommunity member Chris Qin has written a short blog post that takes you through the three main components that make up a Cadence application. Please take a look and feel free to give us your comments and feedback.\n\nThanks Chris for sharing your knowledge and helping others to get started.\n\nCadence Go Client v1.0 Released\n\nThis month saw the release of v1.0 of the Cadence Go Client. Note that the work done on this release was as a result ...",id:"post",pid:"post"},{title:"Bad practices and Anti-patterns with Cadence (Part 1)",frontmatter:{title:"Bad practices and Anti-patterns with Cadence (Part 1)",date:"2023-07-10T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:'In the upcoming blog series, we will delve into a discussion about common bad practices and anti-patterns related to Cadence. As diverse teams often encounter distinct business use cases, it becomes imperative to address the most frequently reported issues in Cadence workflows. To provide valuable insights and guidance, the Cadence team has meticulously compiled these common challenges based on customer feedback.\n\nReusing the same workflow ID for very active/continuously running workflows\n\nCadence organizes workflows based on their unique IDs, using a process called partitioning. If a workflow receives a large number of updates in a short period of time or frequently starts new runs using the continueAsNew function, all these updates will be directed to the same shard. Unfortunately, the Cadence backend is not equipped to handle this concentrated workload efficiently. As a result, a situation known as a "hot shard" arises, overloading the Cadence backend and worsening the prob ...',layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-07-10-cadence-bad-practices-part-1.html",relativePath:"_posts/2023-07-10-cadence-bad-practices-part-1.md",key:"v-32adf8e6",path:"/blog/2023/07/10/cadence-bad-practices-part-1/",summary:'In the upcoming blog series, we will delve into a discussion about common bad practices and anti-patterns related to Cadence. As diverse teams often encounter distinct business use cases, it becomes imperative to address the most frequently reported issues in Cadence workflows. 
To provide valuable insights and guidance, the Cadence team has meticulously compiled these common challenges based on customer feedback.\n\nReusing the same workflow ID for very active/continuously running workflows\n\nCadence organizes workflows based on their unique IDs, using a process called partitioning. If a workflow receives a large number of updates in a short period of time or frequently starts new runs using the continueAsNew function, all these updates will be directed to the same shard. Unfortunately, the Cadence backend is not equipped to handle this concentrated workload efficiently. As a result, a situation known as a "hot shard" arises, overloading the Cadence backend and worsening the prob ...',id:"post",pid:"post"},{title:"Non-deterministic errors, replayers and shadowers",frontmatter:{title:"Non-deterministic errors, replayers and shadowers",date:"2023-08-27T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:'It is common for developers to constantly update their Cadence workflow code based upon new business use cases and needs. However,\nthe definition of a Cadence workflow must be deterministic because behind the scenes Cadence uses event sourcing to construct\nthe workflow state by replaying the historical events stored for this specific workflow. Introducing components that are not compatible\nwith an existing running workflow will lead to non-deterministic errors and sometimes developers find them tricky to debug. Consider the\nfollowing workflow that executes two activities.\n\nfunc SampleWorkflow(ctx workflow.Context, data string) (string, error) {\n    ao := workflow.ActivityOptions{\n        ScheduleToStartTimeout: time.Minute,\n        StartToCloseTimeout: time.Minute,\n    }\n    ctx = workflow.WithActivityOptions(ctx, ao)\n    var result1 string\n    err := workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)\n    if err != nil {\n        return "", err\n    }\n    v ...',layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-08-28-nondeterministic-errors-replayers-shadowers.html",relativePath:"_posts/2023-08-28-nondeterministic-errors-replayers-shadowers.md",key:"v-54c8d717",path:"/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/",summary:'It is common for developers to constantly update their Cadence workflow code based upon new business use cases and needs. However,\nthe definition of a Cadence workflow must be deterministic because behind the scenes Cadence uses event sourcing to construct\nthe workflow state by replaying the historical events stored for this specific workflow. Introducing components that are not compatible\nwith an existing running workflow will lead to non-deterministic errors and sometimes developers find them tricky to debug. 
Consider the\nfollowing workflow that executes two activities.\n\nfunc SampleWorkflow(ctx workflow.Context, data string) (string, error) {\n ao := workflow.ActivityOptions{\n ScheduleToStartTimeout: time.Minute,\n StartToCloseTimeout: time.Minute,\n }\n ctx = workflow.WithActivityOptions(ctx, ao)\n var result1 string\n err := workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)\n if err != nil {\n return "", err\n }\n v ...',id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - August 2023",frontmatter:{title:"Cadence Community Spotlight Update - August 2023",date:"2023-08-31T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nMore Cadence How To's\n\nYou might have noticed that we have had a few more contributions to our blog from Chris Qin. Chris has been busy sharing insights and tips on a few important Cadence topics. The objective is to help the community with any potential problems.\n\nHere are the latest topics:\n\nBad Practices and Anti-Patterns with Cadence - Part 1\n\nNon-Deterministic Errors, Replayers and Shadowers\n\nEven if you have not encountered these use cases - it is good to be prepared and have a solution ready. Please take a look and let us have your feedback.\n\nChris is also going to take a look at ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-08-31-community-spotlight-august-2023.html",relativePath:"_posts/2023-08-31-community-spotlight-august-2023.md",key:"v-0b00b852",path:"/blog/2023/08/31/community-spotlight-august-2023/",headers:[{level:2,title:"More Cadence How To's",slug:"more-cadence-how-to-s"},{level:2,title:"More iWF Examples",slug:"more-iwf-examaples"},{level:2,title:"Cadence at the Helm!",slug:"cadence-at-the-helm"},{level:2,title:"Community Support!",slug:"community-support"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nPlease see below for a roundup of the highlights:\n\nMore Cadence How To's\n\nYou might have noticed that we have had a few more contributions to our blog from Chris Qin. Chris has been busy sharing insights and tips on a few important Cadence topics. 
The objective is to help the community with any potential problems.\n\nHere are the latest topics:\n\nBad Practices and Anti-Patterns with Cadence - Part 1\n\nNon-Deterministic Errors, Replayers and Shadowers\n\nEven if you have not encountered these use cases - it is good to be prepared and have a solution ready. Please take a look and let us have your feedback.\n\nChris is also going to take a look at ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - November 2023",frontmatter:{title:"Cadence Community Spotlight Update - November 2023",date:"2023-11-30T00:00:00.000Z",author:"Sharan Foga",authorlink:"https://www.linkedin.com/in/sfoga/",description:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nIt's been a couple of months since our last update, so we have a lot of updates to share with you.\n\nPlease see below for a roundup of the highlights:\n\nProposal for Cadence Native Authentication\n\nCommunity member Mantas Sidlauskas has drafted a proposal around Cadence native authentication and is asking for community feedback. If you are interested in reviewing the current proposal and providing comments or feedback, then please find the proposal details at the link below:\n\nCadence Native Authentication Proposal\n\n This is a great example of how we can focus on collaborating to find a collective solution. A big thank you to Mantas for initiating this work and we hope to see the result ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2023-11-30-community-spotlight-update-november-2023.html",relativePath:"_posts/2023-11-30-community-spotlight-update-november-2023.md",key:"v-6e3f5451",path:"/blog/2023/11/30/community-spotlight-update-november-2023/",headers:[{level:2,title:"Proposal for Cadence Native Authentication",slug:"proposal-for-cadence-native-authentication"},{level:2,title:"iWF Deep Dive and More!",slug:"iwf-deep-dive-and-more"},{level:2,title:"New Go Samples for Cadence",slug:"new-go-samples-for-cadence"},{level:2,title:"Cadence Retrospective",slug:"cadence-retrospective"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Upcoming Events",slug:"upcoming-events"}],summary:"Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!\n\nIt's been a couple of months since our last update, so we have a lot of updates to share with you.\n\nPlease see below for a roundup of the highlights:\n\nProposal for Cadence Native Authentication\n\nCommunity member Mantas Sidlauskas has drafted a proposal around Cadence native authentication and is asking for community feedback. If you are interested in reviewing the current proposal and providing comments or feedback, then please find the proposal details at the link below:\n\nCadence Native Authentication Proposal\n\n This is a great example of how we can focus on collaborating to find a collective solution. A big thank you to Mantas for initiating this work and we hope to see the result ...",id:"post",pid:"post"},{title:"2024 Cadence Yearly Roadmap Update",frontmatter:{title:"2024 Cadence Yearly Roadmap Update",date:"2024-07-11T00:00:00.000Z",author:"Ender Demirkaya",authorlink:"https://www.linkedin.com/in/enderdemirkaya/",description:"\n\nIf you haven’t heard about Cadence, this section is for you. In a short description, Cadence is a code-driven workflow orchestration engine. 
The definition itself may not tell you enough, so it helps to split it into three parts:\n\nWhat’s a workflow? (everyone has a different definition)\nWhy does it matter to be code-driven?\nBenefits of Cadence\n\nWhat is a Workflow?\n\nworkflow.png\n\nIn the simplest definition, it is “a multi-step execution”. Step here represents individual operations that are a little heavier than small in-process function calls, although they are not limited to those: it could be a separate service call, processing a large dataset, map-reduce, thread sleep, scheduling next run, waiting for an external input, starting a sub-workflow, etc. It’s anything a user thinks of as a single unit of logic in their code. Those steps often have dependencies among themselves. Some steps, including the very first step, might ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2024-07-11-yearly-roadmap-update.html",relativePath:"_posts/2024-07-11-yearly-roadmap-update.md",key:"v-44d49837",path:"/blog/2024/07/11/yearly-roadmap-update/",headers:[{level:2,title:"Introduction",slug:"introduction"},{level:3,title:"What is a Workflow?",slug:"what-is-a-workflow"},{level:3,title:"Code-Driven Workflows",slug:"code-driven-workflows"},{level:3,title:"Benefits",slug:"benefits"},{level:2,title:"Project Support",slug:"project-support"},{level:3,title:"Team",slug:"team"},{level:3,title:"Community",slug:"community"},{level:3,title:"Scale",slug:"scale"},{level:3,title:"Managed Solutions",slug:"managed-solutions"},{level:2,title:"After V1 Release",slug:"after-v1-release"},{level:3,title:"Frequent Releases",slug:"frequent-releases"},{level:3,title:"Zonal Isolation",slug:"zonal-isolation"},{level:3,title:"Narrowing Blast Radius",slug:"narrowing-blast-radius"},{level:3,title:"Async APIs",slug:"async-apis"},{level:3,title:"Pinot as Visibility Store",slug:"pinot-as-visibility-store"},{level:3,title:"Code Coverage",slug:"code-coverage"},{level:3,title:"Replayer Improvements",slug:"replayer-improvements"},{level:3,title:"Global Rate Limiters",slug:"global-rate-limiters"},{level:3,title:"Regular Failover Drills",slug:"regular-failover-drills"},{level:3,title:"Cadence Web v4",slug:"cadence-web-v4"},{level:3,title:"Code Review Time Non-determinism Checks",slug:"code-review-time-non-determinism-checks"},{level:3,title:"Domain Reports",slug:"domain-reports"},{level:3,title:"Client Based Migrations",slug:"client-based-migrations"},{level:2,title:"Roadmap (Next Year)",slug:"roadmap-next-year"},{level:3,title:"Database efficiency",slug:"database-efficiency"},{level:3,title:"Helm Charts",slug:"helm-charts"},{level:3,title:"Dashboard Templates",slug:"dashboard-templates"},{level:3,title:"Client V2 Modernization",slug:"client-v2-modernization"},{level:3,title:"Higher Parallelization and Prioritization in Task Processing",slug:"higher-parallelization-and-prioritization-in-task-processing"},{level:3,title:"Timer and Cron Burst Handling",slug:"timer-and-cron-burst-handling"},{level:3,title:"High zonal skew handling",slug:"high-zonal-skew-handling"},{level:3,title:"Tasklist Improvements",slug:"tasklist-improvements"},{level:3,title:"Shard Movement/Assignment Improvements",slug:"shard-movement-assignment-improvements"},{level:3,title:"Worker Heartbeats",slug:"worker-heartbeats"},{level:3,title:"Domain and Workflow Diagnostics",slug:"domain-and-workflow-diagnostics"},{level:3,title:"Self Serve Operations",slug:"self-serve-operations"},{level:3,title:"Cost Estimation",slug:"cost-estimation"},{level:3,title:"Domain Reports 
(continue)",slug:"domain-reports-continue"},{level:3,title:"Non-determinism Detection Improvements (continue)",slug:"non-determinism-detection-improvements-continue"},{level:3,title:"Domain Migrations (continue)",slug:"domain-migrations-continue"},{level:2,title:"Community",slug:"community-2"}],summary:"\n\nIf you haven’t heard about Cadence, this section is for you. In a short description, Cadence is a code-driven workflow orchestration engine. The definition itself may not tell enough, so it would help splitting it into three parts:\n\nWhat’s a workflow? (everyone has a different definition)\nWhy does it matter to be code-driven?\nBenefits of Cadence\n\nWhat is a Workflow?\n\nworkflow.png\n\nIn the simplest definition, it is “a multi-step execution”. Step here represents individual operations that are a little heavier than small in-process function calls. Although they are not limited to those: it could be a separate service call, processing a large dataset, map-reduce, thread sleep, scheduling next run, waiting for an external input, starting a sub workflow etc. It’s anything a user thinks as a single unit of logic in their code. Those steps often have dependencies among themselves. Some steps, including the very first step, might ...",id:"post",pid:"post"},{title:"Cadence non-derministic errors common question Q&A (part 1)",frontmatter:{title:"Cadence non-derministic errors common question Q&A (part 1)",date:"2024-03-10T00:00:00.000Z",author:"Chris Qin",authorlink:"https://www.linkedin.com/in/chrisqin0610/",description:"\n\nNO. This change will not trigger non-deterministic error.\n\nAn Activity is the smallest unit of execution for Cadence and what happens inside activities are not recorded as historical events and therefore will not be replayed. In short, this change is deterministic and it is fine to modify logic inside activities.\n\nDoes changing the workflow definition trigger non-determinstic errors?\n\nYES. This is a very typical non-deterministic error.\n\nWhen a new workflow code change is deployed, Cadence will find if it is compatible with\nCadence history. 
Changes to the workflow definition will fail the replay process of Cadence\nas it finds the new workflow definition incompatible with previous historical events.\n\nHere is a list of common workflow definition changes:\nChanging workflow parameter counts\nChanging workflow parameter types\nChanging workflow return types\n\nThe following changes are not categorized as definition changes and therefore will not\ntrigger non-deterministic e ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2024-02-15-cadence-non-deterministic-common-qa.html",relativePath:"_posts/2024-02-15-cadence-non-deterministic-common-qa.md",key:"v-39909852",path:"/blog/2024/03/10/cadence-non-deterministic-common-qa/",headers:[{level:3,title:"If I change code logic inside a Cadence activity (for example, my activity is calling database A but now I want it to call database B), will it trigger a non-deterministic error?",slug:"if-i-change-code-logic-inside-an-cadence-activity-for-example-my-activity-is-calling-database-a-but-now-i-want-it-to-call-database-b-will-it-trigger-an-non-deterministic-error"},{level:3,title:"Does changing the workflow definition trigger non-deterministic errors?",slug:"does-changing-the-workflow-definition-trigger-non-determinstic-errors"},{level:3,title:"Does changing activity definitions trigger non-deterministic errors?",slug:"does-changing-activity-definitions-trigger-non-determinstic-errors"},{level:3,title:"What changes inside workflows may potentially trigger non-deterministic errors?",slug:"what-changes-inside-workflows-may-potentially-trigger-non-deterministic-errors"},{level:3,title:"Are Cadence signals replayed? If the definition of a signal is changed, will it trigger non-deterministic errors?",slug:"are-cadence-signals-replayed-if-definition-of-signal-is-changed-will-it-trigger-non-deterministic-errors"},{level:3,title:"If I have a new business requirement and really need to change the definition of a workflow, what should I do?",slug:"if-i-have-new-business-requirement-and-really-need-to-change-the-definition-of-a-workflow-what-should-i-do"},{level:3,title:"Do changes to local activities' definitions trigger non-deterministic errors?",slug:"does-changes-to-local-activities-definition-trigger-non-deterministic-errors"}],summary:"\n\nNO. This change will not trigger a non-deterministic error.\n\nAn Activity is the smallest unit of execution for Cadence and what happens inside activities is not recorded as historical events and therefore will not be replayed. In short, this change is deterministic and it is fine to modify logic inside activities.\n\nDoes changing the workflow definition trigger non-deterministic errors?\n\nYES. This is a very typical non-deterministic error.\n\nWhen a new workflow code change is deployed, Cadence will check whether it is compatible with\nCadence history. 
Changes to the workflow definition will fail the replay process of Cadence\nas it finds the new workflow definition incompatible with previous historical events.\n\nHere is a list of common workflow definition changes:\nChanging workflow parameter counts\nChanging workflow parameter types\nChanging workflow return types\n\nThe following changes are not categorized as definition changes and therefore will not\ntrigger non-deterministic e ...",id:"post",pid:"post"},{title:"Minimizing blast radius in Cadence: Introducing Workflow ID-based Rate Limits",frontmatter:{title:"Minimizing blast radius in Cadence: Introducing Workflow ID-based Rate Limits",subtitle:"test",date:"2024-09-05T00:00:00.000Z",author:"Jakob Haahr Taankvist",authorlink:"https://www.linkedin.com/in/jakob-taankvist/",description:"At Uber, we run several big multitenant Cadence clusters with hundreds of domains in each. The clusters being multi-tenant means potential noisy neighbor effects between domains.\n\nAn essential aspect of avoiding this is managing how workflows interact with our infrastructure to prevent any single workflow from causing instability for the whole cluster. To this end, we are excited to introduce Workflow ID-based rate limits — a new feature designed to protect our clusters from problematic workflows and ensure stability across the board.\n\nWhy Workflow ID-based Rate Limits?\nWe already have rate limits for how many requests can be sent to a domain. However, since Cadence is sharded on the workflow ID, a user-provided input, an overused workflow with a particular ID might overwhelm a shard by making too many requests. There are two main ways this happens:\n\nA user starts, or signals the ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2024-09-05-workflow-specific-rate-limits.html",relativePath:"_posts/2024-09-05-workflow-specific-rate-limits.md",key:"v-15401a12",path:"/blog/2024/09/05/workflow-specific-rate-limits/",headers:[{level:2,title:"Why Workflow ID-based Rate Limits?",slug:"why-workflow-id-based-rate-limits"},{level:2,title:"Why not Shard Rate Limits?",slug:"why-not-shard-rate-limits"},{level:2,title:"How Does It Work?",slug:"how-does-it-work"},{level:3,title:"How do I Enable It?",slug:"how-do-i-enable-it"},{level:2,title:"Monitoring and Troubleshooting",slug:"monitoring-and-troubleshooting"},{level:2,title:"Conclusion",slug:"conclusion"}],summary:"At Uber, we run several big multitenant Cadence clusters with hundreds of domains in each. The clusters being multi-tenant means potential noisy neighbor effects between domains.\n\nAn essential aspect of avoiding this is managing how workflows interact with our infrastructure to prevent any single workflow from causing instability for the whole cluster. To this end, we are excited to introduce Workflow ID-based rate limits — a new feature designed to protect our clusters from problematic workflows and ensure stability across the board.\n\nWhy Workflow ID-based Rate Limits?\nWe already have rate limits for how many requests can be sent to a domain. However, since Cadence is sharded on the workflow ID, a user-provided input, an overused workflow with a particular ID might overwhelm a shard by making too many requests. 
There are two main ways this happens:\n\nA user starts, or signals the ...",id:"post",pid:"post"},{title:"Cadence Community Spotlight Update - March 2024",frontmatter:{title:"Cadence Community Spotlight Update - March 2024",date:"2023-03-11T00:00:00.000Z",author:"Kevin Corbett",authorlink:"https://github.com/kcorbett-netapp",description:"Welcome back to the latest in our regular Cadence community spotlight updates where we aim to deliver news from in and around the Cadence community!\nIt’s been a few months since our last update, so I have a bunch of exciting updates to share.\n\nLet’s get started!\n\nProposal for Cadence Plugin System\nCommunity member Mantas Sidlauskas drafted a thorough proposal around putting together a plugin system in Cadence. Aimed at enhancing the flexibility of integrating various components like storage, document search, and archival, this system encourages the use of external plugins, promoting innovation and reducing dependency complications. Your insights and feedback are crucial; learn more and contribute your thoughts at the link below:\n\nCadence Plugin System Proposal\n\nA huge thank you to Mantas for i ...",layout:"Post",permalink:"/blog/:year/:month/:day/:slug"},regularPath:"/_posts/2024-3-11-community-spotlight-update-march-2024.html",relativePath:"_posts/2024-3-11-community-spotlight-update-march-2024.md",key:"v-480f0a7a",path:"/blog/2023/03/11/community-spotlight-update-march-2024/",headers:[{level:2,title:"Proposal for Cadence Plugin System",slug:"proposal-for-cadence-plugin-system"},{level:2,title:"Admin API Permissions Rethinking",slug:"admin-api-permissions-rethinking"},{level:2,title:"New Java Samples for Cadence: Signal Workflow Interactions",slug:"new-java-samples-for-cadence-signal-workflow-interactions"},{level:2,title:"New GoLang client & Cadence Web Enhancements",slug:"new-golang-client-cadence-web-enhancements"},{level:2,title:"Release Updates: v1.2.6 & v1.2.7",slug:"release-updates-v1-2-6-v1-2-7"},{level:2,title:"Cadence in the News!",slug:"cadence-in-the-news"},{level:2,title:"Recent Events",slug:"recent-events"}],summary:"Welcome back to the latest in our regular Cadence community spotlight updates where we aim to deliver news from in and around the Cadence community!\nIt’s been a few months since our last update, so I have a bunch of exciting updates to share.\n\nLet’s get started!\n\nProposal for Cadence Plugin System\nCommunity member Mantas Sidlauskas drafted a thorough proposal around putting together a plugin system in Cadence. Aimed at enhancing the flexibility of integrating various components like storage, document search, and archival, this system encourages the use of external plugins, promoting innovation and reducing dependency complications. 
Your insights and feedback are crucial; learn more and contribute your thoughts at the link below:\n\nCadence Plugin System Proposal\n\nA huge thank you to Mantas for i ...",id:"post",pid:"post"},{frontmatter:{layout:"Layout",title:"Post"},regularPath:"/blog/",key:"v-424df898",path:"/blog/"},{frontmatter:{layout:"FrontmatterKey",title:"Tag"},regularPath:"/tag/",key:"v-b1564aac",path:"/tag/"},{frontmatter:{layout:"Layout",title:"Page 2 | Post"},regularPath:"/blog/page/2/",key:"v-c3507bb6",path:"/blog/page/2/"},{frontmatter:{layout:"Layout",title:"Page 3 | Post"},regularPath:"/blog/page/3/",key:"v-c3507b78",path:"/blog/page/3/"},{frontmatter:{layout:"Layout",title:"Page 4 | Post"},regularPath:"/blog/page/4/",key:"v-c3507b3a",path:"/blog/page/4/"},{frontmatter:{layout:"Layout",title:"Page 5 | Post"},regularPath:"/blog/page/5/",key:"v-c3507afc",path:"/blog/page/5/"},{frontmatter:{layout:"Layout",title:"Page 6 | Post"},regularPath:"/blog/page/6/",key:"v-c3507abe",path:"/blog/page/6/"},{frontmatter:{layout:"Layout",title:"Page 7 | Post"},regularPath:"/blog/page/7/",key:"v-c3507a80",path:"/blog/page/7/"}],themeConfig:{logo:"/img/logo-white.svg",nav:[{text:"Docs",items:[{text:"Get Started",link:"/docs/get-started/"},{text:"Use cases",link:"/docs/use-cases/"},{text:"Concepts",link:"/docs/concepts/"},{text:"Java client",link:"/docs/java-client/"},{text:"Go client",link:"/docs/go-client/"},{text:"Command line interface",link:"/docs/cli/"},{text:"Operation Guide",link:"/docs/operation-guide/"},{text:"Glossary",link:"/GLOSSARY"},{text:"About",link:"/docs/about/"}]},{text:"Blog",link:"/blog/"},{text:"Client",items:[{text:"Java Docs",link:"https://www.javadoc.io/doc/com.uber.cadence/cadence-client"},{text:"Java Client",link:"https://mvnrepository.com/artifact/com.uber.cadence/cadence-client"},{text:"Go Docs",link:"https://godoc.org/go.uber.org/cadence"},{text:"Go Client",link:"https://github.com/uber-go/cadence-client/releases/latest"}]},{text:"Community",items:[{text:"Github Discussion",link:"https://github.com/uber/cadence/discussions"},{text:"StackOverflow",link:"https://stackoverflow.com/questions/tagged/cadence-workflow"},{text:"Github Issues",link:"https://github.com/uber/cadence/issues"},{text:"Slack",link:"http://t.uber.com/cadence-slack"},{text:"Office Hours Calendar",link:"https://calendar.google.com/event?action=TEMPLATE&tmeid=MjFwOW01NWhlZ3MyZWJkcmo2djVsMjNkNzNfMjAyMjA3MjVUMTYwMDAwWiBlNnI0MGdwM2MycjAxMDU0aWQ3ZTk5ZGxhY0Bn&tmsrc=e6r40gp3c2r01054id7e99dlac%40group.calendar.google.com&scp=ALL"}]},{text:"GitHub",items:[{text:"Cadence Service and CLI",link:"https://github.com/uber/cadence"},{text:"Cadence Go Client",link:"https://github.com/uber-go/cadence-client"},{text:"Cadence Go Client Samples",link:"https://github.com/uber-common/cadence-samples"},{text:"Cadence Java Client",link:"https://github.com/uber-java/cadence-client"},{text:"Cadence Java Client Samples",link:"https://github.com/uber/cadence-java-samples"},{text:"Cadence Web UI",link:"https://github.com/uber/cadence-web"},{text:"Cadence Docs",link:"https://github.com/uber/cadence-docs"}]},{text:"Docker",items:[{text:"Cadence Service",link:"https://hub.docker.com/r/ubercadence/server/tags"},{text:"Cadence CLI",link:"https://hub.docker.com/r/ubercadence/cli/tags"},{text:"Cadence Web UI",link:"https://hub.docker.com/r/ubercadence/web/tags"}]}],directories:[{dirname:"_posts",id:"post",itemPermalink:"/blog/:year/:month/:day/:slug",path:"/blog/"}],feed:{canonical_base:"/",count:5,json:!0},footer:{copyright:[{text:"© 2024 Uber Technologies, 
Inc."}]},summaryLength:1e3,summary:!0,pwa:!1}};n(273);o.a.component("BaseListLayout",()=>Promise.all([n.e(0),n.e(2)]).then(n.bind(null,356))),o.a.component("BlogTag",()=>Promise.all([n.e(0),n.e(6)]).then(n.bind(null,357))),o.a.component("BlogTags",()=>Promise.all([n.e(0),n.e(7)]).then(n.bind(null,358))),o.a.component("NavLink",()=>Promise.all([n.e(0),n.e(5)]).then(n.bind(null,359)));n(274),n(33);var wt={tag:{}};class kt{constructor(e,t){this._metaMap=Object.assign({},e),Object.keys(this._metaMap).forEach(e=>{const{pageKeys:n}=this._metaMap[e];this._metaMap[e].pages=n.map(e=>Object(Ye.b)(t,e))})}get length(){return Object.keys(this._metaMap).length}get map(){return this._metaMap}get pages(){return this.list}get list(){return this.toArray()}toArray(){const e=[];return Object.keys(this._metaMap).forEach(t=>{const{pages:n,path:o}=this._metaMap[t];e.push({name:t,pages:n,path:o})}),e}getItemByName(e){return this._metaMap[e]}}var Ct=[{pid:"post",id:"post",filter:function(e,t,n){return e.pid===n&&e.id===t},sorter:{post:(e,t)=>{const o=n(119);return o(e.frontmatter.date)-o(t.frontmatter.date)>0?-1:1}}.post,pages:[{path:"/blog/",interval:[0,4]},{path:"/blog/page/2/",interval:[5,9]},{path:"/blog/page/3/",interval:[10,14]},{path:"/blog/page/4/",interval:[15,19]},{path:"/blog/page/5/",interval:[20,24]},{path:"/blog/page/6/",interval:[25,29]},{path:"/blog/page/7/",interval:[30,32]}],prevText:"Prev",nextText:"Next"}],_t=n(63);const xt=n.n(_t)()("plugin-blog:pagination");class St{constructor(e,t,n){xt("pagination",e);const{pages:o,prevText:r,nextText:i}=e,{path:a}=n;this._prevText=r,this._nextText=i;for(let e=0,t=o.length;ee.filter(t,e.id,e.pid)).sort(e.sorter)}setIndexPage(e){this._indexPage=e}get length(){return this._paginationPages.length}get pages(){const[e,t]=this._currentPage.interval;return this._matchedPages.slice(e,t+1)}get hasPrev(){return 0!==this.paginationIndex}get prevLink(){return this.hasPrev?this.paginationIndex-1==0&&this._indexPage?this._indexPage:this._paginationPages[this.paginationIndex-1].path:null}get hasNext(){return this.paginationIndex!==this.length-1}get nextLink(){return this.hasNext?this._paginationPages[this.paginationIndex+1].path:null}get prevText(){return this._prevText}get nextText(){return this._nextText}getSpecificPageLink(e){return this._paginationPages[e].path}}const Ot=new class{constructor(e){this.paginations=e}get pages(){return o.a.$vuepress.$get("siteData").pages}getPagination(e,t,n){xt("id",t),xt("pid",e);const o=this.paginations.filter(n=>n.id===t&&n.pid===e)[0];return new St(o,this.pages,n)}}(Ct);var jt={comment:{enabled:!1,service:""},email:{enabled:!1},feed:{rss:!0,atom:!1,json:!0}},Pt=[({Vue:e,options:t,router:n,siteData:o})=>{n.beforeResolve((e,t,n)=>{const o="undefined"!=typeof window?window:null;!o||"/"===t.path||e.path.startsWith("/blog")?n():o.location.href=e.fullPath})},{},({Vue:e})=>{e.mixin({computed:{$dataBlock(){return this.$options.__data__block__}}})},{},{},({Vue:e})=>{const t=Object.keys(wt).map(e=>{const t=wt[e],n="$"+e;return{[n](){const{pages:e}=this.$site;return new kt(t,e)},["$current"+(e.charAt(0).toUpperCase()+e.slice(1))](){const e=this.$route.meta.id;return this[n].getItemByName(e)}}}).reduce((e,t)=>(Object.assign(e,t),e),{});t.$frontmatterKey=function(){const e=this["$"+this.$route.meta.id];return e||null},e.mixin({computed:t})},({Vue:e})=>{e.mixin({computed:{$pagination(){return this.$route.meta.pid&&this.$route.meta.id?this.$getPagination(this.$route.meta.pid,this.$route.meta.id):null}},methods:{$getPagination(e,t){return 
t=t||e,Ot.getPagination(e,t,this.$route)}}})},({Vue:e})=>{const t={$service:()=>jt};e.mixin({computed:t})}],$t=[];class Tt extends class{constructor(){this.store=new o.a({data:{state:{}}})}$get(e){return this.store.state[e]}$set(e,t){o.a.set(this.store.state,e,t)}$emit(...e){this.store.$emit(...e)}$on(...e){this.store.$on(...e)}}{}Object.assign(Tt.prototype,{getPageAsyncComponent:Ye.e,getLayoutAsyncComponent:Ye.d,getAsyncComponent:Ye.c,getVueComponent:Ye.f});var At={install(e){const t=new Tt;e.$vuepress=t,e.prototype.$vuepress=t}};function Et(e,t){const n=t.toLowerCase();return e.options.routes.some(e=>e.path.toLowerCase()===n)}var It={props:{pageKey:String,slotKey:{type:String,default:"default"}},render(e){const t=this.pageKey||this.$parent.$page.key;return Object(Ye.h)("pageKey",t),o.a.component(t)||o.a.component(t,Object(Ye.e)(t)),o.a.component(t)?e(t):e("")}},Lt={functional:!0,props:{slotKey:String,required:!0},render:(e,{props:t,slots:n})=>e("div",{class:["content__"+t.slotKey]},n()[t.slotKey])},Mt={computed:{openInNewWindowTitle(){return this.$themeLocaleConfig.openNewWindowText||"(opens new window)"}}},Dt=(n(277),n(278),n(4)),Nt=Object(Dt.a)(Mt,(function(){var e=this._self._c;return e("span",[e("svg",{staticClass:"icon outbound",attrs:{xmlns:"http://www.w3.org/2000/svg","aria-hidden":"true",focusable:"false",x:"0px",y:"0px",viewBox:"0 0 100 100",width:"15",height:"15"}},[e("path",{attrs:{fill:"currentColor",d:"M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"}}),this._v(" "),e("polygon",{attrs:{fill:"currentColor",points:"45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"}})]),this._v(" "),e("span",{staticClass:"sr-only"},[this._v(this._s(this.openInNewWindowTitle))])])}),[],!1,null,null,null).exports,Ft={functional:!0,render(e,{parent:t,children:n}){if(t._isMounted)return n;t.$once("hook:mounted",()=>{t.$forceUpdate()})}};o.a.config.productionTip=!1,o.a.use(Ve),o.a.use(At),o.a.mixin(function(e,t,n=o.a){!function(e){e.locales&&Object.keys(e.locales).forEach(t=>{e.locales[t].path=t});Object.freeze(e)}(t),n.$vuepress.$set("siteData",t);const r=new(e(n.$vuepress.$get("siteData"))),i=Object.getOwnPropertyDescriptors(Object.getPrototypeOf(r)),a={};return Object.keys(i).reduce((e,t)=>(t.startsWith("$")&&(e[t]=i[t].get),e),a),{computed:a}}(e=>class{setPage(e){this.__page=e}get $site(){return e}get $themeConfig(){return this.$site.themeConfig}get $frontmatter(){return this.$page.frontmatter}get $localeConfig(){const{locales:e={}}=this.$site;let t,n;for(const o in e)"/"===o?n=e[o]:0===this.$page.path.indexOf(o)&&(t=e[o]);return t||n||{}}get $siteTitle(){return this.$localeConfig.title||this.$site.title||""}get $canonicalUrl(){const{canonicalUrl:e}=this.$page.frontmatter;return"string"==typeof e&&e}get $title(){const e=this.$page,{metaTitle:t}=this.$page.frontmatter;if("string"==typeof t)return t;const n=this.$siteTitle,o=e.frontmatter.home?null:e.frontmatter.title||e.title;return n?o?o+" | "+n:n:o||"VuePress"}get $description(){const e=function(e){if(e){const t=e.filter(e=>"description"===e.name)[0];if(t)return t.content}}(this.$page.frontmatter.meta);return e||(this.$page.frontmatter.description||this.$localeConfig.description||this.$site.description||"")}get $lang(){return this.$page.frontmatter.lang||this.$localeConfig.lang||"en-US"}get $localePath(){return this.$localeConfig.path||"/"}get $themeLocaleConfig(){return(this.$site.themeConfig.locales||{})[this.$localePath]||{}}get 
$page(){return this.__page?this.__page:function(e,t){for(let n=0;nn||(e.hash?!o.a.$vuepress.$get("disableScrollBehavior")&&{selector:decodeURIComponent(e.hash)}:{x:0,y:0})});!function(e){e.beforeEach((t,n,o)=>{if(Et(e,t.path))o();else if(/(\/|\.html)$/.test(t.path))if(/\/$/.test(t.path)){const n=t.path.replace(/\/$/,"")+".html";Et(e,n)?o(n):o()}else o();else{const n=t.path+"/",r=t.path+".html";Et(e,r)?o(r):Et(e,n)?o(n):o()}})}(n);const r={};try{await Promise.all(Pt.filter(e=>"function"==typeof e).map(t=>t({Vue:o.a,options:r,router:n,siteData:bt,isServer:e})))}catch(e){console.error(e)}return{app:new o.a(Object.assign(r,{router:n,render:e=>e("div",{attrs:{id:"app"}},[e("RouterView",{ref:"layout"}),e("div",{class:"global-ui"},$t.map(t=>e(t)))])})),router:n}}(!1).then(({app:e,router:t})=>{t.onReady(()=>{e.$mount("#app")})})}]); \ No newline at end of file diff --git a/assets/js/app.6b6ea2ea.js b/assets/js/app.6b6ea2ea.js deleted file mode 100644 index ebd3d7dc1..000000000 --- a/assets/js/app.6b6ea2ea.js +++ /dev/null @@ -1,16 +0,0 @@ -(window.webpackJsonp=window.webpackJsonp||[]).push([[0],[]]);!function(e){function t(t){for(var o,r,s=t[0],c=t[1],l=t[2],u=0,h=[];u=t||n<0||w&&e-l>=a}function k(){var e=p();if(b(e))return x(e);s=setTimeout(k,function(e){var n=t-(e-c);return w?h(n,a-(e-l)):n}(e))}function x(e){return s=void 0,g&&o?y(e):(o=i=void 0,r)}function _(){var e=p(),n=b(e);if(o=arguments,i=this,c=e,n){if(void 0===s)return v(c);if(w)return s=setTimeout(k,t),y(c)}return void 0===s&&(s=setTimeout(k,t)),r}return t=f(t)||0,m(n)&&(d=!!n.leading,a=(w="maxWait"in n)?u(f(n.maxWait)||0,t):a,g="trailing"in n?!!n.trailing:g),_.cancel=function(){void 0!==s&&clearTimeout(s),l=0,o=c=i=s=void 0},_.flush=function(){return void 0===s?r:x(p())},_}},function(e,t,n){var o,i; -/* NProgress, (c) 2013, 2014 Rico Sta. Cruz - http://ricostacruz.com/nprogress - * @license MIT */void 0===(i="function"==typeof(o=function(){var e,t,n={version:"0.2.0"},o=n.settings={minimum:.08,easing:"ease",positionUsing:"",speed:200,trickle:!0,trickleRate:.02,trickleSpeed:800,showSpinner:!0,barSelector:'[role="bar"]',spinnerSelector:'[role="spinner"]',parent:"body",template:'
'};function i(e,t,n){return en?n:e}function a(e){return 100*(-1+e)}n.configure=function(e){var t,n;for(t in e)void 0!==(n=e[t])&&e.hasOwnProperty(t)&&(o[t]=n);return this},n.status=null,n.set=function(e){var t=n.isStarted();e=i(e,o.minimum,1),n.status=1===e?null:e;var c=n.render(!t),l=c.querySelector(o.barSelector),d=o.speed,u=o.easing;return c.offsetWidth,r((function(t){""===o.positionUsing&&(o.positionUsing=n.getPositioningCSS()),s(l,function(e,t,n){var i;return(i="translate3d"===o.positionUsing?{transform:"translate3d("+a(e)+"%,0,0)"}:"translate"===o.positionUsing?{transform:"translate("+a(e)+"%,0)"}:{"margin-left":a(e)+"%"}).transition="all "+t+"ms "+n,i}(e,d,u)),1===e?(s(c,{transition:"none",opacity:1}),c.offsetWidth,setTimeout((function(){s(c,{transition:"all "+d+"ms linear",opacity:0}),setTimeout((function(){n.remove(),t()}),d)}),d)):setTimeout(t,d)})),this},n.isStarted=function(){return"number"==typeof n.status},n.start=function(){n.status||n.set(0);var e=function(){setTimeout((function(){n.status&&(n.trickle(),e())}),o.trickleSpeed)};return o.trickle&&e(),this},n.done=function(e){return e||n.status?n.inc(.3+.5*Math.random()).set(1):this},n.inc=function(e){var t=n.status;return t?("number"!=typeof e&&(e=(1-t)*i(Math.random()*t,.1,.95)),t=i(t+e,0,.994),n.set(t)):n.start()},n.trickle=function(){return n.inc(Math.random()*o.trickleRate)},e=0,t=0,n.promise=function(o){return o&&"resolved"!==o.state()?(0===t&&n.start(),e++,t++,o.always((function(){0==--t?(e=0,n.done()):n.set((e-t)/e)})),this):this},n.render=function(e){if(n.isRendered())return document.getElementById("nprogress");l(document.documentElement,"nprogress-busy");var t=document.createElement("div");t.id="nprogress",t.innerHTML=o.template;var i,r=t.querySelector(o.barSelector),c=e?"-100":a(n.status||0),d=document.querySelector(o.parent);return s(r,{transition:"all 0 linear",transform:"translate3d("+c+"%,0,0)"}),o.showSpinner||(i=t.querySelector(o.spinnerSelector))&&h(i),d!=document.body&&l(d,"nprogress-custom-parent"),d.appendChild(t),t},n.remove=function(){d(document.documentElement,"nprogress-busy"),d(document.querySelector(o.parent),"nprogress-custom-parent");var e=document.getElementById("nprogress");e&&h(e)},n.isRendered=function(){return!!document.getElementById("nprogress")},n.getPositioningCSS=function(){var e=document.body.style,t="WebkitTransform"in e?"Webkit":"MozTransform"in e?"Moz":"msTransform"in e?"ms":"OTransform"in e?"O":"";return t+"Perspective"in e?"translate3d":t+"Transform"in e?"translate":"margin"};var r=function(){var e=[];function t(){var n=e.shift();n&&n(t)}return function(n){e.push(n),1==e.length&&t()}}(),s=function(){var e=["Webkit","O","Moz","ms"],t={};function n(n){return n=n.replace(/^-ms-/,"ms-").replace(/-([\da-z])/gi,(function(e,t){return t.toUpperCase()})),t[n]||(t[n]=function(t){var n=document.body.style;if(t in n)return t;for(var o,i=e.length,a=t.charAt(0).toUpperCase()+t.slice(1);i--;)if((o=e[i]+a)in n)return o;return t}(n))}function o(e,t,o){t=n(t),e.style[t]=o}return function(e,t){var n,i,a=arguments;if(2==a.length)for(n in t)void 0!==(i=t[n])&&t.hasOwnProperty(n)&&o(e,n,i);else o(e,a[1],a[2])}}();function c(e,t){return("string"==typeof e?e:u(e)).indexOf(" "+t+" ")>=0}function l(e,t){var n=u(e),o=n+t;c(n,t)||(e.className=o.substring(1))}function d(e,t){var n,o=u(e);c(e,t)&&(n=o.replace(" "+t+" "," "),e.className=n.substring(1,n.length-1))}function u(e){return(" "+(e.className||"")+" ").replace(/\s+/gi," ")}function h(e){e&&e.parentNode&&e.parentNode.removeChild(e)}return 
n})?o.call(t,n,t,e):o)||(e.exports=i)},function(e,t,n){"use strict";var o=n(8),i=String,a=TypeError;e.exports=function(e){if(o(e))return e;throw new a(i(e)+" is not an object")}},function(e,t,n){"use strict";var o=n(1),i=n(50).f,a=n(13),r=n(95),s=n(36),c=n(63),l=n(124);e.exports=function(e,t){var n,d,u,h,p,m=e.target,f=e.global,w=e.stat;if(n=f?o:w?o[m]||s(m,{}):o[m]&&o[m].prototype)for(d in t){if(h=t[d],u=e.dontCallGetSet?(p=i(n,d))&&p.value:n[d],!l(f?d:m+(w?".":"#")+d,e.forced)&&void 0!==u){if(typeof h==typeof u)continue;c(h,u)}(e.sham||u&&u.sham)&&a(h,"sham",!0),r(n,d,h,e)}}},function(e,t,n){"use strict";var o=n(4);e.exports=!o((function(){var e=function(){}.bind();return"function"!=typeof e||e.hasOwnProperty("prototype")}))},function(e,t,n){"use strict";var o=n(47),i=n(51);e.exports=function(e){return o(i(e))}},function(e,t,n){"use strict";var o=n(1),i=n(2),a=function(e){return i(e)?e:void 0};e.exports=function(e,t){return arguments.length<2?a(o[e]):o[e]&&o[e][t]}},function(e,t,n){"use strict";var o=n(2),i=n(111),a=TypeError;e.exports=function(e){if(o(e))return e;throw new a(i(e)+" is not a function")}},function(e,t,n){"use strict";var o=n(1),i=n(59),a=n(9),r=n(61),s=n(57),c=n(56),l=o.Symbol,d=i("wks"),u=c?l.for||l:l&&l.withoutSetter||r;e.exports=function(e){return a(d,e)||(d[e]=s&&a(l,e)?l[e]:u("Symbol."+e)),d[e]}},function(e,t,n){"use strict";var o=n(51),i=Object;e.exports=function(e){return i(o(e))}},function(e,t,n){"use strict";var o=n(122);e.exports=function(e){return o(e.length)}},function(e,t,n){"use strict";var o=n(26),i=Function.prototype.call;e.exports=o?i.bind(i):function(){return i.apply(i,arguments)}},function(e,t,n){"use strict";e.exports=function(e,t){return{enumerable:!(1&e),configurable:!(2&e),writable:!(4&e),value:t}}},function(e,t,n){"use strict";var o=n(60),i=n(1),a=n(36),r=e.exports=i["__core-js_shared__"]||a("__core-js_shared__",{});(r.versions||(r.versions=[])).push({version:"3.36.0",mode:o?"pure":"global",copyright:"© 2014-2024 Denis Pushkarev (zloirock.ru)",license:"https://github.com/zloirock/core-js/blob/v3.36.0/LICENSE",source:"https://github.com/zloirock/core-js"})},function(e,t,n){"use strict";var o=n(1),i=Object.defineProperty;e.exports=function(e,t){try{i(o,e,{value:t,configurable:!0,writable:!0})}catch(n){o[e]=t}return t}},function(e,t,n){var o=n(147),i=n(11),a=Object.prototype,r=a.hasOwnProperty,s=a.propertyIsEnumerable,c=o(function(){return arguments}())?o:function(e){return i(e)&&r.call(e,"callee")&&!s.call(e,"callee")};e.exports=c},function(e,t,n){var o=n(10)(n(7),"Map");e.exports=o},function(e,t){e.exports=function(e){var t=typeof e;return null!=e&&("object"==t||"function"==t)}},function(e,t,n){var o=n(167),i=n(174),a=n(176),r=n(177),s=n(178);function c(e){var t=-1,n=null==e?0:e.length;for(this.clear();++t-1&&e%1==0&&e<=9007199254740991}},function(e,t,n){var o=n(6),i=n(44),a=/\.|\[(?:[^[\]]*|(["'])(?:(?!\1)[^\\]|\\.)*?\1)\]/,r=/^\w*$/;e.exports=function(e,t){if(o(e))return!1;var n=typeof e;return!("number"!=n&&"symbol"!=n&&"boolean"!=n&&null!=e&&!i(e))||(r.test(e)||!a.test(e)||null!=t&&e in Object(t))}},function(e,t,n){var o=n(12),i=n(11);e.exports=function(e){return"symbol"==typeof e||i(e)&&"[object Symbol]"==o(e)}},function(e,t){e.exports=function(e){return e}},function(e,t){function n(e,t){for(var n=0,o=e.length-1;o>=0;o--){var i=e[o];"."===i?e.splice(o,1):".."===i?(e.splice(o,1),n++):n&&(e.splice(o,1),n--)}if(t)for(;n--;n)e.unshift("..");return e}function o(e,t){if(e.filter)return e.filter(t);for(var n=[],o=0;o=-1&&!t;i--){var 
a=i>=0?arguments[i]:process.cwd();if("string"!=typeof a)throw new TypeError("Arguments to path.resolve must be strings");a&&(e=a+"/"+e,t="/"===a.charAt(0))}return(t?"/":"")+(e=n(o(e.split("/"),(function(e){return!!e})),!t).join("/"))||"."},t.normalize=function(e){var a=t.isAbsolute(e),r="/"===i(e,-1);return(e=n(o(e.split("/"),(function(e){return!!e})),!a).join("/"))||a||(e="."),e&&r&&(e+="/"),(a?"/":"")+e},t.isAbsolute=function(e){return"/"===e.charAt(0)},t.join=function(){var e=Array.prototype.slice.call(arguments,0);return t.normalize(o(e,(function(e,t){if("string"!=typeof e)throw new TypeError("Arguments to path.join must be strings");return e})).join("/"))},t.relative=function(e,n){function o(e){for(var t=0;t=0&&""===e[n];n--);return t>n?[]:e.slice(t,n-t+1)}e=t.resolve(e).substr(1),n=t.resolve(n).substr(1);for(var i=o(e.split("/")),a=o(n.split("/")),r=Math.min(i.length,a.length),s=r,c=0;c=1;--a)if(47===(t=e.charCodeAt(a))){if(!i){o=a;break}}else i=!1;return-1===o?n?"/":".":n&&1===o?"/":e.slice(0,o)},t.basename=function(e,t){var n=function(e){"string"!=typeof e&&(e+="");var t,n=0,o=-1,i=!0;for(t=e.length-1;t>=0;--t)if(47===e.charCodeAt(t)){if(!i){n=t+1;break}}else-1===o&&(i=!1,o=t+1);return-1===o?"":e.slice(n,o)}(e);return t&&n.substr(-1*t.length)===t&&(n=n.substr(0,n.length-t.length)),n},t.extname=function(e){"string"!=typeof e&&(e+="");for(var t=-1,n=0,o=-1,i=!0,a=0,r=e.length-1;r>=0;--r){var s=e.charCodeAt(r);if(47!==s)-1===o&&(i=!1,o=r+1),46===s?-1===t?t=r:1!==a&&(a=1):-1!==t&&(a=-1);else if(!i){n=r+1;break}}return-1===t||-1===o||0===a||1===a&&t===o-1&&t===n+1?"":e.slice(t,o)};var i="b"==="ab".substr(-1)?function(e,t,n){return e.substr(t,n)}:function(e,t,n){return t<0&&(t=e.length+t),e.substr(t,n)}},function(e,t,n){"use strict";var o=n(3),i=n(4),a=n(16),r=Object,s=o("".split);e.exports=i((function(){return!r("z").propertyIsEnumerable(0)}))?function(e){return"String"===a(e)?s(e,""):r(e)}:r},function(e,t,n){"use strict";e.exports={}},function(e,t){e.exports=function(e){return e.webpackPolyfill||(e.deprecate=function(){},e.paths=[],e.children||(e.children=[]),Object.defineProperty(e,"loaded",{enumerable:!0,get:function(){return e.l}}),Object.defineProperty(e,"id",{enumerable:!0,get:function(){return e.i}}),e.webpackPolyfill=1),e}},function(e,t,n){"use strict";var o=n(5),i=n(33),a=n(107),r=n(34),s=n(27),c=n(53),l=n(9),d=n(62),u=Object.getOwnPropertyDescriptor;t.f=o?u:function(e,t){if(e=s(e),t=c(t),d)try{return u(e,t)}catch(e){}if(l(e,t))return r(!i(a.f,e,t),e[t])}},function(e,t,n){"use strict";var o=n(52),i=TypeError;e.exports=function(e){if(o(e))throw new i("Can't call method on "+e);return e}},function(e,t,n){"use strict";e.exports=function(e){return null==e}},function(e,t,n){"use strict";var o=n(108),i=n(54);e.exports=function(e){var t=o(e,"string");return i(t)?t:t+""}},function(e,t,n){"use strict";var o=n(28),i=n(2),a=n(55),r=n(56),s=Object;e.exports=r?function(e){return"symbol"==typeof e}:function(e){var t=o("Symbol");return i(t)&&a(t.prototype,s(e))}},function(e,t,n){"use strict";var o=n(3);e.exports=o({}.isPrototypeOf)},function(e,t,n){"use strict";var o=n(57);e.exports=o&&!Symbol.sham&&"symbol"==typeof Symbol.iterator},function(e,t,n){"use strict";var o=n(58),i=n(4),a=n(1).String;e.exports=!!Object.getOwnPropertySymbols&&!i((function(){var e=Symbol("symbol detection");return!a(e)||!(Object(e)instanceof Symbol)||!Symbol.sham&&o&&o<41}))},function(e,t,n){"use strict";var 
o,i,a=n(1),r=n(109),s=a.process,c=a.Deno,l=s&&s.versions||c&&c.version,d=l&&l.v8;d&&(i=(o=d.split("."))[0]>0&&o[0]<4?1:+(o[0]+o[1])),!i&&r&&(!(o=r.match(/Edge\/(\d+)/))||o[1]>=74)&&(o=r.match(/Chrome\/(\d+)/))&&(i=+o[1]),e.exports=i},function(e,t,n){"use strict";var o=n(35);e.exports=function(e,t){return o[e]||(o[e]=t||{})}},function(e,t,n){"use strict";e.exports=!1},function(e,t,n){"use strict";var o=n(3),i=0,a=Math.random(),r=o(1..toString);e.exports=function(e){return"Symbol("+(void 0===e?"":e)+")_"+r(++i+a,36)}},function(e,t,n){"use strict";var o=n(5),i=n(4),a=n(100);e.exports=!o&&!i((function(){return 7!==Object.defineProperty(a("div"),"a",{get:function(){return 7}}).a}))},function(e,t,n){"use strict";var o=n(9),i=n(117),a=n(50),r=n(15);e.exports=function(e,t,n){for(var s=i(t),c=r.f,l=a.f,d=0;dd))return!1;var h=c.get(e),p=c.get(t);if(h&&p)return h==t&&p==e;var m=-1,f=!0,w=2&n?new o:void 0;for(c.set(e,t),c.set(t,e);++m-1&&e%1==0&&e]/;e.exports=function(e){var t,n=""+e,i=o.exec(n);if(!i)return n;var a="",r=0,s=0;for(r=i.index;rl;)i(o,n=t[l++])&&(~r(d,n)||c(d,n));return d}},function(e,t,n){"use strict";var o=n(25),i=n(1),a=n(128),r=n(129),s=i.WebAssembly,c=7!==new Error("e",{cause:7}).cause,l=function(e,t){var n={};n[e]=r(e,t,c),o({global:!0,constructor:!0,arity:1,forced:c},n)},d=function(e,t){if(s&&s[e]){var n={};n[e]=r("WebAssembly."+e,t,c),o({target:"WebAssembly",stat:!0,constructor:!0,arity:1,forced:c},n)}};l("Error",(function(e){return function(t){return a(e,this,arguments)}})),l("EvalError",(function(e){return function(t){return a(e,this,arguments)}})),l("RangeError",(function(e){return function(t){return a(e,this,arguments)}})),l("ReferenceError",(function(e){return function(t){return a(e,this,arguments)}})),l("SyntaxError",(function(e){return function(t){return a(e,this,arguments)}})),l("TypeError",(function(e){return function(t){return a(e,this,arguments)}})),l("URIError",(function(e){return function(t){return a(e,this,arguments)}})),d("CompileError",(function(e){return function(t){return a(e,this,arguments)}})),d("LinkError",(function(e){return function(t){return a(e,this,arguments)}})),d("RuntimeError",(function(e){return function(t){return a(e,this,arguments)}}))},function(e,t,n){e.exports=n(249)},function(e,t,n){"use strict";var o=n(25),i=n(125).left,a=n(126),r=n(58);o({target:"Array",proto:!0,forced:!n(127)&&r>79&&r<83||!a("reduce")},{reduce:function(e){var t=arguments.length;return i(this,e,t,t>1?arguments[1]:void 0)}})},function(e,t,n){"use strict";var o={}.propertyIsEnumerable,i=Object.getOwnPropertyDescriptor,a=i&&!o.call({1:2},1);t.f=a?function(e){var t=i(this,e);return!!t&&t.enumerable}:o},function(e,t,n){"use strict";var o=n(33),i=n(8),a=n(54),r=n(110),s=n(112),c=n(30),l=TypeError,d=c("toPrimitive");e.exports=function(e,t){if(!i(e)||a(e))return e;var n,c=r(e,d);if(c){if(void 0===t&&(t="default"),n=o(c,e,t),!i(n)||a(n))return n;throw new l("Can't convert object to primitive value")}return void 0===t&&(t="number"),s(e,t)}},function(e,t,n){"use strict";e.exports="undefined"!=typeof navigator&&String(navigator.userAgent)||""},function(e,t,n){"use strict";var o=n(29),i=n(52);e.exports=function(e,t){var n=e[t];return i(n)?void 0:o(n)}},function(e,t,n){"use strict";var o=String;e.exports=function(e){try{return o(e)}catch(e){return"Object"}}},function(e,t,n){"use strict";var o=n(33),i=n(2),a=n(8),r=TypeError;e.exports=function(e,t){var n,s;if("string"===t&&i(n=e.toString)&&!a(s=o(n,e)))return s;if(i(n=e.valueOf)&&!a(s=o(n,e)))return 
s;if("string"!==t&&i(n=e.toString)&&!a(s=o(n,e)))return s;throw new r("Can't convert object to primitive value")}},function(e,t,n){"use strict";var o=n(5),i=n(9),a=Function.prototype,r=o&&Object.getOwnPropertyDescriptor,s=i(a,"name"),c=s&&"something"===function(){}.name,l=s&&(!o||o&&r(a,"name").configurable);e.exports={EXISTS:s,PROPER:c,CONFIGURABLE:l}},function(e,t,n){"use strict";var o=n(3),i=n(2),a=n(35),r=o(Function.toString);i(a.inspectSource)||(a.inspectSource=function(e){return r(e)}),e.exports=a.inspectSource},function(e,t,n){"use strict";var o,i,a,r=n(116),s=n(1),c=n(8),l=n(13),d=n(9),u=n(35),h=n(102),p=n(48),m=s.TypeError,f=s.WeakMap;if(r||u.state){var w=u.state||(u.state=new f);w.get=w.get,w.has=w.has,w.set=w.set,o=function(e,t){if(w.has(e))throw new m("Object already initialized");return t.facade=e,w.set(e,t),t},i=function(e){return w.get(e)||{}},a=function(e){return w.has(e)}}else{var g=h("state");p[g]=!0,o=function(e,t){if(d(e,g))throw new m("Object already initialized");return t.facade=e,l(e,g,t),t},i=function(e){return d(e,g)?e[g]:{}},a=function(e){return d(e,g)}}e.exports={set:o,get:i,has:a,enforce:function(e){return a(e)?i(e):o(e,{})},getterFor:function(e){return function(t){var n;if(!c(t)||(n=i(t)).type!==e)throw new m("Incompatible receiver, "+e+" required");return n}}}},function(e,t,n){"use strict";var o=n(1),i=n(2),a=o.WeakMap;e.exports=i(a)&&/native code/.test(String(a))},function(e,t,n){"use strict";var o=n(28),i=n(3),a=n(118),r=n(123),s=n(24),c=i([].concat);e.exports=o("Reflect","ownKeys")||function(e){var t=a.f(s(e)),n=r.f;return n?c(t,n(e)):t}},function(e,t,n){"use strict";var o=n(103),i=n(99).concat("length","prototype");t.f=Object.getOwnPropertyNames||function(e){return o(e,i)}},function(e,t,n){"use strict";var o=n(27),i=n(120),a=n(32),r=function(e){return function(t,n,r){var s=o(t),c=a(s);if(0===c)return!e&&-1;var l,d=i(r,c);if(e&&n!=n){for(;c>d;)if((l=s[d++])!=l)return!0}else for(;c>d;d++)if((e||d in s)&&s[d]===n)return e||d||0;return!e&&-1}};e.exports={includes:r(!0),indexOf:r(!1)}},function(e,t,n){"use strict";var o=n(64),i=Math.max,a=Math.min;e.exports=function(e,t){var n=o(e);return n<0?i(n+t,0):a(n,t)}},function(e,t,n){"use strict";var o=Math.ceil,i=Math.floor;e.exports=Math.trunc||function(e){var t=+e;return(t>0?i:o)(t)}},function(e,t,n){"use strict";var o=n(64),i=Math.min;e.exports=function(e){var t=o(e);return t>0?i(t,9007199254740991):0}},function(e,t,n){"use strict";t.f=Object.getOwnPropertySymbols},function(e,t,n){"use strict";var o=n(4),i=n(2),a=/#|\.prototype\./,r=function(e,t){var n=c[s(e)];return n===d||n!==l&&(i(t)?o(t):!!t)},s=r.normalize=function(e){return String(e).replace(a,".").toLowerCase()},c=r.data={},l=r.NATIVE="N",d=r.POLYFILL="P";e.exports=r},function(e,t,n){"use strict";var o=n(29),i=n(31),a=n(47),r=n(32),s=TypeError,c="Reduce of empty array with no initial value",l=function(e){return function(t,n,l,d){var u=i(t),h=a(u),p=r(u);if(o(n),0===p&&l<2)throw new s(c);var m=e?p-1:0,f=e?-1:1;if(l<2)for(;;){if(m in h){d=h[m],m+=f;break}if(m+=f,e?m<0:p<=m)throw new s(c)}for(;e?m>=0:p>m;m+=f)m in h&&(d=n(d,h[m],m,u));return d}};e.exports={left:l(!1),right:l(!0)}},function(e,t,n){"use strict";var o=n(4);e.exports=function(e,t){var n=[][e];return!!n&&o((function(){n.call(null,t||function(){return 1},1)}))}},function(e,t,n){"use strict";var o=n(1),i=n(16);e.exports="process"===i(o.process)},function(e,t,n){"use strict";var o=n(26),i=Function.prototype,a=i.apply,r=i.call;e.exports="object"==typeof 
Reflect&&Reflect.apply||(o?r.bind(a):function(){return r.apply(a,arguments)})},function(e,t,n){"use strict";var o=n(28),i=n(9),a=n(13),r=n(55),s=n(65),c=n(63),l=n(133),d=n(134),u=n(135),h=n(138),p=n(139),m=n(5),f=n(60);e.exports=function(e,t,n,w){var g=w?2:1,y=e.split("."),v=y[y.length-1],b=o.apply(null,y);if(b){var k=b.prototype;if(!f&&i(k,"cause")&&delete k.cause,!n)return b;var x=o("Error"),_=t((function(e,t){var n=u(w?t:e,void 0),o=w?new b(e):new b;return void 0!==n&&a(o,"message",n),p(o,_,o.stack,2),this&&r(k,this)&&d(o,this,_),arguments.length>g&&h(o,arguments[g]),o}));if(_.prototype=k,"Error"!==v?s?s(_,x):c(_,x,{name:!0}):m&&"stackTraceLimit"in b&&(l(_,b,"stackTraceLimit"),l(_,b,"prepareStackTrace")),c(_,b),!f)try{k.name!==v&&a(k,"name",v),k.constructor=_}catch(e){}return _}}},function(e,t,n){"use strict";var o=n(3),i=n(29);e.exports=function(e,t,n){try{return o(i(Object.getOwnPropertyDescriptor(e,t)[n]))}catch(e){}}},function(e,t,n){"use strict";var o=n(132),i=String,a=TypeError;e.exports=function(e){if(o(e))return e;throw new a("Can't set "+i(e)+" as a prototype")}},function(e,t,n){"use strict";var o=n(8);e.exports=function(e){return o(e)||null===e}},function(e,t,n){"use strict";var o=n(15).f;e.exports=function(e,t,n){n in e||o(e,n,{configurable:!0,get:function(){return t[n]},set:function(e){t[n]=e}})}},function(e,t,n){"use strict";var o=n(2),i=n(8),a=n(65);e.exports=function(e,t,n){var r,s;return a&&o(r=t.constructor)&&r!==n&&i(s=r.prototype)&&s!==n.prototype&&a(e,s),e}},function(e,t,n){"use strict";var o=n(96);e.exports=function(e,t){return void 0===e?arguments.length<2?"":t:o(e)}},function(e,t,n){"use strict";var o=n(137),i=n(2),a=n(16),r=n(30)("toStringTag"),s=Object,c="Arguments"===a(function(){return arguments}());e.exports=o?a:function(e){var t,n,o;return void 0===e?"Undefined":null===e?"Null":"string"==typeof(n=function(e,t){try{return e[t]}catch(e){}}(t=s(e),r))?n:c?a(t):"Object"===(o=a(t))&&i(t.callee)?"Arguments":o}},function(e,t,n){"use strict";var o={};o[n(30)("toStringTag")]="z",e.exports="[object z]"===String(o)},function(e,t,n){"use strict";var o=n(8),i=n(13);e.exports=function(e,t){o(t)&&"cause"in t&&i(e,"cause",t.cause)}},function(e,t,n){"use strict";var o=n(13),i=n(140),a=n(141),r=Error.captureStackTrace;e.exports=function(e,t,n,s){a&&(r?r(e,t):o(e,"stack",i(n,s)))}},function(e,t,n){"use strict";var o=n(3),i=Error,a=o("".replace),r=String(new i("zxcasd").stack),s=/\n\s*at [^:]*:[^\n]*/,c=s.test(r);e.exports=function(e,t){if(c&&"string"==typeof e&&!i.prepareStackTrace)for(;t--;)e=a(e,s,"");return e}},function(e,t,n){"use strict";var o=n(4),i=n(34);e.exports=!o((function(){var e=new Error("a");return!("stack"in e)||(Object.defineProperty(e,"stack",i(1,7)),7!==e.stack)}))},function(e,t,n){"use strict";var o=n(5),i=n(143),a=TypeError,r=Object.getOwnPropertyDescriptor,s=o&&!function(){if(void 0!==this)return!0;try{Object.defineProperty([],"length",{writable:!1}).length=1}catch(e){return e instanceof TypeError}}();e.exports=s?function(e,t){if(i(e)&&!r(e,"length").writable)throw new a("Cannot set read only .length");return e.length=t}:function(e,t){return e.length=t}},function(e,t,n){"use strict";var o=n(16);e.exports=Array.isArray||function(e){return"Array"===o(e)}},function(e,t,n){"use strict";var o=TypeError;e.exports=function(e){if(e>9007199254740991)throw o("Maximum allowed index exceeded");return e}},function(e,t,n){var o=n(66),i=n(146);e.exports=function e(t,n,a,r,s){var 
[Minified VuePress client bundle (machine-generated webpack output, omitted here): lodash utility modules; the Vue runtime (version "2.7.16"); vue-router (version "3.6.5"); the theme's async component and page-chunk loaders (Navbar, Sidebar, Page, CodeBlock, etc.); and the generated route table for the Cadence docs, including redirects from legacy numbered paths such as /docs/01-get-started/… and /docs/02-use-cases/… to the current /docs/get-started/, /docs/use-cases/, /docs/concepts/, /docs/java-client/, and /docs/go-client/ URLs.]
73c").then(n)}},{path:"/docs/go-client/execute-activity/index.html",redirect:"/docs/go-client/execute-activity/"},{path:"/docs/05-go-client/04-execute-activity.html",redirect:"/docs/go-client/execute-activity/"},{name:"v-43760982",path:"/docs/go-client/activities/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-43760982").then(n)}},{path:"/docs/go-client/activities/index.html",redirect:"/docs/go-client/activities/"},{path:"/docs/05-go-client/03-activities.html",redirect:"/docs/go-client/activities/"},{name:"v-0327ca12",path:"/docs/go-client/child-workflows/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-0327ca12").then(n)}},{path:"/docs/go-client/child-workflows/index.html",redirect:"/docs/go-client/child-workflows/"},{path:"/docs/05-go-client/05-child-workflows.html",redirect:"/docs/go-client/child-workflows/"},{name:"v-5fac5e6c",path:"/docs/go-client/retries/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-5fac5e6c").then(n)}},{path:"/docs/go-client/retries/index.html",redirect:"/docs/go-client/retries/"},{path:"/docs/05-go-client/06-retries.html",redirect:"/docs/go-client/retries/"},{name:"v-595589a2",path:"/docs/go-client/error-handling/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-595589a2").then(n)}},{path:"/docs/go-client/error-handling/index.html",redirect:"/docs/go-client/error-handling/"},{path:"/docs/05-go-client/07-error-handling.html",redirect:"/docs/go-client/error-handling/"},{name:"v-67f3ae7c",path:"/docs/go-client/signals/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-67f3ae7c").then(n)}},{path:"/docs/go-client/signals/index.html",redirect:"/docs/go-client/signals/"},{path:"/docs/05-go-client/08-signals.html",redirect:"/docs/go-client/signals/"},{name:"v-d0383dd4",path:"/docs/go-client/side-effect/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-d0383dd4").then(n)}},{path:"/docs/go-client/side-effect/index.html",redirect:"/docs/go-client/side-effect/"},{path:"/docs/05-go-client/10-side-effect.html",redirect:"/docs/go-client/side-effect/"},{name:"v-a1460e54",path:"/docs/go-client/queries/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-a1460e54").then(n)}},{path:"/docs/go-client/queries/index.html",redirect:"/docs/go-client/queries/"},{path:"/docs/05-go-client/11-queries.html",redirect:"/docs/go-client/queries/"},{name:"v-7732347a",path:"/docs/go-client/continue-as-new/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-7732347a").then(n)}},{path:"/docs/go-client/continue-as-new/index.html",redirect:"/docs/go-client/continue-as-new/"},{path:"/docs/05-go-client/09-continue-as-new.html",redirect:"/docs/go-client/continue-as-new/"},{name:"v-0a1dd2ec",path:"/docs/go-client/activity-async-completion/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-0a1dd2ec").then(n)}},{path:"/docs/go-client/activity-async-completion/index.html",redirect:"/docs/go-client/activity-async-completion/"},{path:"/docs/05-go-client/12-activity-async-completion.html",redirect:"/docs/go-client/activity-async-completion/"},{name:"v-c8a8f07c",path:"/docs/go-client/workflow-testing/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-c8a8f07c").then(n)}},{path:"/docs/go-client/workflow-testing/index.html",redirect:"/docs/go-client/workflow-testing/"},{path:"/docs/05-go-client/13-workflow-testing.html",redirect:"/docs/go-client/workflow-testing/"},{name:"v-0b9844ac",path:"/docs/go-client/workflow-versioning/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-0b9844ac").then(n)}},{path:"/docs/go-client/workflow-versioning/index.html",redirect:"/docs/go-client/workflow-ver
sioning/"},{path:"/docs/05-go-client/14-workflow-versioning.html",redirect:"/docs/go-client/workflow-versioning/"},{name:"v-35913a62",path:"/docs/go-client/distributed-cron/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-35913a62").then(n)}},{path:"/docs/go-client/distributed-cron/index.html",redirect:"/docs/go-client/distributed-cron/"},{path:"/docs/05-go-client/16-distributed-cron.html",redirect:"/docs/go-client/distributed-cron/"},{name:"v-edf882bc",path:"/docs/go-client/sessions/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-edf882bc").then(n)}},{path:"/docs/go-client/sessions/index.html",redirect:"/docs/go-client/sessions/"},{path:"/docs/05-go-client/15-sessions.html",redirect:"/docs/go-client/sessions/"},{name:"v-9d2716dc",path:"/docs/go-client/tracing/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-9d2716dc").then(n)}},{path:"/docs/go-client/tracing/index.html",redirect:"/docs/go-client/tracing/"},{path:"/docs/05-go-client/17-tracing.html",redirect:"/docs/go-client/tracing/"},{name:"v-d043b980",path:"/docs/go-client/workflow-replay-shadowing/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-d043b980").then(n)}},{path:"/docs/go-client/workflow-replay-shadowing/index.html",redirect:"/docs/go-client/workflow-replay-shadowing/"},{path:"/docs/05-go-client/18-workflow-replay-shadowing.html",redirect:"/docs/go-client/workflow-replay-shadowing/"},{name:"v-5df8103c",path:"/docs/go-client/workflow-non-deterministic-errors/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-5df8103c").then(n)}},{path:"/docs/go-client/workflow-non-deterministic-errors/index.html",redirect:"/docs/go-client/workflow-non-deterministic-errors/"},{path:"/docs/05-go-client/19-workflow-non-deterministic-error.html",redirect:"/docs/go-client/workflow-non-deterministic-errors/"},{name:"v-6fa6d57b",path:"/docs/cli/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-6fa6d57b").then(n)}},{path:"/docs/cli/index.html",redirect:"/docs/cli/"},{path:"/docs/06-cli/",redirect:"/docs/cli/"},{name:"v-740be4db",path:"/docs/go-client/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-740be4db").then(n)}},{path:"/docs/go-client/index.html",redirect:"/docs/go-client/"},{path:"/docs/05-go-client/",redirect:"/docs/go-client/"},{name:"v-6be5daf6",path:"/docs/operation-guide/setup/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-6be5daf6").then(n)}},{path:"/docs/operation-guide/setup/index.html",redirect:"/docs/operation-guide/setup/"},{path:"/docs/07-operation-guide/01-setup.html",redirect:"/docs/operation-guide/setup/"},{name:"v-c3677d3c",path:"/docs/operation-guide/maintain/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-c3677d3c").then(n)}},{path:"/docs/operation-guide/maintain/index.html",redirect:"/docs/operation-guide/maintain/"},{path:"/docs/07-operation-guide/02-maintain.html",redirect:"/docs/operation-guide/maintain/"},{name:"v-6f38e6b6",path:"/docs/operation-guide/troubleshooting/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-6f38e6b6").then(n)}},{path:"/docs/operation-guide/troubleshooting/index.html",redirect:"/docs/operation-guide/troubleshooting/"},{path:"/docs/07-operation-guide/04-troubleshooting.html",redirect:"/docs/operation-guide/troubleshooting/"},{name:"v-3569388c",path:"/docs/operation-guide/migration/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-3569388c").then(n)}},{path:"/docs/operation-guide/migration/index.html",redirect:"/docs/operation-guide/migration/"},{path:"/docs/07-operation-guide/05-migration.html",redirect:"/docs/operation-guide/migration/"},{name:"v-1a836dbc
",path:"/docs/operation-guide/monitor/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-1a836dbc").then(n)}},{path:"/docs/operation-guide/monitor/index.html",redirect:"/docs/operation-guide/monitor/"},{path:"/docs/07-operation-guide/03-monitoring.html",redirect:"/docs/operation-guide/monitor/"},{name:"v-fc381aca",path:"/docs/operation-guide/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-fc381aca").then(n)}},{path:"/docs/operation-guide/index.html",redirect:"/docs/operation-guide/"},{path:"/docs/07-operation-guide/",redirect:"/docs/operation-guide/"},{name:"v-3f3e4754",path:"/docs/workflow-troubleshooting/timeouts/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-3f3e4754").then(n)}},{path:"/docs/workflow-troubleshooting/timeouts/index.html",redirect:"/docs/workflow-troubleshooting/timeouts/"},{path:"/docs/08-workflow-troubleshooting/01-timeouts.html",redirect:"/docs/workflow-troubleshooting/timeouts/"},{name:"v-46aa6bb2",path:"/docs/workflow-troubleshooting/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-46aa6bb2").then(n)}},{path:"/docs/workflow-troubleshooting/index.html",redirect:"/docs/workflow-troubleshooting/"},{path:"/docs/08-workflow-troubleshooting/",redirect:"/docs/workflow-troubleshooting/"},{name:"v-e574b140",path:"/docs/about/license/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-e574b140").then(n)}},{path:"/docs/about/license/index.html",redirect:"/docs/about/license/"},{path:"/docs/09-about/01-license.html",redirect:"/docs/about/license/"},{name:"v-00de750a",path:"/docs/about/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-00de750a").then(n)}},{path:"/docs/about/index.html",redirect:"/docs/about/"},{path:"/docs/09-about/",redirect:"/docs/about/"},{name:"v-7256933b",path:"/",component:Ds,beforeEnter:(e,t,n)=>{us("Layout","v-7256933b").then(n)}},{path:"/index.html",redirect:"/"},{path:"*",component:Ds}],js={title:"Cadence",description:"",base:"/",headTags:[["link",{rel:"icon",href:"/img/favicon.ico"}]],pages:[{title:"Glossary",frontmatter:{layout:"default",title:"Glossary",terms:{activity:"A business-level function that implements your application logic such as calling a service or transcoding a media file. An activity usually implements a single well-defined action; it can be short or long running. An activity can be implemented as a synchronous method or fully asynchronously involving multiple processes. An activity can be retried indefinitely according to the provided exponential retry policy. If for any reason an activity is not completed within the specified timeout, an error is reported to the workflow and the workflow decides how to handle it. There is no limit on potential activity duration.","activity task":"A task that contains an activity invocation information that is delivered to an activity worker through and an activity task list. An activity worker upon receiving activity task executes a correponding activity","activity task list":"Task list that is used to deliver activity task to activity worker","activity worker":"An object that is executed in the client application and receives activity task from an activity task list it is subscribed to. Once task is received it invokes a correspondent activity.",archival:"Archival is a feature that automatically moves event history from persistence to a blobstore after the workflow retention period. The purpose of archival is to be able to keep histories as long as needed while not overwhelming the persistence store. 
There are two reasons you may want to keep the histories after the retention period has passed: 1. Compliance: For legal reasons, histories may need to be stored for a long period of time. 2. Debugging: Old histories can still be accessed for debugging.",CLI:"Cadence command-line interface.","client stub":"A client-side proxy used to make remote invocations to an entity that it represents. For example, to start a workflow, a stub object that represents this workflow is created through a special API. Then this stub is used to start, query, or signal the corresponding workflow.\nThe Go client doesn't use this.",decision:"Any action taken by the workflow durable function is called a decision. For example: scheduling an activity, canceling a child workflow, or starting a timer. A decision task contains an optional list of decisions. Every decision is recorded in the event history as an event. See also [1] for more explanation","decision task":"Every time a new external event that might affect a workflow state is recorded, a decision task that contains it is added to a decision task list and then picked up by a workflow worker. After the new event is handled, the decision task is completed with a list of decision. Note that handling of a decision task is usually very fast and is not related to duration of operations that the workflow invokes. See also [1] for more explanation","decision task list":"Task list that is used to deliver decision task to workflow worker. From user's point of view, it can be viewed as a worker pool. It defines a pool of worker executing workflow or activity tasks.",domain:"Cadence is backed by a multitenant service. The unit of isolation is called a domain. Each domain acts as a namespace for task list names as well as workflow IDs. For example, when a workflow is started, it is started in a specific domain. Cadence guarantees a unique workflow ID within a domain, and supports running workflow executions to use the same workflow ID if they are in different domains. Various configuration options like retention period or archival destination are configured per domain as well through a special CRUD API or through the Cadence CLI. In the multi-cluster deployment, domain is a unit of fail-over. Each domain can only be active on a single Cadence cluster at a time. However, different domains can be active in different clusters and can fail-over independently.",event:"An indivisible operation performed by your application. For example, activity_task_started, task_failed, or timer_canceled. Events are recorded in the event history.","event history":"An append log of events for your application. History is durably persisted by the Cadence service, enabling seamless recovery of your application state from crashes or failures. It also serves as an audit log for debugging.","local activity":"A local activity is an activity that is invoked directly in the same process by a workflow code. It consumes much less resources than a normal activity, but imposes a lot of limitations like low duration and lack of rate limiting.",query:"A synchronous (from the caller's point of view) operation that is used to report a workflow state. Note that a query is inherently read only and cannot affect a workflow state.","run ID":"A UUID that a Cadence service assigns to each workflow run. If allowed by a configured policy, you might be able to re-execute a workflow, after it has closed or failed, with the same workflow id. Each such re-execution is called a run. 
The run ID is used to uniquely identify a run even if it shares a workflow ID with others.",signal:"An external asynchronous request to a workflow. It can be used to deliver notifications or updates to a running workflow at any point in its existence.",task:"The context needed to execute a specific activity or workflow state transition. There are two types of tasks: an activity task and a decision task (aka workflow task). Note that a single activity execution corresponds to a single activity task, while a workflow execution employs multiple decision tasks.","task list":"Common name for an activity task list or a decision task list","task token":"A unique correlation ID for a Cadence activity. Activity completion calls take either a task token or DomainName, WorkflowID, ActivityID arguments.",worker:"Also known as a worker service. A service that hosts the workflow and activity implementations. The worker polls the Cadence service for tasks, performs those tasks, and communicates task execution results back to the Cadence service. Worker services are developed, deployed, and operated by Cadence customers.",workflow:"A fault-oblivious stateful function that orchestrates activities. A workflow has full control over which activities are executed, and in which order. A workflow must not affect the external world directly, only through activities. What makes workflow code a workflow is that its state is preserved by Cadence. Therefore any failure of a worker process that hosts the workflow code does not affect the workflow execution. The workflow continues as if these failures did not happen. At the same time, activities can fail at any moment for any reason. Because workflow code is fully fault-oblivious, it is guaranteed to get notifications about activity failures or timeouts and act accordingly. There is no limit on potential workflow duration.","workflow execution":"An instance of a workflow. The instance can be in the process of executing or it could have already completed execution.","workflow ID":"A unique identifier for a workflow execution. Cadence guarantees the uniqueness of an ID within a domain. An attempt to start a workflow with a duplicate ID results in an already started error.","workflow task":"Synonym of the decision task.","workflow worker":"An object that is executed in the client application and receives decision tasks from a decision task list it is subscribed to. Once a task is received, it is handled by the corresponding workflow."},readingShow:"top"},regularPath:"/GLOSSARY.html",relativePath:"GLOSSARY.md",key:"v-9cd9f09c",path:"/GLOSSARY.html",codeSwitcherOptions:{},headersStr:null,content:"# Glossary\n\n1 What exactly is a Cadence decision task?",normalizedContent:"# glossary\n\n1 what exactly is a cadence decision task?",charsets:{}},{title:"Server Installation",frontmatter:{layout:"default",title:"Server Installation",permalink:"/docs/get-started/installation",readingShow:"top"},regularPath:"/docs/01-get-started/01-server-installation.html",relativePath:"docs/01-get-started/01-server-installation.md",key:"v-4bb753c4",path:"/docs/get-started/installation/",headers:[{level:2,title:"0. Prerequisite - Install docker",slug:"_0-prerequisite-install-docker",normalizedTitle:"0. prerequisite - install docker",charIndex:322},{level:2,title:"1. Run Cadence Server Using Docker Compose",slug:"_1-run-cadence-server-using-docker-compose",normalizedTitle:"1. run cadence server using docker compose",charIndex:461},{level:2,title:"2. 
Register a Domain Using the CLI",slug:"_2-register-a-domain-using-the-cli",normalizedTitle:"2. register a domain using the cli",charIndex:849},{level:2,title:"What's Next",slug:"what-s-next",normalizedTitle:"what's next",charIndex:1771},{level:2,title:"Troubleshooting",slug:"troubleshooting",normalizedTitle:"troubleshooting",charIndex:2055}],codeSwitcherOptions:{},headersStr:"0. Prerequisite - Install docker 1. Run Cadence Server Using Docker Compose 2. Register a Domain Using the CLI What's Next Troubleshooting",content:"# Install Cadence Service Locally\n\nTo get started with Cadence, you need to set up three components:\n\n * A Cadence server, along with the dependencies it relies on, such as Cassandra, Elasticsearch, etc.\n * A Cadence domain for your workflow application\n * A Cadence worker service hosting your workflows\n\n\n# 0. Prerequisite - Install docker\n\nFollow the Docker installation instructions found here: https://docs.docker.com/engine/installation/\n\n\n# 1. Run Cadence Server Using Docker Compose\n\nDownload the Cadence docker-compose file:\n\n\ncurl -O https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose.yml && curl -O https://raw.githubusercontent.com/uber/cadence/master/docker/prometheus/prometheus.yml\n\n\nThen start the Cadence service by running:\n\ndocker-compose up\n\n\nPlease keep this process running in the background.\n\n\n# 2. Register a Domain Using the CLI\n\nIn a new terminal, create a new domain called test-domain (or choose whatever name you like) by running:\n\ndocker run --network=host --rm ubercadence/cli:master --do test-domain domain register -rd 1\n\n\nCheck that the domain is indeed registered:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain domain describe\nName: test-domain\nDescription:\nOwnerEmail:\nDomainData: map[]\nStatus: REGISTERED\nRetentionInDays: 1\nEmitMetrics: false\nActiveClusterName: active\nClusters: active\nArchivalStatus: DISABLED\nBad binaries to reset:\n+-----------------+----------+------------+--------+\n| BINARY CHECKSUM | OPERATOR | START TIME | REASON |\n+-----------------+----------+------------+--------+\n+-----------------+----------+------------+--------+\n>\n\n\nPlease remember the domains you created, because they will be used in your worker implementation and in Cadence CLI commands.\n\n\n# What's Next\n\nSo far you've successfully finished two prerequisites for your Cadence application. 
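For reference, domain registration can also be scripted from Go code rather than the CLI, using the RegisterDomain call on the generated service client. This is a minimal, hedged sketch, not part of the original guide: it assumes a workflowserviceclient.Interface built the same way as buildCadenceClient in the Golang hello world section below, and the helper name registerDomain is purely illustrative.

```go
package main

import (
	"context"
	"time"

	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/.gen/go/shared"
)

// registerDomain registers test-domain with a 1-day retention period,
// mirroring: cadence --do test-domain domain register -rd 1
func registerDomain(service workflowserviceclient.Interface) error {
	name := "test-domain"
	retention := int32(1) // retention period in days (-rd 1)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	err := service.RegisterDomain(ctx, &shared.RegisterDomainRequest{
		Name:                                   &name,
		WorkflowExecutionRetentionPeriodInDays: &retention,
	})
	if _, ok := err.(*shared.DomainAlreadyExistsError); ok {
		// Registering a domain that already exists is fine for this tutorial.
		return nil
	}
	return err
}
```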
The next steps are to implement a simple worker service that hosts your workflows and to run your very first hello world Cadence workflow.\n\nGo to Java HelloWorld or Golang HelloWorld.\n\n\n# Troubleshooting\n\nThere can be various reasons why docker-compose up does not succeed:\n\n * If the image is too old, update it with docker pull ubercadence/server:master-auto-setup and retry\n * If the local docker env is messed up, run docker system prune --all and retry (see details about it)\n * Check the logs of the different containers:\n * If Cassandra is not able to come up: docker logs -f docker_cassandra_1\n * If Cadence is not able to come up: docker logs -f docker_cadence_1\n * If Cadence Web is not able to come up: docker logs -f docker_cadence-web_1\n\nIf the above still does not work, open an issue in the Server (main) repo.",normalizedContent:"# install cadence service locally\n\nto get started with cadence, you need to set up three components:\n\n * a cadence server, along with the dependencies it relies on, such as cassandra, elasticsearch, etc.\n * a cadence domain for your workflow application\n * a cadence worker service hosting your workflows\n\n\n# 0. prerequisite - install docker\n\nfollow the docker installation instructions found here: https://docs.docker.com/engine/installation/\n\n\n# 1. run cadence server using docker compose\n\ndownload the cadence docker-compose file:\n\n\ncurl -o https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose.yml && curl -o https://raw.githubusercontent.com/uber/cadence/master/docker/prometheus/prometheus.yml\n\n\nthen start the cadence service by running:\n\ndocker-compose up\n\n\nplease keep this process running in the background.\n\n\n# 2. register a domain using the cli\n\nin a new terminal, create a new domain called test-domain (or choose whatever name you like) by running:\n\ndocker run --network=host --rm ubercadence/cli:master --do test-domain domain register -rd 1\n\n\ncheck that the domain is indeed registered:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain domain describe\nname: test-domain\ndescription:\nowneremail:\ndomaindata: map[]\nstatus: registered\nretentionindays: 1\nemitmetrics: false\nactiveclustername: active\nclusters: active\narchivalstatus: disabled\nbad binaries to reset:\n+-----------------+----------+------------+--------+\n| binary checksum | operator | start time | reason |\n+-----------------+----------+------------+--------+\n+-----------------+----------+------------+--------+\n>\n\n\nplease remember the domains you created, because they will be used in your worker implementation and in cadence cli commands.\n\n\n# what's next\n\nso far you've successfully finished two prerequisites for your cadence application. 
the next steps are to implement a simple worker service that hosts your workflows and to run your very first hello world cadence workflow.\n\ngo to java helloworld or golang helloworld.\n\n\n# troubleshooting\n\nthere can be various reasons why docker-compose up does not succeed:\n\n * if the image is too old, update it with docker pull ubercadence/server:master-auto-setup and retry\n * if the local docker env is messed up, run docker system prune --all and retry (see details about it)\n * check the logs of the different containers:\n * if cassandra is not able to come up: docker logs -f docker_cassandra_1\n * if cadence is not able to come up: docker logs -f docker_cadence_1\n * if cadence web is not able to come up: docker logs -f docker_cadence-web_1\n\nif the above still does not work, open an issue in the server (main) repo.",charsets:{cjk:!0}},{title:"Golang hello world",frontmatter:{layout:"default",title:"Golang hello world",permalink:"/docs/get-started/golang-hello-world",readingShow:"top"},regularPath:"/docs/01-get-started/03-golang-hello-world.html",relativePath:"docs/01-get-started/03-golang-hello-world.md",key:"v-5261e03c",path:"/docs/get-started/golang-hello-world/",headers:[{level:2,title:"Prerequisite",slug:"prerequisite",normalizedTitle:"prerequisite",charIndex:388},{level:2,title:"Step 1. Implement A Cadence Worker Service",slug:"step-1-implement-a-cadence-worker-service",normalizedTitle:"step 1. implement a cadence worker service",charIndex:922},{level:2,title:"Step 2. Write a simple Cadence hello world activity and workflow",slug:"step-2-write-a-simple-cadence-hello-world-activity-and-workflow",normalizedTitle:"step 2. write a simple cadence hello world activity and workflow",charIndex:4615},{level:2,title:"Step 3. Run the workflow with Cadence CLI",slug:"step-3-run-the-workflow-with-cadence-cli",normalizedTitle:"step 3. run the workflow with cadence cli",charIndex:5904},{level:2,title:"(Optional) Step 4. Monitor Cadence workflow with Cadence web UI",slug:"optional-step-4-monitor-cadence-workflow-with-cadence-web-ui",normalizedTitle:"(optional) step 4. monitor cadence workflow with cadence web ui",charIndex:6701},{level:2,title:"What is Next",slug:"what-is-next",normalizedTitle:"what is next",charIndex:7153}],codeSwitcherOptions:{},headersStr:"Prerequisite Step 1. Implement A Cadence Worker Service Step 2. Write a simple Cadence hello world activity and workflow Step 3. Run the workflow with Cadence CLI (Optional) Step 4. Monitor Cadence workflow with Cadence web UI What is Next",content:'# Golang Hello World\n\nThis section provides step-by-step instructions on how to write and run a HelloWorld workflow in Cadence with Golang. You will learn two critical building blocks of Cadence: activities and workflows. First, you will write an activity function that prints a "Hello World!" message in the log. Then, you will write a workflow function that executes this activity.\n\n\n# Prerequisite\n\nTo successfully run this hello world sample, follow this checklist for setting up the Cadence environment:\n\n 1. Your worker is running properly and you have registered the hello world activity and workflow with the worker\n 2. Your Cadence server is running (check your background docker container process)\n 3. You have successfully registered a domain for this workflow\n\nYou must finish parts 2 and 3 by following the first section before proceeding to the next steps. We are using a domain called test-domain for this tutorial project.\n\n\n# Step 1. 
Implement A Cadence Worker Service\n\nCreate a new main.go file in your local directory and paste the basic worker service layout.\n\npackage main\n\nimport (\n "net/http"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n)\n\nvar HostPort = "127.0.0.1:7833"\nvar Domain = "test-domain"\nvar TaskListName = "test-worker"\nvar ClientName = "test-worker"\nvar CadenceService = "cadence-frontend"\n\nfunc main() {\n startWorker(buildLogger(), buildCadenceClient())\n err := http.ListenAndServe(":8080", nil)\n if err != nil {\n panic(err)\n }\n}\n\nfunc buildLogger() *zap.Logger {\n config := zap.NewDevelopmentConfig()\n config.Level.SetLevel(zapcore.InfoLevel)\n\n var err error\n logger, err := config.Build()\n if err != nil {\n panic("Failed to setup logger")\n }\n\n return logger\n}\n\nfunc buildCadenceClient() workflowserviceclient.Interface {\n dispatcher := yarpc.NewDispatcher(yarpc.Config{\n\t\tName: ClientName,\n\t\tOutbounds: yarpc.Outbounds{\n\t\t CadenceService: {Unary: grpc.NewTransport().NewSingleOutbound(HostPort)},\n\t\t},\n\t })\n\t if err := dispatcher.Start(); err != nil {\n\t\tpanic("Failed to start dispatcher")\n\t }\n \n\t clientConfig := dispatcher.ClientConfig(CadenceService)\n \n\t return compatibility.NewThrift2ProtoAdapter(\n\t\tapiv1.NewDomainAPIYARPCClient(clientConfig),\n\t\tapiv1.NewWorkflowAPIYARPCClient(clientConfig),\n\t\tapiv1.NewWorkerAPIYARPCClient(clientConfig),\n\t\tapiv1.NewVisibilityAPIYARPCClient(clientConfig),\n\t )\n}\n\nfunc startWorker(logger *zap.Logger, service workflowserviceclient.Interface) {\n // TaskListName identifies set of client workflows, activities, and workers.\n // It could be your group or client or application name.\n workerOptions := worker.Options{\n Logger: logger,\n MetricsScope: tally.NewTestScope(TaskListName, map[string]string{}),\n }\n\n worker := worker.New(\n service,\n Domain,\n TaskListName,\n workerOptions)\n err := worker.Start()\n if err != nil {\n panic("Failed to start worker")\n }\n\n logger.Info("Started Worker.", zap.String("worker", TaskListName))\n}\n\n\nIn this worker service, we start an HTTP server and create a new Cadence client that runs continuously in the background. Then start the server on your local machine; you may see logs like:\n\n2023-07-03T11:46:46.266-0700 INFO internal/internal_worker.go:826 Worker has no workflows registered, so workflow worker will not be started. {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03T11:46:46.267-0700 INFO internal/internal_worker.go:834 Started Workflow Worker {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03T11:46:46.267-0700 INFO internal/internal_worker.go:838 Worker has no activities registered, so activity worker will not be started. {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03T11:46:46.267-0700 INFO cadence-worker/main.go:75 Started Worker. {"worker": "test-worker"}\n\n\nYou see these messages because there are no activities or workflows registered with the worker yet. 
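A note on the design above: the http.ListenAndServe call is not required by Cadence. The worker polls its task list from background goroutines, so the HTTP server mainly keeps the process alive (and leaves room for health endpoints later). A minimal alternative main, assuming the same buildLogger and buildCadenceClient helpers, could simply block:

```go
func main() {
	startWorker(buildLogger(), buildCadenceClient())
	// The worker polls the Cadence service from background goroutines,
	// so all main has to do is keep the process from exiting.
	select {}
}
```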
Let\'s proceed to the next steps and write a hello world activity and workflow.\n\n\n# Step 2. Write a simple Cadence hello world activity and workflow\n\nLet\'s write a hello world activity, which takes a single input called name and greets us when the workflow finishes.\n\nfunc helloWorldWorkflow(ctx workflow.Context, name string) error {\n\tao := workflow.ActivityOptions{\n\t\tScheduleToStartTimeout: time.Minute,\n\t\tStartToCloseTimeout: time.Minute,\n\t\tHeartbeatTimeout: time.Second * 20,\n\t}\n\tctx = workflow.WithActivityOptions(ctx, ao)\n\n\tlogger := workflow.GetLogger(ctx)\n\tlogger.Info("helloworld workflow started")\n\tvar helloworldResult string\n\terr := workflow.ExecuteActivity(ctx, helloWorldActivity, name).Get(ctx, &helloworldResult)\n\tif err != nil {\n\t\tlogger.Error("Activity failed.", zap.Error(err))\n\t\treturn err\n\t}\n\n\tlogger.Info("Workflow completed.", zap.String("Result", helloworldResult))\n\n\treturn nil\n}\n\nfunc helloWorldActivity(ctx context.Context, name string) (string, error) {\n\tlogger := activity.GetLogger(ctx)\n\tlogger.Info("helloworld activity started")\n\treturn "Hello " + name + "!", nil\n}\n\n\nDon\'t forget to register the workflow and activity with the worker.\n\nfunc init() {\n workflow.Register(helloWorldWorkflow)\n activity.Register(helloWorldActivity)\n}\n\n\nImport the context package if it was not added automatically.\n\nimport (\n "context"\n)\n\n\n\n# Step 3. Run the workflow with Cadence CLI\n\nRestart your worker and run the following command to interact with your workflow.\n\ncadence --domain test-domain workflow start --et 60 --tl test-worker --workflow_type main.helloWorldWorkflow --input \'"World"\'\n\n\nYou should see logs in your worker terminal like:\n\n2023-07-16T11:30:02.717-0700 INFO cadence-worker/code.go:104 Workflow completed. {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "11294@uber-C02F18EQMD6R@test-worker@5829c68e-ace0-472f-b5f3-6ccfc7903dd5", "WorkflowType": "main.helloWorldWorkflow", "WorkflowID": "8acbda3c-d240-4f27-8388-97c866b8bfb5", "RunID": "4b91341f-056f-4f0b-ab64-83bcc3a53e5a", "Result": "Hello World!"}\n\n\nCongratulations! You just launched your very first Cadence workflow from scratch.\n\n\n# (Optional) Step 4. Monitor Cadence workflow with Cadence web UI\n\nWhen you start the Cadence backend server, it also automatically starts a front-end portal for your workflows. Open your browser and go to\n\nhttp://localhost:8088\n\nYou should see a dashboard.\n\nType the domain you used for the tutorial (in this case, test-domain) and hit enter. Then you can see a complete history of the workflows you have triggered in this domain.\n\n\n# What is Next\n\nNow you have completed the tutorials. You can continue to explore the key concepts in Cadence, and also how to use them with the Go Client.\n\nFor complete, ready-to-build samples covering all the key Cadence concepts, go to Cadence-Samples for more examples.\n\nYou can also review Cadence-Client and go-docs for more documentation.',normalizedContent:'# golang hello world\n\nthis section provides step-by-step instructions on how to write and run a helloworld workflow in cadence with golang. you will learn two critical building blocks of cadence: activities and workflows. first, you will write an activity function that prints a "hello world!" message in the log. 
then, you will write a workflow function that executes this activity.\n\n\n# prerequisite\n\nto successfully run this hello world sample, follow this checklist of setting up cadence environment\n\n 1. your worker is running properly and you have registered the hello world activity and workflow to the worker\n 2. your cadence server is running (check your background docker container process)\n 3. you have successfully registered a domain for this workflow\n\nyou must finish part 2 and 3 by following the first section to proceed the next steps. we are using domain called test-domain for this tutorial project.\n\n\n# step 1. implement a cadence worker service\n\ncreate a new main.go file in your local directory and paste the basic worker service layout.\n\npackage main\n\nimport (\n "net/http"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n)\n\nvar hostport = "127.0.0.1:7833"\nvar domain = "test-domain"\nvar tasklistname = "test-worker"\nvar clientname = "test-worker"\nvar cadenceservice = "cadence-frontend"\n\nfunc main() {\n startworker(buildlogger(), buildcadenceclient())\n err := http.listenandserve(":8080", nil)\n if err != nil {\n panic(err)\n }\n}\n\nfunc buildlogger() *zap.logger {\n config := zap.newdevelopmentconfig()\n config.level.setlevel(zapcore.infolevel)\n\n var err error\n logger, err := config.build()\n if err != nil {\n panic("failed to setup logger")\n }\n\n return logger\n}\n\nfunc buildcadenceclient() workflowserviceclient.interface {\n dispatcher := yarpc.newdispatcher(yarpc.config{\n\t\tname: clientname,\n\t\toutbounds: yarpc.outbounds{\n\t\t cadenceservice: {unary: grpc.newtransport().newsingleoutbound(hostport)},\n\t\t},\n\t })\n\t if err := dispatcher.start(); err != nil {\n\t\tpanic("failed to start dispatcher")\n\t }\n \n\t clientconfig := dispatcher.clientconfig(cadenceservice)\n \n\t return compatibility.newthrift2protoadapter(\n\t\tapiv1.newdomainapiyarpcclient(clientconfig),\n\t\tapiv1.newworkflowapiyarpcclient(clientconfig),\n\t\tapiv1.newworkerapiyarpcclient(clientconfig),\n\t\tapiv1.newvisibilityapiyarpcclient(clientconfig),\n\t )\n}\n\nfunc startworker(logger *zap.logger, service workflowserviceclient.interface) {\n // tasklistname identifies set of client workflows, activities, and workers.\n // it could be your group or client or application name.\n workeroptions := worker.options{\n logger: logger,\n metricsscope: tally.newtestscope(tasklistname, map[string]string{}),\n }\n\n worker := worker.new(\n service,\n domain,\n tasklistname,\n workeroptions)\n err := worker.start()\n if err != nil {\n panic("failed to start worker")\n }\n\n logger.info("started worker.", zap.string("worker", tasklistname))\n}\n\n\nin this worker service, we start a http server and create a new cadence client running continuously at the background. then start the server on your local, you may see logs such like\n\n2023-07-03t11:46:46.266-0700 info internal/internal_worker.go:826 worker has no workflows registered, so workflow worker will not be started. 
{"domain": "test-domain", "tasklist": "test-worker", "workerid": "35987@uber-c02f18eqmd6r@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03t11:46:46.267-0700 info internal/internal_worker.go:834 started workflow worker {"domain": "test-domain", "tasklist": "test-worker", "workerid": "35987@uber-c02f18eqmd6r@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03t11:46:46.267-0700 info internal/internal_worker.go:838 worker has no activities registered, so activity worker will not be started. {"domain": "test-domain", "tasklist": "test-worker", "workerid": "35987@uber-c02f18eqmd6r@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03t11:46:46.267-0700 info cadence-worker/main.go:75 started worker. {"worker": "test-worker"}\n\n\nyou may see this because there are no activities and workflows registered to the worker. let\'s proceed to next steps to write a hello world activity and workflow.\n\n\n# step 2. write a simple cadence hello world activity and workflow\n\nlet\'s write a hello world activity, which take a single input called name and greet us after the workflow is finished.\n\nfunc helloworldworkflow(ctx workflow.context, name string) error {\n\tao := workflow.activityoptions{\n\t\tscheduletostarttimeout: time.minute,\n\t\tstarttoclosetimeout: time.minute,\n\t\theartbeattimeout: time.second * 20,\n\t}\n\tctx = workflow.withactivityoptions(ctx, ao)\n\n\tlogger := workflow.getlogger(ctx)\n\tlogger.info("helloworld workflow started")\n\tvar helloworldresult string\n\terr := workflow.executeactivity(ctx, helloworldactivity, name).get(ctx, &helloworldresult)\n\tif err != nil {\n\t\tlogger.error("activity failed.", zap.error(err))\n\t\treturn err\n\t}\n\n\tlogger.info("workflow completed.", zap.string("result", helloworldresult))\n\n\treturn nil\n}\n\nfunc helloworldactivity(ctx context.context, name string) (string, error) {\n\tlogger := activity.getlogger(ctx)\n\tlogger.info("helloworld activity started")\n\treturn "hello " + name + "!", nil\n}\n\n\ndon\'t forget to register the workflow and activity to the worker.\n\nfunc init() {\n workflow.register(helloworldworkflow)\n activity.register(helloworldactivity)\n}\n\n\nimport the context module if it was not automatically added.\n\nimport (\n "context"\n)\n\n\n\n# step 3. run the workflow with cadence cli\n\nrestart your worker and run the following command to interact with your workflow.\n\ncadence --domain test-domain workflow start --et 60 --tl test-worker --workflow_type main.helloworldworkflow --input \'"world"\'\n\n\nyou should see logs in your worker terminal like\n\n2023-07-16t11:30:02.717-0700 info cadence-worker/code.go:104 workflow completed. {"domain": "test-domain", "tasklist": "test-worker", "workerid": "11294@uber-c02f18eqmd6r@test-worker@5829c68e-ace0-472f-b5f3-6ccfc7903dd5", "workflowtype": "main.helloworldworkflow", "workflowid": "8acbda3c-d240-4f27-8388-97c866b8bfb5", "runid": "4b91341f-056f-4f0b-ab64-83bcc3a53e5a", "result": "hello world!"}\n\n\ncongratulations! you just launched your very first cadence workflow from scratch\n\n\n# (optional) step 4. monitor cadence workflow with cadence web ui\n\nwhen you start the cadence backend server, it also automatically starts a front end portal for your workflow. open you browser and go to\n\nhttp://localhost:8088\n\nyou may see a dashboard below\n\ntype the domain you used for the tutorial, in this case, we type test-domain and hit enter. 
then you can see a complete history of the workflows you have triggered in this domain.\n\n\n# what is next\n\nnow you have completed the tutorials. you can continue to explore the key concepts in cadence, and also how to use them with the go client.\n\nfor complete, ready-to-build samples covering all the key cadence concepts, go to cadence-samples for more examples.\n\nyou can also review cadence-client and go-docs for more documentation.',charsets:{}},{title:"Java hello world",frontmatter:{layout:"default",title:"Java hello world",permalink:"/docs/get-started/java-hello-world",readingShow:"top"},regularPath:"/docs/01-get-started/02-java-hello-world.html",relativePath:"docs/01-get-started/02-java-hello-world.md",key:"v-40447742",path:"/docs/get-started/java-hello-world/",headers:[{level:2,title:"Include Cadence Java Client Dependency",slug:"include-cadence-java-client-dependency",normalizedTitle:"include cadence java client dependency",charIndex:295},{level:2,title:"Implement Hello World Workflow",slug:"implement-hello-world-workflow",normalizedTitle:"implement hello world workflow",charIndex:1932},{level:2,title:"Execute Hello World Workflow using the CLI",slug:"execute-hello-world-workflow-using-the-cli",normalizedTitle:"execute hello world workflow using the cli",charIndex:3650},{level:2,title:"List Workflows and Workflow History",slug:"list-workflows-and-workflow-history",normalizedTitle:"list workflows and workflow history",charIndex:7725},{level:2,title:"What is Next",slug:"what-is-next",normalizedTitle:"what is next",charIndex:10214}],codeSwitcherOptions:{},headersStr:"Include Cadence Java Client Dependency Implement Hello World Workflow Execute Hello World Workflow using the CLI List Workflows and Workflow History What is Next",content:'# Java Hello World\n\nThis section provides step-by-step instructions on how to write and run a HelloWorld workflow with Java.\n\nFor complete, ready-to-build samples covering all the key Cadence concepts, go to Cadence-Java-Samples.\n\nYou can also review Java-Client and java-docs for more documentation.\n\n\n# Include Cadence Java Client Dependency\n\nGo to the Maven Repository Uber Cadence Java Client Page and find the latest version of the library. Include it as a dependency in your Java project. 
For example, if you are using Gradle, the dependency looks like:\n\ncompile group: \'com.uber.cadence\', name: \'cadence-client\', version: \'\'\n\n\nAlso add the following dependencies that cadence-client relies on:\n\ncompile group: \'commons-configuration\', name: \'commons-configuration\', version: \'1.9\'\ncompile group: \'ch.qos.logback\', name: \'logback-classic\', version: \'1.2.3\'\n\n\nMake sure that the following code compiles:\n\nimport com.uber.cadence.workflow.Workflow;\nimport com.uber.cadence.workflow.WorkflowMethod;\nimport org.slf4j.Logger;\n\npublic class GettingStarted {\n\n private static Logger logger = Workflow.getLogger(GettingStarted.class);\n\n public interface HelloWorld {\n @WorkflowMethod\n void sayHello(String name);\n }\n\n}\n\n\nIf you are having problems setting up the build files, use the Cadence Java Samples GitHub repository as a reference.\n\nAlso add the following logback config file somewhere in your classpath:\n\n\x3cconfiguration>\n \x3cappender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">\n \x3c!-- encoders are assigned the type\n ch.qos.logback.classic.encoder.PatternLayoutEncoder by default --\x3e\n \x3cencoder>\n \x3cpattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n\x3c/pattern>\n \x3c/encoder>\n \x3c/appender>\n \x3croot level="info">\n \x3cappender-ref ref="STDOUT" />\n \x3c/root>\n\x3c/configuration>\n\n\n# Implement Hello World Workflow\n\nLet\'s add HelloWorldImpl with the sayHello method that just logs the "Hello ..." message and returns.\n\nimport com.uber.cadence.worker.Worker;\nimport com.uber.cadence.workflow.Workflow;\nimport com.uber.cadence.workflow.WorkflowMethod;\nimport org.slf4j.Logger;\n\npublic class GettingStarted {\n\n private static Logger logger = Workflow.getLogger(GettingStarted.class);\n\n public interface HelloWorld {\n @WorkflowMethod\n void sayHello(String name);\n }\n\n public static class HelloWorldImpl implements HelloWorld {\n\n @Override\n public void sayHello(String name) {\n logger.info("Hello " + name + "!");\n }\n }\n}\n\n\nTo link the implementation to the Cadence framework, it should be registered with a worker that connects to a Cadence service. By default the worker connects to the locally running Cadence service.\n\npublic static void main(String[] args) {\n WorkflowClient workflowClient =\n WorkflowClient.newInstance(\n new WorkflowServiceTChannel(ClientOptions.defaultInstance()),\n WorkflowClientOptions.newBuilder().setDomain(DOMAIN).build());\n // Get worker to poll the task list.\n WorkerFactory factory = WorkerFactory.newInstance(workflowClient);\n Worker worker = factory.newWorker(TASK_LIST);\n worker.registerWorkflowImplementationTypes(HelloWorldImpl.class);\n factory.start();\n}\n\n\nThe code is slightly different if you are using a client version prior to 3.0.0:\n\npublic static void main(String[] args) {\n Worker.Factory factory = new Worker.Factory("test-domain");\n Worker worker = factory.newWorker("HelloWorldTaskList");\n worker.registerWorkflowImplementationTypes(HelloWorldImpl.class);\n factory.start();\n}\n\n\n\n# Execute Hello World Workflow using the CLI\n\nNow run the program. 
Following is an example log:\n\n13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0\n13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'}, identity=45937@maxim-C02XD0AAJGH6}\n13:35:02.673 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'null\'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}\n\n\nNo Hello is printed. This is expected, because a worker is just a code host; the workflow has to be started to execute. Let\'s use the Cadence CLI to start the workflow:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \\"World\\"\nStarted Workflow Id: bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7, run Id: e7c40431-8e23-485b-9649-e8f161219efe\n\n\nThe output of the program should change to:\n\n13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0\n13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'}, identity=45937@maxim-C02XD0AAJGH6}\n13:35:02.673 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'null\'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}\n13:40:28.308 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello World!\n\n\nLet\'s start another workflow:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \\"Cadence\\"\nStarted Workflow Id: d2083532-9c68-49ab-90e1-d960175377a7, run Id: 331bfa04-834b-45a7-861e-bcb9f6ddae3e\n\n\nAnd the output changes to:\n\n13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0\n13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'}, identity=45937@maxim-C02XD0AAJGH6}\n13:35:02.673 [main] INFO 
c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'null\'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}\n13:40:28.308 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello World!\n13:42:34.994 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello Cadence!\n\n\n\n# List Workflows and Workflow History\n\nLet\'s list our workflows in the domain:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow list\n WORKFLOW TYPE | WORKFLOW ID | RUN ID | START TIME | EXECUTION TIME | END TIME\n HelloWorld::sayHello | d2083532-9c68-49ab-90e1-d960175377a7 | 331bfa04-834b-45a7-861e-bcb9f6ddae3e | 20:42:34 | 20:42:34 | 20:42:35\n HelloWorld::sayHello | bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7 | e7c40431-8e23-485b-9649-e8f161219efe | 20:40:28 | 20:40:28 | 20:40:29\n\n\nNow let\'s look at the history:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow showid 1965109f-607f-4b14-a5f2-24399a7b8fa7\n 1 WorkflowExecutionStarted {WorkflowType:{Name:HelloWorld::sayHello},\n TaskList:{Name:HelloWorldTaskList},\n Input:["World"],\n ExecutionStartToCloseTimeoutSeconds:3600,\n TaskStartToCloseTimeoutSeconds:10,\n ContinuedFailureDetails:[],\n LastCompletionResult:[],\n Identity:cadence-cli@linuxkit-025000000001,\n Attempt:0,\n FirstDecisionTaskBackoffSeconds:0}\n 2 DecisionTaskScheduled {TaskList:{Name:HelloWorldTaskList},\n StartToCloseTimeoutSeconds:10,\n Attempt:0}\n 3 DecisionTaskStarted {ScheduledEventId:2,\n Identity:45937@maxim-C02XD0AAJGH6,\n RequestId:481a14e5-67a4-436e-9a23-7f7fb7f87ef3}\n 4 DecisionTaskCompleted {ExecutionContext:[],\n ScheduledEventId:2,\n StartedEventId:3,\n Identity:45937@maxim-C02XD0AAJGH6}\n 5 WorkflowExecutionCompleted {Result:[],\n DecisionTaskCompletedEventId:4}\n\n\nEven for such a trivial workflow, the history gives a lot of useful information. For complex workflows this is a really useful tool for production and development troubleshooting. History can be automatically archived to a long-term blob store (for example Amazon S3) upon completion for compliance, analytical, and troubleshooting purposes.\n\n\n# What is Next\n\nNow you have completed the tutorials. You can continue to explore the key concepts in Cadence, and also how to use them with the Java Client.',normalizedContent:'# java hello world\n\nthis section provides step-by-step instructions on how to write and run a helloworld workflow with java.\n\nfor complete, ready-to-build samples covering all the key cadence concepts, go to cadence-java-samples.\n\nyou can also review java-client and java-docs for more documentation.\n\n\n# include cadence java client dependency\n\ngo to the maven repository uber cadence java client page and find the latest version of the library. include it as a dependency in your java project. 
# What is Next

Now you have completed the tutorials. You can continue to explore the key concepts in Cadence, and also how to use them with the Java client.


# Overview

An introduction to the Cadence programming model and value proposition.


# HelloWorld

A step-by-step video tutorial about how to install and run HelloWorld (Java).


# Overview

A large number of use cases span beyond a single request-reply, require tracking of a complex state, respond to asynchronous events, and communicate to external unreliable dependencies. The usual approach to building such applications is a hodgepodge of stateless services, databases, cron jobs, and queuing systems. This negatively impacts developer productivity as most of the code is dedicated to plumbing, obscuring the actual business logic behind a myriad of low-level details. Such systems frequently have availability problems as it is hard to keep all the components healthy.

The Cadence solution is a fault-oblivious stateful programming model that obscures most of the complexities of building scalable distributed applications. In essence, Cadence provides a durable virtual memory that is not linked to a specific process, and preserves the full application state, including function stacks with local variables, across all sorts of host and software failures. This allows you to write code using the full power of a programming language while Cadence takes care of durability, availability, and scalability of the application.

Cadence consists of a programming framework (or client library) and a managed service (or backend). The framework enables developers to author and coordinate workflows and activities in familiar languages (Go and Java are supported officially, and Python and Ruby by the community).

You can also use iWF as a DSL framework on top of Cadence.

The Cadence backend service is stateless and relies on a persistent store. Currently, Cassandra and MySQL/Postgres storages are supported. An adapter to any other database that provides multi-row single shard transactions can be added. There are different service deployment models. At Uber, our team operates multitenant clusters that are shared by hundreds of applications. See service topology to understand the overall architecture. The GitHub repo for the Cadence server is uber/cadence. The docker image for the Cadence server is available on Docker Hub at ubercadence/server.


# What's Next

Let's try some sample workflows. To start with, go to server installation to install Cadence locally, and run a HelloWorld sample with Java or Golang.

If you have any trouble with the instructions, you can watch the video tutorials, reach out to us on the Slack channel, raise a question on StackOverflow, or open a GitHub issue.


# Periodic execution (aka Distributed Cron)

Periodic execution, frequently referred to as distributed cron, is when you execute business logic periodically. The advantage of Cadence for these scenarios is that it guarantees execution, sophisticated error handling, retry policies, and visibility into execution history.

Another important dimension is scale. Some use cases require periodic execution for a large number of entities. At Uber, there are applications that create periodic workflows per customer. Imagine 100+ million parallel cron jobs that don't require a separate batch processing framework.

Periodic execution is often part of other use cases. For example, once-a-month report generation is a periodic service orchestration. Or an event-driven workflow that accumulates loyalty points for a customer and applies those points once a month.

There are many real-world examples of Cadence periodic executions, such as the following:

 * An Uber backend service that recalculates various statistics for each hex in each city once a minute.
 * Monthly Uber for Business report generation.
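To make the pattern concrete, here is a minimal sketch of a periodic workflow in the Java client. The `ReportActivities` interface and all names are hypothetical; a production version of an unbounded schedule would also use continue-as-new to keep the history bounded, as noted in the comment:

```java
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

import java.time.Duration;

public class PeriodicReport {

    // Hypothetical activity interface; its implementation runs on a worker.
    public interface ReportActivities {
        void generateReport(String customerId);
    }

    public interface PeriodicReportWorkflow {
        @WorkflowMethod
        void run(String customerId, int periods);
    }

    public static class PeriodicReportWorkflowImpl implements PeriodicReportWorkflow {

        private final ReportActivities activities =
            Workflow.newActivityStub(ReportActivities.class);

        @Override
        public void run(String customerId, int periods) {
            for (int i = 0; i < periods; i++) {
                activities.generateReport(customerId);
                // A durable timer: it survives worker restarts and outages.
                Workflow.sleep(Duration.ofDays(30));
            }
            // For an unbounded schedule, call Workflow.continueAsNew(...) here
            // instead of looping forever, to keep the event history bounded.
        }
    }
}
```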
# Microservice Orchestration and Saga

It is common that some business processes are implemented as multiple microservice calls, and the implementation must guarantee that all of the calls eventually succeed even in the presence of prolonged downstream service failures. In some cases, instead of trying to complete the process by retrying for a long time, compensation rollback logic should be executed. The Saga pattern is one way to standardize on compensation APIs.

Cadence is a perfect fit for such scenarios. It guarantees that workflow code eventually completes, has built-in support for unlimited exponential retries, and simplifies coding of the compensation logic. It also gives full visibility into the state of each workflow, in contrast to an orchestration based on queues where getting the current status of each individual request is practically impossible.

Following are some real-world examples of Cadence-based service orchestration scenarios:

 * Using Cadence workflows to spin up Kubernetes (Banzai Cloud Fork)
 * Improving the User Experience with Uber's Customer Obsession Ticket Routing Workflow and Orchestration Engine
 * Enabling Faster Financial Partnership Integrations Using Cadence
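In workflow code, saga-style compensation often takes the shape of a try/catch that undoes the steps that already succeeded. A minimal sketch with hypothetical booking activities:

```java
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

public class TripBooking {

    // Hypothetical activities; each action has a matching compensation.
    public interface BookingActivities {
        String reserveCar(String requestId);
        void cancelCar(String reservationId);
        String reserveHotel(String requestId);
        void cancelHotel(String reservationId);
    }

    public interface TripWorkflow {
        @WorkflowMethod
        void bookTrip(String requestId);
    }

    public static class TripWorkflowImpl implements TripWorkflow {

        private final BookingActivities activities =
            Workflow.newActivityStub(BookingActivities.class);

        @Override
        public void bookTrip(String requestId) {
            String car = activities.reserveCar(requestId);
            try {
                activities.reserveHotel(requestId);
            } catch (RuntimeException e) {
                // Compensation: undo the step that already succeeded.
                activities.cancelCar(car);
                throw e;
            }
        }
    }
}
```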
# Polling

Polling is executing a periodic action checking for a state change. Examples are pinging a host, calling a REST API, or listing an Amazon S3 bucket for newly uploaded files.

Cadence support for long-running activities and unlimited activity retries makes it a good fit.

Some real-world use cases:

 * Network, host and service monitoring
 * Processing files uploaded to FTP or S3
 * Cadence Polling Cookbook by Instaclustr: polling an external API for a specific resource to become available.
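One idiomatic way to poll in the Java client is to let an activity throw until the resource exists and drive the polling interval with a retry policy. A minimal sketch, with a hypothetical `checkResource` activity; exact builder names may vary slightly by client version:

```java
import com.uber.cadence.activity.ActivityOptions;
import com.uber.cadence.common.RetryOptions;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

import java.time.Duration;

public class PollingExample {

    // Hypothetical activity: throws if the resource is not available yet.
    public interface PollingActivities {
        void checkResource(String resourceId);
    }

    public interface PollingWorkflow {
        @WorkflowMethod
        void waitForResource(String resourceId);
    }

    public static class PollingWorkflowImpl implements PollingWorkflow {

        // The retry policy implements the polling interval: every failed
        // check is retried by Cadence until it succeeds or the policy expires.
        private final PollingActivities activities =
            Workflow.newActivityStub(
                PollingActivities.class,
                new ActivityOptions.Builder()
                    .setScheduleToCloseTimeout(Duration.ofHours(12))
                    .setRetryOptions(
                        new RetryOptions.Builder()
                            .setInitialInterval(Duration.ofMinutes(1))
                            .setBackoffCoefficient(1.0) // fixed 1-minute interval
                            .setExpiration(Duration.ofHours(12))
                            .build())
                    .build());

        @Override
        public void waitForResource(String resourceId) {
            activities.checkResource(resourceId);
        }
    }
}
```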
# Event driven application

Many applications listen to multiple event sources, update the state of corresponding business entities, and have to execute actions if some state is reached. Cadence is a good fit for many of these. It has direct support for asynchronous events (aka signals), has a simple programming model that obscures a lot of complexity around state persistence, and ensures external action execution through built-in retries.

Real-world examples:

 * Fraud detection, where a workflow reacts to events generated by consumer behavior
 * A customer loyalty program, where the workflow accumulates reward points and applies them when requested
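In the Java client, such external events are typically delivered to a workflow as signals. A minimal sketch of a signal-driven workflow, with hypothetical names:

```java
import com.uber.cadence.workflow.SignalMethod;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

public class LoyaltyExample {

    public interface LoyaltyWorkflow {
        @WorkflowMethod
        void run(String customerId);

        @SignalMethod
        void addPoints(int points);

        @SignalMethod
        void redeem();
    }

    public static class LoyaltyWorkflowImpl implements LoyaltyWorkflow {
        private int points;
        private boolean redeemRequested;

        @Override
        public void run(String customerId) {
            // Block durably until a redeem signal arrives; accumulated state
            // (the points counter) survives worker failures in the meantime.
            Workflow.await(() -> redeemRequested);
            // ... invoke an activity here to apply the accumulated points ...
        }

        @Override
        public void addPoints(int p) {
            points += p;
        }

        @Override
        public void redeem() {
            redeemRequested = true;
        }
    }
}
```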
# Storage scan

It is common to have large data sets partitioned across a large number of hosts or databases, or to have billions of files in an Amazon S3 bucket. Cadence is an ideal solution for implementing the full scan of such data in a scalable and resilient way. The standard pattern is to run an activity (or multiple parallel activities for partitioned data sets) that performs the scan and heartbeats its progress back to Cadence. In the case of a host failure, the activity is retried on a different host and continues execution from the last reported progress.

A real-world example:

 * A Cadence internal system that performs a periodic scan of all records
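The heartbeat pattern described above looks roughly like this in the Java client. This is a sketch only: `scanPartition`, the progress type, and the record count are hypothetical, and the exact heartbeat-details API may differ by client version:

```java
import com.uber.cadence.activity.Activity;

public class ScanActivityImpl {

    // Hypothetical scan activity: resumes from the last heartbeated offset.
    public void scanPartition(String partition) {
        // On a retry, recover the progress recorded by the previous attempt.
        int offset = Activity.getHeartbeatDetails(Integer.class).orElse(0);

        int total = 1_000_000; // hypothetical record count, for illustration
        for (int i = offset; i < total; i++) {
            // ... process record i of the partition ...

            // Report progress so a retried attempt can continue from here.
            Activity.heartbeat(i);
        }
    }
}
```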
# Batch job

A lot of batch jobs are not pure data manipulation programs. For those, the existing big data frameworks are the best fit. But if processing a record requires external API calls that might fail and potentially take a long time, Cadence might be preferable.

One of our internal Uber customers uses Cadence for end-of-month statement generation. Each statement requires calls to multiple microservices, and some statements can be really large. Cadence was chosen because it provides hard guarantees around durability of the financial data and seamlessly deals with long running operations, retries, and intermittent failures.


# Infrastructure provisioning

Provisioning a new datacenter or a pool of machines in a public cloud is a potentially long running operation with a lot of possibilities for intermittent failures. Scale is also a concern when tens or even hundreds of thousands of resources should be provisioned and configured. One useful feature for provisioning scenarios is Cadence support for routing activity execution to a specific process or host.

A lot of operations require some sort of locking to ensure that no more than one mutation is executed on a resource at a time. Cadence provides strong guarantees of uniqueness by business ID. This can be used to implement such locking behavior in a fault tolerant and scalable manner.

Some real-world use cases:

 * Using Cadence workflows to spin up Kubernetes, by Banzai Cloud
 * Using Cadence to orchestrate cluster life cycle in HashiCorp Consul, by HashiCorp


# CI/CD and Deployment

Implementing CI/CD pipelines and deployment of applications to containers or virtual or physical machines is a non-trivial process. Its business logic has to deal with complex requirements around rolling upgrades, canary deployments, and rollbacks. Cadence is a perfect platform for building a deployment solution because it provides all the necessary guarantees and abstractions, allowing developers to focus on the business logic.

Example production systems:

 * Uber internal deployment infrastructure
 * Update push to IoT devices


# Operational management

Imagine that you have to create a self-operating database similar to Amazon RDS. Cadence is used in multiple projects that automate managing and automatic recovery of various products like MySQL, Elasticsearch and Apache Cassandra.

Such systems are usually a mixture of different use cases. They need to monitor the status of resources using polling. They have to execute orchestration API calls to administrative interfaces of a database. They have to provision new hardware or Docker instances if necessary. They need to push configuration updates and perform other actions like backups periodically.


# DSL workflows

Cadence supports implementing business logic directly in programming languages like Java and Go. But there are cases when using a domain-specific language is more appropriate. Or there might be a legacy system that uses some form of DSL for process definition but is not operationally stable and scalable. This also applies to more recent systems like Apache Airflow, various BPMN engines and AWS Step Functions.

An application that interprets the DSL definition can be written using the Cadence SDK. It automatically becomes highly fault tolerant, scalable, and durable when running on Cadence. Cadence has been used to deprecate several Uber internal DSL engines. The customers continue to use existing process definitions, but Cadence is used as the execution engine.

There are multiple benefits to unifying all company engines on top of Cadence. The most obvious one is that it is more efficient to support a single product instead of many. It is also difficult to beat the scalability and stability that Cadence gives each of these integrations. Additionally, the ability to share activities across "engines" might be a huge benefit in some cases.
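The interpreter approach usually boils down to a workflow that walks the parsed DSL and invokes activities by name. A minimal sketch under the assumption that your client version provides an untyped activity stub; the `Step` type and all names are hypothetical:

```java
import com.uber.cadence.activity.ActivityOptions;
import com.uber.cadence.workflow.ActivityStub;
import com.uber.cadence.workflow.Workflow;

import java.time.Duration;
import java.util.List;

public class DslInterpreter {

    // Hypothetical parsed DSL step: an activity name plus its argument.
    public static class Step {
        public String activityName;
        public String input;
    }

    public static class DslWorkflowImpl {

        // An untyped stub lets the interpreter call activities it only knows by name.
        private final ActivityStub activities =
            Workflow.newUntypedActivityStub(
                new ActivityOptions.Builder()
                    .setScheduleToCloseTimeout(Duration.ofMinutes(10))
                    .build());

        public void run(List<Step> steps) {
            for (Step step : steps) {
                // Each DSL step becomes a durable activity invocation.
                activities.execute(step.activityName, Void.class, step.input);
            }
        }
    }
}
```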
# Interactive application

Cadence is performant and scalable enough to support interactive applications. It can be used to track UI session state and at the same time execute background operations. For example, while placing an order, a customer might need to go through several screens while a background workflow evaluates the customer for fraudulent activity.


# Use cases

As Cadence developers, we face a difficult non-technical problem: how to position and describe the Cadence platform.

We call it workflow. But when most people hear the word "workflow" they think about low-code and UIs. While these might be useful for non-technical users, they frequently bring more pain than value to software engineers. Most UIs and low-code DSLs are awesome for "hello world" demo applications, but any diagram with 100+ elements or a few thousand lines of JSON DSL is completely impractical. So positioning Cadence as a workflow platform is not ideal, as it turns away developers who would enjoy its code-only approach.

We call it orchestrator. But this term is pretty narrow and turns away customers that want to implement business process automation solutions.

We call it durable function platform. It is technically a correct term. But most developers outside of the Microsoft ecosystem have never heard of Durable Functions.

We believe that the problem in naming comes from the fact that Cadence is indeed a new way to write distributed applications. It is generic enough that it can be applied to practically any use case that goes beyond a single request-reply. It can be used to build applications that are in the traditional areas of workflow or orchestration platforms. But it is also a huge developer productivity boost for multiple use cases that traditionally rely on databases and/or queues.

This section represents a far from complete list of use cases where Cadence is a good fit. All of them have been used by real production services inside and outside of Uber.

Don't think of this list as exhaustive. It is common to employ multiple use types in a single application. For example, an operational management use case might need periodic execution, service orchestration, polling, and event-driven, as well as interactive, parts.


# Big data and ML

A lot of companies build custom ETL and ML training and deployment solutions. Cadence is a good fit for a control plane for such applications.

One important feature of Cadence is its ability to route activity execution to a specific process or host. It is useful to control how ML models and other large files are allocated to hosts. For example, if an ML model is partitioned by city, the requests should be routed to hosts that contain the corresponding city model.
# Fault-oblivious stateful workflow code


# Overview

The Cadence core abstraction is a fault-oblivious stateful workflow. The state of the workflow code, including local variables and threads it creates, is immune to process and Cadence service failures. This is a very powerful concept as it encapsulates state, processing threads, durable timers and event handlers.


# Example

Let's look at a use case. A customer signs up for an application with a trial period. After the period, if the customer has not cancelled, they should be charged once a month for the renewal. The customer has to be notified by email about the charges and should be able to cancel the subscription at any time.

The business logic of this use case is not very complicated and can be expressed in a few dozen lines of code. But any practical implementation has to ensure that the business process is fault tolerant and scalable. There are various ways to approach the design of such a system.

One approach is to center it around a database. An application process would periodically scan database tables for customers in specific states, execute necessary actions, and update the state to reflect that. While feasible, this approach has various drawbacks. The most obvious is that the state machine of the customer state quickly becomes extremely complicated. For example, charging a credit card or sending emails can fail due to downstream system unavailability. The failed calls might need to be retried for a long time, ideally using an exponential retry policy. These calls should be throttled to not overload external systems. There should be support for poison pills to avoid blocking the whole process if a single customer record cannot be processed for whatever reason. The database-based approach also usually has performance problems. Databases are not efficient for scenarios that require constant polling for records in a specific state.

Another commonly employed approach is to use a timer service and queues. Any update is pushed to a queue, and then a worker that consumes from it updates a database and possibly pushes more messages into downstream queues. For operations that require scheduling, an external timer service can be used. This approach usually scales much better because a database is not constantly polled for changes. But it makes the programming model more complex and error prone, as usually there is no transactional update between a queuing system and a database.

With Cadence, the entire logic can be encapsulated in a simple durable function that directly implements the business logic. Because the function is stateful, the implementer doesn't need to employ any additional systems to ensure durability and fault tolerance.

Here is an example that implements the subscription management use case. It is in Java, but Go is also supported. The Python and .NET libraries are under active development.

```java
// This SubscriptionWorkflow interface is an example of defining a workflow in Cadence
public interface SubscriptionWorkflow {
  @WorkflowMethod
  void manageSubscription(Customer customer);
  @SignalMethod
  void cancelSubscription();
  @SignalMethod
  void updateBillingPeriodChargeAmount(int billingPeriodChargeAmount);
  @QueryMethod
  String queryCustomerId();
  @QueryMethod
  int queryBillingPeriodNumber();
  @QueryMethod
  int queryBillingPeriodChargeAmount();
}

// The workflow implementation is independent from the interface. That way, applications
// that start/signal/query workflows only need to know the interface.
public class SubscriptionWorkflowImpl implements SubscriptionWorkflow {

  private int billingPeriodNum;
  private boolean subscriptionCancelled;
  private Customer customer;

  private final SubscriptionActivities activities =
      Workflow.newActivityStub(SubscriptionActivities.class);

  // This manageSubscription function is an example of a workflow using Cadence
  @Override
  public void manageSubscription(Customer customer) {
    // Keep the customer in a class property so that it can be used by other methods like Query/Signal
    this.customer = customer;

    // sendWelcomeEmail is an activity in Cadence. It is implemented in user code and
    // Cadence executes this activity on a worker node when needed.
    activities.sendWelcomeEmail(customer);

    // For this example, there are a fixed number of periods in the subscription.
    // Cadence supports indefinitely running workflows, but some advanced techniques are needed.
    while (billingPeriodNum < customer.getSubscription().getPeriodsInSubscription()) {

      // Workflow.await tells Cadence to pause the workflow at this stage (saving its state to the database).
      // Execution restarts when the billing period time has passed or the subscriptionCancelled
      // event is received, whichever comes first.
      Workflow.await(customer.getSubscription().getBillingPeriod(), () -> subscriptionCancelled);

      if (subscriptionCancelled) {
        activities.sendCancellationEmailDuringActiveSubscription(customer);
        break;
      }

      // chargeCustomerForBillingPeriod is another activity.
      // Cadence will automatically handle issues such as your billing service being
      // unavailable at the time this activity is invoked.
      activities.chargeCustomerForBillingPeriod(customer, billingPeriodNum);

      billingPeriodNum++;
    }

    if (!subscriptionCancelled) {
      activities.sendSubscriptionOverEmail(customer);
    }

    // The workflow is finished once this function returns.
  }

  @Override
  public void cancelSubscription() {
    subscriptionCancelled = true;
  }

  @Override
  public void updateBillingPeriodChargeAmount(int billingPeriodChargeAmount) {
    customer.getSubscription().setBillingPeriodCharge(billingPeriodChargeAmount);
  }

  @Override
  public String queryCustomerId() {
    return customer.getId();
  }

  @Override
  public int queryBillingPeriodNumber() {
    return billingPeriodNum;
  }

  @Override
  public int queryBillingPeriodChargeAmount() {
    return customer.getSubscription().getBillingPeriodCharge();
  }
}
```
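For context, this is roughly how an application would start and interact with the workflow above through its interface. A minimal sketch, assuming a `WorkflowClient` built as in the hello world sample; the task list name and timeout are illustrative:

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowOptions;

import java.time.Duration;

public class SubscriptionStarter {

    public static void startAndManage(WorkflowClient workflowClient, Customer customer) {
        SubscriptionWorkflow workflow =
            workflowClient.newWorkflowStub(
                SubscriptionWorkflow.class,
                new WorkflowOptions.Builder()
                    .setTaskList("SubscriptionTaskList")
                    .setExecutionStartToCloseTimeout(Duration.ofDays(366))
                    .build());

        // Start asynchronously; returns as soon as the workflow execution is created.
        WorkflowClient.start(workflow::manageSubscription, customer);

        // The same stub can later signal and query the running workflow.
        workflow.updateBillingPeriodChargeAmount(120);
        int period = workflow.queryBillingPeriodNumber();
        workflow.cancelSubscription();
    }
}
```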
Again, note that this code directly implements the business logic. If any of the invoked operations (aka activities) take a long time, the code does not have to change. It is okay to block on chargeCustomerForBillingPeriod for a day if the downstream processing service is down that long. In the same way, a blocking sleep for a billing period like 30 days is a normal operation inside the workflow code.

Cadence has practically no scalability limits on the number of open workflow instances. So even if your site has hundreds of millions of consumers, the above code is not going to change.

A question commonly asked by developers learning Cadence is "How do I handle worker process failure/restart in my workflow?" The answer is that you do not. The workflow code is completely oblivious to any failures and downtime of workers or even the Cadence service itself. As soon as they are recovered and the workflow needs to handle some event, like a timer firing or an activity completion, the current state of the workflow is fully restored and the execution is continued. The only reason for a workflow failure is the business code throwing an exception, not underlying infrastructure outages.

Another commonly asked question is whether a worker can handle more workflow instances than its cache size or the number of threads it can support. The answer is that a workflow, when in a blocked state, can be safely removed from a worker. Later it can be resurrected on a different or the same worker when the need (in the form of an external event) arises. So a single worker can handle millions of open workflows, assuming it can handle the update rate.


# State Recovery and Determinism

The state recovery utilizes event sourcing, which puts a few restrictions on how the code is written. The main restriction is that the workflow code must be deterministic, which means that it must produce exactly the same result if executed multiple times. This rules out any external API calls from the workflow code, as external calls can fail intermittently or change their output at any time. That is why all communication with the external world should happen through activities. For the same reason, workflow code must use Cadence APIs to get the current time, sleep, and create new threads.

To understand the Cadence execution model as well as the recovery mechanism, watch the following webcast. The animation covering recovery starts at 15:50.


# ID Uniqueness

A workflow ID is assigned by the client when starting a workflow. It is usually a business-level ID like a customer ID or order ID.

Cadence guarantees that there can be only one workflow (across all workflow types) with a given ID open per domain at any time. An attempt to start a workflow with the same ID is going to fail with a WorkflowExecutionAlreadyStarted error.

An attempt to start a workflow when there is a completed workflow with the same ID depends on the WorkflowIdReusePolicy option:

 * AllowDuplicateFailedOnly means that it is allowed to start a workflow only if a previously executed workflow with the same ID failed.
 * AllowDuplicate means that it is allowed to start the workflow independently of the previous completion status.
 * RejectDuplicate means that it is not allowed to start a workflow using the same workflow ID at all.
 * TerminateIfRunning means terminating the currently running workflow if one exists, and starting a new one.

The default is AllowDuplicateFailedOnly.

To distinguish multiple runs of a workflow with the same workflow ID, Cadence identifies a workflow with two IDs: Workflow ID and Run ID. Run ID is a service-assigned UUID. To be precise, any workflow execution is uniquely identified by a triple: Domain Name, Workflow ID and Run ID.
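In the Java client, both the workflow ID and the reuse policy are set on the workflow options when creating a stub. A minimal sketch, reusing the subscription interface from above; the task list and ID format are illustrative:

```java
import com.uber.cadence.WorkflowIdReusePolicy;
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowOptions;

import java.time.Duration;

public class IdReuseExample {

    public static SubscriptionWorkflow newStub(WorkflowClient workflowClient, String customerId) {
        return workflowClient.newWorkflowStub(
            SubscriptionWorkflow.class,
            new WorkflowOptions.Builder()
                .setTaskList("SubscriptionTaskList")
                .setExecutionStartToCloseTimeout(Duration.ofDays(366))
                // Business-level ID: at most one open subscription workflow per customer.
                .setWorkflowId("subscription-" + customerId)
                .setWorkflowIdReusePolicy(WorkflowIdReusePolicy.RejectDuplicate)
                .build());
    }
}
```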
# Child Workflow

A workflow can execute other workflows as child workflows. A child workflow's completion or failure is reported to its parent.

Some reasons to use child workflows are listed below; a minimal sketch of invoking one follows the list.

 * A child workflow can be hosted by a separate set of workers which don't contain the parent workflow code. So it would act as a separate service that can be invoked from multiple other workflows.
 * A single workflow has a limited event history size. For example, it cannot execute 100k activities. Child workflows can be used to partition the problem into smaller chunks. One parent with 1000 children, each executing 1000 activities, is 1 million executed activities.
 * A child workflow can be used to manage some resource using its ID to guarantee uniqueness. For example, a workflow that manages host upgrades can have a child per host (the host name being a workflow ID) and use them to ensure that all operations on the host are serialized.
 * A child workflow can be used to execute some periodic logic without blowing up the parent history size. When a parent starts a child, the child executes the periodic logic, calling continue-as-new as many times as needed, then completes. From the parent's point of view, it is just a single child invocation.

The main limitation of a child workflow versus collocating all the application logic in a single workflow is the lack of shared state. A parent and child can communicate only through asynchronous signals. But if there is tight coupling between them, it might be simpler to use a single workflow and just rely on a shared object state.

We recommend starting from a single workflow implementation if your problem has bounded size in terms of the number of executed activities and processed events. It is more straightforward than multiple asynchronously communicating workflows.
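Here is the promised sketch of a parent invoking a child workflow with the Java client; the interfaces and names are hypothetical:

```java
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

import java.util.List;

public class ParentChildExample {

    // Hypothetical child workflow interface, e.g. one child per host name.
    public interface UpgradeHostWorkflow {
        @WorkflowMethod
        void upgrade(String hostName);
    }

    public interface UpgradeFleetWorkflow {
        @WorkflowMethod
        void upgradeFleet(List<String> hosts);
    }

    public static class UpgradeFleetWorkflowImpl implements UpgradeFleetWorkflow {
        @Override
        public void upgradeFleet(List<String> hosts) {
            for (String host : hosts) {
                // A typed child workflow stub; the call blocks until the child
                // completes, and its completion or failure is reported to this parent.
                UpgradeHostWorkflow child =
                    Workflow.newChildWorkflowStub(UpgradeHostWorkflow.class);
                child.upgrade(host);
            }
        }
    }
}
```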
# Workflow Retries

Workflow code is unaffected by infrastructure-level downtime and failures. But it still can fail due to business-logic-level failures. For example, an activity can fail by exceeding the retry interval with the error not handled by application code, or the workflow code can have a bug.

Some workflows require a guarantee that they keep running even in the presence of such failures. To support such use cases, an optional exponential retry policy can be specified when starting a workflow. When it is specified, a workflow failure restarts the workflow from the beginning after the calculated retry interval. Following are the retry policy parameters (a sketch of configuring them follows the list):

 * InitialInterval is a delay before the first retry.
 * BackoffCoefficient. Retry policies are exponential. The coefficient specifies how fast the retry interval grows. A coefficient of 1 means that the retry interval is always equal to the InitialInterval.
 * MaximumInterval specifies the maximum interval between retries. Useful for coefficients of more than 1.
 * MaximumAttempts specifies how many times to attempt to execute a workflow in the presence of failures. If this limit is exceeded, the workflow fails without retry. Not required if ExpirationInterval is specified.
 * ExpirationInterval specifies for how long to attempt executing a workflow in the presence of failures. If this interval is exceeded, the workflow fails without retry. Not required if MaximumAttempts is specified.
 * NonRetryableErrorReasons allows specifying errors that shouldn't be retried. For example, retrying an invalid-arguments error doesn't make sense in some scenarios.
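In the Java client these parameters map onto the RetryOptions builder. A minimal sketch; the values are illustrative, and the exact setter names may vary slightly by client version:

```java
import com.uber.cadence.common.RetryOptions;

import java.time.Duration;

public class RetryPolicyExample {

    // Sketch: a retry policy mirroring the parameters described above.
    public static RetryOptions exampleRetryPolicy() {
        return new RetryOptions.Builder()
            .setInitialInterval(Duration.ofSeconds(10)) // InitialInterval
            .setBackoffCoefficient(2.0)                 // BackoffCoefficient
            .setMaximumInterval(Duration.ofMinutes(10)) // MaximumInterval
            .setExpiration(Duration.ofDays(1))          // ExpirationInterval
            // MaximumAttempts could be set instead of (or in addition to) Expiration.
            .build();
    }
}
```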
it is implemented in user code and cadence executes this activity on a worker node when needed.\n activities.sendwelcomeemail(customer);\n\n // for this example, there are a fixed number of periods in the subscription\n // cadence supports indefinitely running workflow but some advanced techniques are needed\n while (billingperiodnum < customer.getsubscription().getperiodsinsubcription()) {\n\n // workflow.await tells cadence to pause the workflow at this stage (saving it's state to the database)\n // execution restarts when the billing period time has passed or the subscriptioncancelled event is received , whichever comes first\n workflow.await(customer.getsubscription().getbillingperiod(), () -> subscriptioncancelled);\n\n if (subscriptioncancelled) {\n activities.sendcancellationemailduringactivesubscription(customer);\n break;\n }\n \n // chargecustomerforbillingperiod is another activity\n // cadence will automatically handle issues such as your billing service being unavailable at the time\n // this activity is invoked\n activities.chargecustomerforbillingperiod(customer, billingperiodnum);\n\n billingperiodnum++;\n }\n\n if (!subscriptioncancelled) {\n activities.sendsubscriptionoveremail(customer);\n }\n \n // the workflow is finished once this function returns\n }\n\n @override\n public void cancelsubscription() {\n subscriptioncancelled = true;\n }\n\n @override\n public void updatebillingperiodchargeamount(int billingperiodchargeamount) {\n customer.getsubscription().setbillingperiodcharge(billingperiodchargeamount);\n }\n\n @override\n public string querycustomerid() {\n return customer.getid();\n }\n\n @override\n public int querybillingperiodnumber() {\n return billingperiodnum;\n }\n\n @override\n public int querybillingperiodchargeamount() {\n return customer.getsubscription().getbillingperiodcharge();\n }\n}\n\n\n\nagain, note that this code directly implements the business logic. if any of the invoked operations (aka ) takes a long time, the code is not going to change. it is okay to block on chargecustomerforbillingperiod for a day if the downstream processing service is down that long. the same way that blocking sleep for a billing period like 30 days is a normal operation inside the code.\n\ncadence has practically no scalability limits on the number of open instances. so even if your site has hundreds of millions of consumers, the above code is not going to change.\n\nthe commonly asked question by developers that learn cadence is \"how do i handle process failure/restart in my \"? the answer is that you do not. the code is completely oblivious to any failures and downtime of or even the cadence service itself. as soon as they are recovered and the needs to handle some , like timer or an completion, the current state of the is fully restored and the execution is continued. the only reason for a failure is the business code throwing an exception, not underlying infrastructure outages.\n\nanother commonly asked question is whether a can handle more instances than its cache size or number of threads it can support. the answer is that a , when in a blocked state, can be safely removed from a . later it can be resurrected on a different or the same when the need (in the form of an external ) arises. so a single can handle millions of open , assuming it can handle the update rate.\n\n\n# state recovery and determinism\n\nthe state recovery utilizes sourcing which puts a few restrictions on how the code is written. 
# ID Uniqueness

A workflow ID is assigned by a client when starting a workflow. It is usually a business-level ID like a customer ID or an order ID.

Cadence guarantees that there can be only one workflow (across all workflow types) with a given ID open per domain at any time. An attempt to start a workflow with the same ID is going to fail with a WorkflowExecutionAlreadyStarted error.

An attempt to start a workflow when there is a completed workflow with the same ID depends on the WorkflowIdReusePolicy option:

 * AllowDuplicateFailedOnly means that it is allowed to start a workflow only if a previously executed workflow with the same ID failed.
 * AllowDuplicate means that it is allowed to start a workflow independently of the previous completion status.
 * RejectDuplicate means that it is not allowed to start a workflow using the same workflow ID at all.
 * TerminateIfRunning means terminating the currently running workflow if one exists, and starting a new one.

The default is AllowDuplicateFailedOnly.

To distinguish multiple runs of a workflow with the same workflow ID, Cadence identifies a workflow with two IDs: workflow ID and run ID. The run ID is a service-assigned UUID. To be precise, any workflow execution is uniquely identified by a triple: domain name, workflow ID, and run ID.
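In the Java client, the reuse policy can be set on the workflow options when a stub is created. A minimal sketch, reusing the SubscriptionWorkflow interface from the earlier example; the task list name is hypothetical:

```java
import com.uber.cadence.WorkflowIdReusePolicy;
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowOptions;
import java.time.Duration;

public class IdPolicyExample {
  // Reject any second execution with the same business-level ID,
  // even after the first one completes
  static SubscriptionWorkflow stubWithRejectDuplicate(WorkflowClient client) {
    WorkflowOptions options = new WorkflowOptions.Builder()
        .setTaskList("SubscriptionTaskList") // hypothetical task list
        .setWorkflowId("customer-123")       // business-level workflow ID
        .setWorkflowIdReusePolicy(WorkflowIdReusePolicy.RejectDuplicate)
        .setExecutionStartToCloseTimeout(Duration.ofDays(365))
        .build();
    return client.newWorkflowStub(SubscriptionWorkflow.class, options);
  }
}
```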
# Child Workflow

A workflow can execute other workflows as child workflows. A child workflow completion or failure is reported to its parent.

Some reasons to use child workflows are:

 * A child workflow can be hosted by a separate set of workers which don't contain the parent workflow code. So it would act as a separate service that can be invoked from multiple other workflows.
 * A single workflow has a limited event history size. For example, it cannot execute 100k activities. Child workflows can be used to partition the problem into smaller chunks. One parent with 1000 children, each executing 1000 activities, is 1 million executed activities.
 * A child workflow can be used to manage some resource using its ID to guarantee uniqueness. For example, a workflow that manages host upgrades can have a child workflow per host (the host name being a workflow ID) and use them to ensure that all operations on the host are serialized.
 * A child workflow can be used to execute some periodic logic without blowing up the parent history size. When a parent starts a child workflow, it executes the periodic logic calling continue-as-new as many times as needed, then completes. From the parent's point of view, it is just a single child workflow invocation.

The main limitation of a child workflow versus collocating all the application logic in a single workflow is the lack of shared state. A parent and child can communicate only through asynchronous signals. But if there is tight coupling between them, it might be simpler to use a single workflow and just rely on a shared object state.

We recommend starting from a single workflow implementation if your problem has bounded size in terms of the number of executed activities and processed signals. It is more straightforward than multiple asynchronously communicating workflows.
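In the Java client, a parent invokes a child through a child workflow stub. A minimal sketch, assuming a hypothetical HostUpgradeWorkflow child interface matching the host-upgrade example above:

```java
import com.uber.cadence.workflow.ChildWorkflowOptions;
import com.uber.cadence.workflow.Workflow;

public class ParentWorkflowImpl implements ParentWorkflow { // hypothetical parent interface
  @Override
  public void upgradeHost(String hostName) {
    ChildWorkflowOptions options = new ChildWorkflowOptions.Builder()
        // Use the host name as the child workflow ID so that ID uniqueness
        // serializes all operations for a given host.
        .setWorkflowId("host-upgrade-" + hostName)
        .build();

    HostUpgradeWorkflow child =
        Workflow.newChildWorkflowStub(HostUpgradeWorkflow.class, options);

    // A synchronous call blocks until the child completes; a child failure
    // is reported to this parent as an exception.
    child.upgrade(hostName);
  }
}
```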
# Workflow Retries

Workflow code is unaffected by infrastructure-level downtime and failures. But it still can fail due to business-logic-level failures. For example, an activity can fail by exceeding the retry interval with the error not handled by application code, or the workflow code can have a bug.

Some workflows require a guarantee that they keep running even in the presence of such failures. To support such use cases, an optional exponential retry policy can be specified when starting a workflow. When it is specified, a workflow failure restarts the workflow from the beginning after the calculated retry interval. The following are the retry policy parameters (a configuration sketch follows the list):

 * InitialInterval is a delay before the first retry.
 * BackoffCoefficient. Retry policies are exponential. The coefficient specifies how fast the retry interval grows. A coefficient of 1 means that the retry interval is always equal to the InitialInterval.
 * MaximumInterval specifies the maximum interval between retries. Useful for coefficients of more than 1.
 * MaximumAttempts specifies how many times to attempt to execute a workflow in the presence of failures. If this limit is exceeded, the workflow fails without retry. Not required if ExpirationInterval is specified.
 * ExpirationInterval specifies for how long to attempt executing a workflow in the presence of failures. If this interval is exceeded, the workflow fails without retry. Not required if MaximumAttempts is specified.
 * NonRetryableErrorReasons allows specifying errors that shouldn't be retried. For example, retrying an invalid-arguments error doesn't make sense in some scenarios.
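A minimal sketch of attaching such a retry policy when starting a workflow from the Java client; the parameter values and task list name are illustrative only, and exact option names may vary slightly across client versions:

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowOptions;
import com.uber.cadence.common.RetryOptions;
import java.time.Duration;

public class RetryingStarter {
  static SubscriptionWorkflow stubWithRetryPolicy(WorkflowClient client) {
    RetryOptions retryOptions = new RetryOptions.Builder()
        .setInitialInterval(Duration.ofSeconds(10))    // InitialInterval
        .setBackoffCoefficient(2.0)                    // BackoffCoefficient
        .setMaximumInterval(Duration.ofMinutes(10))    // MaximumInterval
        .setExpiration(Duration.ofDays(1))             // ExpirationInterval
        .setDoNotRetry(IllegalArgumentException.class) // NonRetryableErrorReasons
        .build();

    WorkflowOptions options = new WorkflowOptions.Builder()
        .setTaskList("SubscriptionTaskList") // hypothetical task list
        .setExecutionStartToCloseTimeout(Duration.ofDays(30))
        .setRetryOptions(retryOptions) // the whole workflow restarts on failure
        .build();
    return client.newWorkflowStub(SubscriptionWorkflow.class, options);
  }
}
```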
# How Does Workflow Run

You may wonder how this works behind the scenes. Workflow decisions drive the whole workflow execution; they are the internal mechanism the client and the server use to run your workflows. If this is interesting to you, read this Stack Overflow QA.


# Event handling

Fault-oblivious stateful workflows can be signaled about an external event. A signal is always point to point, destined to a specific workflow instance. Signals are always processed in the order in which they are received.

There are multiple scenarios for which signals are useful.

# Event Aggregation and Correlation

Cadence is not a replacement for generic stream processing engines like Apache Flink or Apache Spark. But in certain scenarios it is a better fit. For example, when all events that should be aggregated and correlated are always applied to some business entity with a clear ID. And then, when a certain condition is met, actions should be executed.

The main limitation is that a single Cadence workflow has a pretty limited throughput, while the number of workflows is practically unlimited. So if you need to aggregate events per customer, and your application has 100 million customers and each customer doesn't generate more than 20 events per second, then Cadence would work fine. But if you want to aggregate all events for US customers, then the rate of these events would be beyond the capacity of a single workflow.

For example, an IoT device generates events, and a certain sequence of events indicates that the device should be reprovisioned. A workflow instance per device would be created, and each instance would manage the state machine of the device and execute the reprovision activity when necessary.

Another use case is a customer loyalty program. Every time a customer makes a purchase, an event is generated into Apache Kafka for downstream systems to process. A loyalty service Kafka consumer receives the event and signals a customer workflow about the purchase using the Cadence signalWorkflowExecution API. The workflow accumulates the count of the purchases. If a specified threshold is achieved, the workflow executes an activity that notifies some external service that the customer has reached the next level of the loyalty program. The workflow also executes activities to periodically message the customer about their current status.
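A minimal sketch of the loyalty use case in the Java client. The interface, threshold, and method names are hypothetical; it only shows the signal-accumulation pattern (a production version would use continue-as-new rather than an unbounded loop):

```java
import com.uber.cadence.workflow.SignalMethod;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

interface LoyaltyWorkflow { // hypothetical interface
  @WorkflowMethod
  void track(String customerId);

  @SignalMethod
  void onPurchase(double amount);
}

class LoyaltyWorkflowImpl implements LoyaltyWorkflow {
  private static final int THRESHOLD = 10; // illustrative threshold
  private int purchases;

  @Override
  public void track(String customerId) {
    while (true) { // sketch only; use continue-as-new for indefinitely running workflows
      // Block durably until enough purchase signals have been accumulated
      Workflow.await(() -> purchases >= THRESHOLD);
      purchases = 0;
      // An activity stub would be invoked here to notify the external
      // service that the customer reached the next loyalty level.
    }
  }

  @Override
  public void onPurchase(double amount) {
    purchases++; // signals are delivered in order, one at a time
  }
}
```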
# Human Tasks

A lot of business processes involve human participants. The standard Cadence pattern for implementing an external interaction is to execute an activity that creates a human task in an external system. It can be an email with a form, a record in some external database, or a mobile app notification. When a user changes the status of the task, a signal is sent to the corresponding workflow. For example, when the form is submitted or the mobile app notification is acknowledged. Some tasks have multiple possible actions, like claim, return, complete, reject. So multiple signals can be sent in relation to them.

# Process Execution Alteration

Some business processes should change their behavior if some external event has happened. For example, while executing an order shipment workflow, any change in item quantity could be delivered in the form of a signal.

Another example is a service deployment workflow. While rolling out a new software version to a Kubernetes cluster, some problem was identified. A signal can be used to ask the workflow to pause while the problem is investigated. Then either a continue or a rollback signal can be used to execute the appropriate action.

# Synchronization

Cadence workflows are strongly consistent, so they can be used as a synchronization point for executing actions. For example, there is a requirement that all messages for a single user are processed sequentially, but the underlying messaging infrastructure can deliver them in parallel. The Cadence solution would be to have a workflow per user and signal it when an event is received. Then the workflow would buffer all signals in an internal data structure and call an activity for every signal received. See the following Stack Overflow answer for an example.


# Synchronous query

Workflow code is stateful, with the Cadence framework preserving it over various software and hardware failures. The state is constantly mutated during workflow execution. To expose this internal state to the external world, Cadence provides a synchronous query feature. From the workflow implementer's point of view, the query is exposed as a synchronous callback that is invoked by external entities. Multiple such callbacks can be provided per workflow type, exposing different information to different external systems.

To execute a query, an external client calls a synchronous Cadence API providing domain, workflowID, query name, and optional query arguments.

Query callbacks must be read-only, not mutating the workflow state in any way. The other limitation is that the query callback cannot contain any blocking code. Both of the above limitations rule out the ability to invoke activities from query handlers.

The Cadence team is currently working on implementing an update feature that would be similar to a query in the way it is invoked, but would support workflow state mutation and local activity invocations. From the user's point of view, an update is similar to a signal plus a strongly consistent query, but implemented in a much less expensive way in Cadence.
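A minimal sketch of executing a query from an external client in Java, reusing the SubscriptionWorkflow interface from earlier. The workflow ID is illustrative, and the query-type string convention for the untyped variant is an assumption that differs between client versions:

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowStub;

public class QueryExample {
  static void queryBillingPeriod(WorkflowClient client) {
    // Typed stub: bind to a running execution by workflow ID and call the query method
    SubscriptionWorkflow workflow =
        client.newWorkflowStub(SubscriptionWorkflow.class, "customer-123");
    int period = workflow.queryBillingPeriodNumber();

    // Untyped stub: address the execution by ID and query by name.
    // NOTE: the default query-type string differs between client versions
    // (bare method name vs "Interface::method"); check your version.
    WorkflowStub stub = client.newUntypedWorkflowStub("customer-123");
    Integer periodAgain = stub.query("queryBillingPeriodNumber", Integer.class);
  }
}
```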
# Stack Trace Query

The Cadence client libraries expose some predefined queries out of the box. Currently the only supported built-in query is __stack_trace. This query returns the stacks of all workflow-owned threads. This is a great way to troubleshoot any workflow in production.

Example:

```
$ cadence --do samples-domain wf query -w <workflowID> -qt __stack_trace
"coroutine 1 [blocked on selector-1.Select]:
main.sampleSignalCounterWorkflow(0x1a99ae8, 0xc00009d700, 0x0, 0x0, 0x0)
	/Users/qlong/indeed/cadence-samples/cmd/samples/recipes/signalcounter/signal_counter_workflow.go:38 +0x1be
reflect.Value.call(0x1852ac0, 0x19cb608, 0x13, 0x1979180, 0x4, 0xc00045aa80, 0x2, 0x2, 0x2, 0x18, ...)
	/usr/local/Cellar/go/1.16.3/libexec/src/reflect/value.go:476 +0x8e7
reflect.Value.Call(0x1852ac0, 0x19cb608, 0x13, 0xc00045aa80, 0x2, 0x2, 0x1, 0x2, 0xc00045a720)
	/usr/local/Cellar/go/1.16.3/libexec/src/reflect/value.go:337 +0xb9
go.uber.org/cadence/internal.(*workflowEnvironmentInterceptor).ExecuteWorkflow(0xc00045a720, 0x1a99ae8, 0xc00009d700, 0xc0001ca820, 0x20, 0xc00007fad0, 0x1, 0x1, 0x1, 0x1, ...)
	/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/workflow.go:372 +0x2cb
go.uber.org/cadence/internal.(*workflowExecutor).Execute(0xc000098d80, 0x1a99ae8, 0xc00009d700, 0xc0001b127e, 0x2, 0x2, 0xc00044cb01, 0xc000070101, 0xc000073738, 0x1729f25, ...)
	/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_worker.go:699 +0x28d
go.uber.org/cadence/internal.(*syncWorkflowDefinition).Execute.func1(0x1a99ce0, 0xc00045a9f0)
	/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_workflow.go:466 +0x106"
```


# Activities

Fault-oblivious stateful workflow code is the core abstraction of Cadence. But, due to deterministic execution requirements, workflows are not allowed to call any external API directly. Instead they orchestrate the execution of activities. In its simplest form, a Cadence activity is a function or an object method in one of the supported languages. Cadence does not recover activity state in case of failures. Therefore an activity function is allowed to contain any code without restrictions.

Activities are invoked asynchronously through task lists. A task list is essentially a queue used to store an activity task until it is picked up by an available worker. The worker processes an activity by invoking its implementation function. When the function returns, the worker reports the result back to the Cadence service, which in turn notifies the workflow about completion. It is possible to implement an activity fully asynchronously by completing it from a different process.

# Timeouts

Cadence does not impose any system limit on activity duration. It is up to the application to choose the timeouts for its execution. These are the configurable timeouts:

 * ScheduleToStart is the maximum time from a workflow requesting activity execution to a worker starting its execution. The usual reason for this timeout to fire is all workers being down or not being able to keep up with the request rate. We recommend setting this timeout to the maximum time a workflow is willing to wait for an activity execution in the presence of all possible worker outages.
 * StartToClose is the maximum time an activity can execute after it was picked up by a worker.
 * ScheduleToClose is the maximum time from the workflow requesting an activity execution to its completion.
 * Heartbeat is the maximum time between heartbeat requests. See Long Running Activities.

Either ScheduleToClose or both ScheduleToStart and StartToClose timeouts are required.

Timeouts are the key to managing activities. For more tips on how to set proper timeouts, read this Stack Overflow QA.
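A minimal sketch of configuring these timeouts on an activity stub in the Java client. This is a fragment meant to live inside a workflow implementation such as SubscriptionWorkflowImpl; the values are illustrative:

```java
import com.uber.cadence.activity.ActivityOptions;
import com.uber.cadence.workflow.Workflow;
import java.time.Duration;

// Inside a workflow implementation:
ActivityOptions activityOptions = new ActivityOptions.Builder()
    .setScheduleToStartTimeout(Duration.ofMinutes(5)) // max wait for an available worker
    .setStartToCloseTimeout(Duration.ofMinutes(10))   // max single execution time
    .setHeartbeatTimeout(Duration.ofSeconds(30))      // see Long Running Activities below
    .build();

SubscriptionActivities activities =
    Workflow.newActivityStub(SubscriptionActivities.class, activityOptions);
```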
# Retries

As Cadence doesn't recover an activity's state, and activities can communicate with any external system, failures are expected. Therefore, Cadence supports automatic activity retries. Any activity, when invoked, can have an associated retry policy. Here are the retry policy parameters:

 * InitialInterval is a delay before the first retry.
 * BackoffCoefficient. Retry policies are exponential. The coefficient specifies how fast the retry interval grows. A coefficient of 1 means that the retry interval is always equal to the InitialInterval.
 * MaximumInterval specifies the maximum interval between retries. Useful for coefficients of more than 1.
 * MaximumAttempts specifies how many times to attempt to execute an activity in the presence of failures. If this limit is exceeded, the error is returned back to the workflow that invoked the activity. Not required if ExpirationInterval is specified.
 * ExpirationInterval specifies for how long to attempt executing an activity in the presence of failures. If this interval is exceeded, the error is returned back to the workflow that invoked the activity. Not required if MaximumAttempts is specified.
 * NonRetryableErrorReasons allows you to specify errors that shouldn't be retried. For example, retrying an invalid-arguments error doesn't make sense in some scenarios.

There are scenarios when not a single activity but rather a whole part of a workflow should be retried on failure. For example, a media encoding workflow that downloads a file to a host, processes it, and then uploads the result back to storage. In this workflow, if the host that hosts the activity worker dies, all three activities should be retried on a different host. Such retries should be handled by the workflow code, as they are very use case specific.

# Long Running Activities

For long running activities, we recommend that you specify a relatively short heartbeat timeout and constantly heartbeat. This way failures for even very long running activities can be handled in a timely manner. An activity that specifies the heartbeat timeout is expected to call the heartbeat method periodically from its implementation.

A heartbeat request can include application-specific payload. This is useful to save activity execution progress. If an activity times out due to a missed heartbeat, the next attempt to execute it can access that progress and continue its execution from that point.

Long running activities can be used as a special case of leader election. Cadence timeouts use second resolution, so it is not a solution for realtime applications. But if it is okay to react to the process failure within a few seconds, then a Cadence heartbeating activity is a good fit.

One common use case for such leader election is monitoring. An activity executes an internal loop that periodically polls some API and checks for some condition. It also heartbeats on every iteration. If the condition is satisfied, the activity completes, which lets its workflow handle it. If the activity worker dies, the activity times out after the heartbeat interval is exceeded and is retried on a different worker. The same pattern works for polling for new files in Amazon S3 buckets or responses in REST or other synchronous APIs.
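A minimal sketch of such a heartbeating activity implementation in the Java client. The activity interface and the polling helpers are hypothetical; the heartbeat calls are from the client's Activity API (exact method availability may vary by version):

```java
import com.uber.cadence.activity.Activity;

public class MonitoringActivitiesImpl implements MonitoringActivities { // hypothetical interface
  @Override
  public void pollUntilReady() {
    // On a retry after a heartbeat timeout, resume from the recorded progress
    Integer last = Activity.getHeartbeatDetails(Integer.class);
    int iteration = last == null ? 0 : last;

    while (!conditionSatisfied()) { // hypothetical application check
      iteration++;
      // Records progress and tells the service this activity is still alive.
      // A missed heartbeat times the activity out so it can be retried
      // on another worker.
      Activity.heartbeat(iteration);
      sleepQuietly(1_000);
    }
  }

  private boolean conditionSatisfied() { return false; } // placeholder

  private void sleepQuietly(long millis) {
    try { Thread.sleep(millis); } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
```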
# Cancellation

A workflow can request an activity cancellation. Currently the only way for an activity to learn that it was cancelled is through heartbeating. The heartbeat request fails with a special error indicating that the activity was cancelled. Then it is up to the activity implementation to perform all the necessary cleanup and report that it is done with it. It is up to the workflow implementation to decide if it wants to wait for the cancellation confirmation or just proceed without waiting.

Another common case for heartbeat failure is that the workflow that invoked the activity is in a completed state. In this case the activity is expected to perform cleanup as well.

# Activity Task Routing through Task Lists

Activities are dispatched to workers through task lists. Task lists are queues that workers listen on. Task lists are highly dynamic and lightweight. They don't need to be explicitly registered. And it is okay to have one task list per worker process. It is normal for more than one activity type to be invoked through a single task list. And it is normal in some cases (like host routing) to invoke the same activity type on multiple task lists.

Here are some use cases for employing multiple activity task lists in a single workflow:

 * Flow control. A worker that consumes from a task list asks for an activity task only when it has available capacity. So workers are never overloaded by request spikes. If activity executions are requested faster than workers can process them, they are backlogged in the task list.
 * Throttling. Each activity worker can specify the maximum rate at which it is allowed to process activities on a task list. It does not exceed this limit even if it has spare capacity. There is also support for global rate limiting. This limit works across all workers for the given task list. It is frequently used to limit load on a downstream service that an activity calls into.
 * Deploying a set of activities independently. Think about a service that hosts activities and can be deployed independently from other activities and workflows. To send activity tasks to this service, a separate task list is needed.
 * Workers with different capabilities. For example, workers on GPU boxes vs non-GPU boxes. Having two separate task lists in this case allows workflows to pick which one to send an activity execution request to.
 * Routing an activity to a specific host. For example, in the media encoding case, the transform and upload activities have to run on the same host as the download one.
 * Routing an activity to a specific process. For example, some activities load large data sets and cache them in the process. The activities that rely on this data set should be routed to the same process.
 * Multiple priorities. One task list per priority and a worker pool per priority.
 * Versioning. A new backwards-incompatible implementation of an activity might use a different task list.

# Asynchronous Activity Completion

By default an activity is a function or a method, depending on the client-side library language. As soon as the function returns, the activity completes. But in some cases an activity implementation is asynchronous. For example, it is forwarded to an external system through a message queue, and the reply comes through a different queue.

To support such use cases, Cadence allows activity implementations that do not complete upon function completion. A separate API should be used in this case to complete the activity. This API can be called from any process, even in a different programming language, than the one the original activity worker used.
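A minimal sketch of asynchronous completion in the Java client. The activity captures its task token and returns without completing; another process later completes it through an ActivityCompletionClient. The queue plumbing is elided and the activity interface is hypothetical:

```java
import com.uber.cadence.activity.Activity;
import com.uber.cadence.client.ActivityCompletionClient;
import com.uber.cadence.client.WorkflowClient;

public class AsyncCompletionExample {

  // Activity implementation: returns without completing the activity
  public static class GreetingActivitiesImpl implements GreetingActivities { // hypothetical
    @Override
    public String composeGreeting(String name) {
      byte[] taskToken = Activity.getTaskToken(); // identifies this invocation
      // Hand the token to the external system here, e.g. publish it to a queue.
      Activity.doNotCompleteOnReturn(); // Cadence keeps the activity open
      return null; // return value is ignored
    }
  }

  // Possibly a different process (or language): complete using the token
  static void completeFromExternalSystem(WorkflowClient client, byte[] taskToken) {
    ActivityCompletionClient completionClient = client.newActivityCompletionClient();
    completionClient.complete(taskToken, "Hello from the external system");
  }
}
```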
# Local Activities

Some activities are very short lived and do not need the queuing semantics, flow control, rate limiting, and routing capabilities. For these, Cadence supports the so-called local activity feature. Local activities are executed in the same worker process as the workflow that invoked them.

What you trade off by using local activities:

 * Less debuggability: there are no ActivityTaskScheduled and ActivityTaskStarted events, so you are not able to see the input.
 * No task list dispatching: the worker is always the same as the workflow decision worker. You don't have a choice of using activity workers.
 * More possibility of duplicated execution. Though a regular activity could also execute multiple times when using a retry policy, a local activity has a higher chance of this occurring, because a local activity result is not recorded into the history until DecisionTaskCompleted. Also, when executing multiple local activities in a row, the SDK (Java and Golang) optimizes recording so that results are only recorded at an interval (before the current decision task timeout).
 * No long-running capability with heartbeat recording.
 * No task list global rate limiting.

Consider using local activities for functions that:

 * are idempotent
 * run no longer than a few seconds
 * do not require global rate limiting
 * do not require routing to a specific worker or pool of workers
 * can be implemented in the same binary as the workflow that invokes them
 * are non business critical, so that losing some debuggability is okay (e.g. logging, loading config)
 * are needed for optimization. For example, if there are many timers firing at the same time to invoke activities, it could overload Cadence's server. Using local activities can help save the server capacity.

The main benefit of local activities is that they are much more efficient in utilizing Cadence service resources and have much lower latency overhead compared to the usual activity invocation.
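A minimal sketch of invoking a local activity from workflow code in the Java client. This fragment belongs inside a workflow implementation; the activities interface is hypothetical and the timeout illustrative:

```java
import com.uber.cadence.activity.LocalActivityOptions;
import com.uber.cadence.workflow.Workflow;
import java.time.Duration;

// Inside a workflow implementation: this stub runs the activity in the same
// worker process, skipping task list dispatch entirely.
LocalActivityOptions localOptions = new LocalActivityOptions.Builder()
    .setScheduleToCloseTimeout(Duration.ofSeconds(5))
    .build();

ConfigActivities config = // hypothetical short-lived, idempotent activities
    Workflow.newLocalActivityStub(ConfigActivities.class, localOptions);

config.loadConfig();
```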
# Task lists

When a workflow invokes an activity, it sends the ScheduleActivityTask decision to the Cadence service. As a result, the service updates the workflow state and dispatches an activity task to a worker that implements the activity. Instead of calling the worker directly, an intermediate queue is used. So the service adds an activity task to this queue, and a worker receives the task using a long poll request. Cadence calls the queue used to dispatch activity tasks an activity task list.

Similarly, when a workflow needs to handle an external event, a decision task is created. A decision task list is used to deliver it to the workflow worker (also called a decider).

While Cadence task lists are queues, they have some differences from commonly used queuing technologies. The main one is that they do not require explicit registration and are created on demand. The number of task lists is not limited. A common use case is to have a task list per worker process and use it to deliver activity tasks to the process. Another use case is to have a task list per pool of workers.

There are multiple advantages of using a task list to deliver activity tasks instead of invoking an activity worker through a synchronous RPC (a worker setup sketch follows the list):

 * A worker doesn't need to have any open ports, which is more secure.
 * A worker doesn't need to advertise itself through DNS or any other network discovery mechanism.
 * When all workers are down, messages are persisted in a task list waiting for the workers to recover.
 * A worker polls for a message only when it has spare capacity, so it never gets overloaded.
 * Automatic load balancing across a large number of workers.
 * Task lists support server-side throttling. This allows you to limit the task dispatch rate to the pool of workers and still support adding tasks at a higher rate when spikes happen.
 * Task lists can be used to route a request to specific pools of workers or even a specific process.
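To make the polling model concrete, here is a minimal sketch of a worker process subscribing to a task list with the Java client. The task list name and implementation class names are illustrative, and client construction varies across versions:

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowClientOptions;
import com.uber.cadence.serviceclient.ClientOptions;
import com.uber.cadence.serviceclient.WorkflowServiceTChannel;
import com.uber.cadence.worker.Worker;
import com.uber.cadence.worker.WorkerFactory;

public class WorkerStarter {
  public static void main(String[] args) {
    WorkflowClient client = WorkflowClient.newInstance(
        new WorkflowServiceTChannel(ClientOptions.defaultInstance()),
        WorkflowClientOptions.newBuilder().setDomain("samples-domain").build());

    // A worker long-polls decision and activity tasks from its task list
    WorkerFactory factory = WorkerFactory.newInstance(client);
    Worker worker = factory.newWorker("SubscriptionTaskList"); // hypothetical task list

    worker.registerWorkflowImplementationTypes(SubscriptionWorkflowImpl.class);
    worker.registerActivitiesImplementations(new SubscriptionActivitiesImpl()); // hypothetical impl

    factory.start(); // polling begins; tasks are backlogged while no worker runs
  }
}
```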
# Deployment topology

# Overview

Cadence is a highly scalable, fault-oblivious stateful code platform. Fault-oblivious code is the next level of abstraction over commonly used techniques to achieve fault tolerance and durability.

A common Cadence-based application consists of a Cadence service, workflow and activity workers, and external clients. Note that both types of workers as well as external clients are roles and can be collocated in a single application process if necessary.

# Cadence Service

At the core of Cadence is a highly scalable multitenant service. The service exposes all of its functionality through a strongly typed gRPC API. A Cadence cluster includes multiple services, each of which may run on multiple nodes for scalability and reliability:

 * Front End: a stateless service used to handle incoming requests from workers. It is expected that an external load balancing mechanism is used to distribute load between Front End instances.
 * History Service: where the core logic of orchestrating workflow steps and activities is implemented.
 * Matching Service: matches workflow/activity tasks that need to be executed to workflow/activity workers that are able to execute them. Matching is assigned tasks for execution by the history service.
 * Internal Worker Service: implements Cadence workflows and activities for internal requirements such as archiving.
 * Workers: effectively the client apps for Cadence. This is where user-created workflow and activity logic is executed.

Internally the service depends on a persistent store. Currently, Apache Cassandra, MySQL, PostgreSQL, CockroachDB (PostgreSQL compatible), and TiDB (MySQL compatible) stores are supported out of the box. For listing workflows using complex predicates, an ElasticSearch or OpenSearch cluster can be used.

The Cadence service is responsible for keeping workflow state and associated durable timers. It maintains internal queues (called task lists) which are used to dispatch tasks to external workers.

The Cadence service is multitenant. Therefore it is expected that multiple pools of workers implementing different use cases connect to the same service instance. For example, at Uber a single service is used by more than a hundred applications. At the same time, some external customers deploy an instance of the Cadence service per application. For local development, a local Cadence service instance configured through docker-compose is used.
# Workflow Worker

Cadence reuses terminology from the workflow automation domain. So fault-oblivious stateful code is called a workflow.

The Cadence service does not execute workflow code directly. The workflow code is hosted by an external (from the service point of view) workflow worker process. These processes receive decision tasks from the Cadence service that contain events the workflow is expected to handle, deliver them to the workflow code, and communicate workflow decisions back to the service.

As workflow code is external to the service, it can be implemented in any language that can talk to the service Thrift API. Currently the Java and Go clients are production ready, while the Python and C# clients are under development. Let us know if you are interested in contributing a client in your preferred language.

The Cadence service API doesn't impose any specific workflow definition language. So a specific worker can be implemented to execute practically any existing workflow specification. The model the Cadence team chose to support out of the box is based on the idea of a durable function. Durable functions are as close as possible to application business logic with minimal plumbing required.

# Activity Worker

Workflow fault-oblivious code is immune to infrastructure failures. But it has to communicate with the imperfect external world where failures are common. All communication to the external world is done through activities. Activities are pieces of code that can perform any application-specific action like calling a service, updating a database record, or downloading a file from Amazon S3. Cadence activities are very feature-rich compared to queuing systems. Example features are task routing to specific processes, infinite retries, heartbeats, and unlimited execution time.

Activities are hosted by activity worker processes that receive activity tasks from the Cadence service, invoke corresponding activity implementations, and report back task completion statuses.

# External Clients

Workflow and activity workers host workflow and activity code. But to create a workflow instance (an execution in Cadence terminology), the StartWorkflowExecution Cadence service API call should be used. Usually, workflows are started by outside entities like UIs, microservices, or CLIs.

These entities can also:

 * notify workflows about asynchronous external events in the form of signals
 * synchronously query workflow state
 * synchronously wait for a workflow completion
 * cancel, terminate, restart, and reset workflows
 * search for specific workflows using the list API
# Archival

Archival is a feature that automatically moves workflow histories (history archival) and visibility records (visibility archival) from persistence to a secondary data store after the retention period, thus allowing users to keep workflow history and visibility records as long as necessary without overwhelming the Cadence primary data store. There are two reasons you may consider turning on archival for your domain:

 1. Compliance: For legal reasons, histories may need to be stored for a long period of time.
 2. Debugging: Old histories can still be accessed for debugging.

The current implementation of the archival feature has two limitations:

 1. RunID Required: In order to retrieve an archived workflow history, both the workflowID and the runID are required.
 2. Best Effort: It is possible that a history or visibility record is deleted from Cadence primary persistence without being archived first. These cases are rare but are possible with the current state of archival. Please check the FAQ section for how to get notified when this happens.

# Concepts

 * Archiver: the component that is responsible for archiving and retrieving workflow histories and visibility records. Its interface is generic and supports different kinds of archival locations: local file system, S3, Kafka, etc. Check this README if you would like to add a new archiver implementation for your data store.
 * URI: a URI is used to specify the archival location. Based on the scheme part of a URI, the corresponding archiver will be selected by the system to perform the archival operation.
# Configuring Archival

Archival is controlled by both domain-level config and cluster-level config. History and visibility archival have separate domain/cluster configs, but they share the same purpose.

# Cluster Level Archival Config

A Cadence cluster can be in one of three archival states:

 * Disabled: no archival will occur and the archivers will not be initialized on service startup.
 * Paused: this state is not yet implemented. Currently setting the cluster to paused is the same as setting it to disabled.
 * Enabled: archival will occur.

Enabling the cluster for archival simply means workflow histories will be archived. There is another config which controls whether archived histories or visibility records can be accessed. Both configs have defaults defined in the static yaml and can be overwritten via dynamic config. Note, however, that dynamic config will take effect only when archival is enabled in the static yaml.

# Domain Level Archival Config

A domain includes two pieces of archival-related config:

 * Status: either enabled or disabled. If a domain is in the disabled state, no archival will occur for that domain.
 * URI: the scheme and location where histories or visibility records will be archived to. When a domain enables archival for the first time, the URI is set and can never be changed. If a URI is not specified when first enabling a domain for archival, a default URI from the static config will be used.
# Running Locally

You can follow the steps below to run and test the archival feature locally:

 1. ./cadence-server start
 2. ./cadence --do samples-domain domain register --gd false --history_archival_status enabled --visibility_archival_status enabled --retention 0
 3. Run the helloworld cadence-sample by following the README
 4. Copy the workflowID of the completed workflow from the log output
 5. Retrieve the runID through the archived visibility record: ./cadence --do samples-domain wf listarchived -q 'WorkflowID = "<workflowID>"'
 6. Retrieve the archived history: ./cadence --do samples-domain wf show --wid <workflowID> --rid <runID>

In step 2, we registered a new domain and enabled both the history archival and visibility archival features for that domain. Since we didn't provide an archival URI when registering the new domain, the default URI specified in config/development.yaml is used. The default URI is file:///tmp/cadence_archival/development for history archival and file:///tmp/cadence_vis_archival/development for visibility archival. You can find the archived workflow history under the /tmp/cadence_archival/development directory and the archived visibility record under the /tmp/cadence_vis_archival/development directory.
# Running in Production

Cadence supports uploading workflow histories to Google Cloud and Amazon S3 for archival in production. Check the documentation in the GCloud archival component and the S3 archival component.

Below is an example of an Amazon S3 archival configuration:

```yaml
archival:
  history:
    status: "enabled"
    enableRead: true
    provider:
      s3store:
        region: "us-east-2"
  visibility:
    status: "enabled"
    enableRead: true
    provider:
      s3store:
        region: "us-east-2"
domainDefaults:
  archival:
    history:
      status: "enabled"
      URI: "s3://put-name-of-your-s3-bucket-here"
    visibility:
      status: "enabled"
      URI: "s3://put-name-of-your-s3-bucket-here" # most probably the same as the previous URI
```
# FAQ

# When does archival happen?

In theory, we would like both history and visibility archival to happen after the workflow closes and the retention period passes. However, due to some limitations in the implementation, only history archival happens after the retention period, while visibility archival happens immediately after the workflow closes. Please treat this as an implementation detail inside Cadence and do not rely on this fact. Archived data should only be checked after the retention period, and we may change the way we do visibility archival in the future.

# What's the query syntax for visibility archival?

The listArchived CLI command and API accept a SQL-like query for retrieving archived visibility records, similar to how the listWorkflow command works. Unfortunately, since different Archiver implementations have very different capabilities, there's currently no universal query syntax that works for all Archiver implementations. Please check the README (for example, S3 and GCP) of the Archiver used by your domain for the supported query syntax and limitations.
\n\n\n# Running Locally\n\nYou can follow the steps below to run and test the archival feature locally:\n\n 1. ./cadence-server start\n 2. ./cadence --do samples-domain domain register --gd false --history_archival_status enabled --visibility_archival_status enabled --retention 0\n 3. Run the helloworld cadence-sample by following the README\n 4. Copy the workflowID of the completed workflow from the log output\n 5. Retrieve the runID through the archived visibility record: ./cadence --do samples-domain wf listarchived -q \'WorkflowID = "<workflowID>"\'\n 6. Retrieve the archived history: ./cadence --do samples-domain wf show --wid <workflowID> --rid <runID>\n\nIn step 2, we registered a new domain and enabled both the history and visibility archival features for that domain. Since we didn\'t provide a URI when registering the new domain, the default URI specified in config/development.yaml is used. The default URI is file:///tmp/cadence_archival/development for history archival and file:///tmp/cadence_vis_archival/development for visibility archival. You can find the archived history under the /tmp/cadence_archival/development directory and the archived visibility record under the /tmp/cadence_vis_archival/development directory.
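\n\nArchived histories can also be fetched programmatically through the same GetWorkflowExecutionHistory API that serves live histories (this is what wf show uses), provided read access is enabled. Below is a minimal Go sketch, assuming a client.Client named c has already been constructed as in the Go client samples (construction elided):\n\nimport (\n    "context"\n    "fmt"\n\n    "go.uber.org/cadence/.gen/go/shared"\n    "go.uber.org/cadence/client"\n)\n\n// printHistory prints every event of a (possibly archived) history.\n// Both IDs are required for archived histories, per the limitation above.\nfunc printHistory(ctx context.Context, c client.Client, workflowID, runID string) error {\n    iter := c.GetWorkflowHistory(ctx, workflowID, runID, false /* isLongPoll */, shared.HistoryEventFilterTypeAllEvent)\n    for iter.HasNext() {\n        event, err := iter.Next()\n        if err != nil {\n            return err\n        }\n        fmt.Println(event.GetEventId(), event.GetEventType())\n    }\n    return nil\n}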
\n\n\n# Running in Production\n\nCadence supports uploading workflow histories to Google Cloud and Amazon S3 for archival in production. Check the documentation for the GCloud archival component and the S3 archival component.\n\nBelow is an example of an Amazon S3 archival configuration:\n\narchival:\n  history:\n    status: "enabled"\n    enableRead: true\n    provider:\n      s3store:\n        region: "us-east-2"\n  visibility:\n    status: "enabled"\n    enableRead: true\n    provider:\n      s3store:\n        region: "us-east-2"\ndomainDefaults:\n  archival:\n    history:\n      status: "enabled"\n      URI: "s3://put-name-of-your-s3-bucket-here"\n    visibility:\n      status: "enabled"\n      URI: "s3://put-name-of-your-s3-bucket-here" # most probably the same as the previous URI\n\n\n\n# FAQ\n\n\n# When does archival happen?\n\nIn theory, we would like both history and visibility archival to happen after the workflow closes and the retention period passes. However, due to some limitations in the implementation, only history archival happens after the retention period, while visibility archival happens immediately after the workflow closes. Please treat this as an implementation detail inside Cadence and do not rely on it. Archived data should only be checked after the retention period, and we may change the way we do visibility archival in the future.\n\n\n# What\'s the query syntax for visibility archival?\n\nThe listArchived CLI command and API accept a SQL-like query for retrieving archived visibility records, similar to how the listWorkflow command works. Unfortunately, since different Archiver implementations have very different capabilities, there\'s currently no universal query syntax that works for all Archiver implementations. Please check the README (for example, S3 and GCP) of the Archiver used by your domain for the supported query syntax and limitations.\n\n\n# How does archival interact with global domains?\n\nIf you have a global domain, when archival occurs it will first run on the active cluster; some time later it will run on the standby cluster when replication happens. For history archival, Cadence will check whether the upload operation has already been performed and skip the duplicate effort. For visibility archival, there\'s no such check and duplicate visibility records will be uploaded. Depending on the Archiver implementation, those duplicate uploads may consume more space in the underlying storage, and duplicate entries may be returned.\n\n\n# Can I specify multiple archival URIs?\n\nEach domain can only have one URI for history archival and one URI for visibility archival. Different domains, however, can have different URIs (with different schemes).\n\n\n# How does archival work with PII?\n\nNo Cadence workflow should ever operate on clear text PII. Cadence can be thought of as a database, and just as one would not store PII in a database, PII should not be stored in Cadence. This is even more important when archival is enabled, because these histories can be kept forever.\n\n\n# Planned Future Work\n\n * Support retrieving archived workflow histories without providing a runID.\n * Provide a guarantee that no history or visibility record is deleted from primary persistence before being archived.\n * Implement the Paused state. In this state, no archival will occur, but histories and visibility records will also not be deleted from persistence. Once archival is enabled again from the paused state, all skipped archivals will occur.'
,charsets:{}},{title:"Cross DC replication",frontmatter:{layout:"default",title:"Cross DC replication",permalink:"/docs/concepts/cross-dc-replication",readingShow:"top"},regularPath:"/docs/03-concepts/08-cross-dc-replication.html",relativePath:"docs/03-concepts/08-cross-dc-replication.md",key:"v-5d616cea",path:"/docs/concepts/cross-dc-replication/",headers:[{level:2,title:"Global Domains Architecture",slug:"global-domains-architecture",normalizedTitle:"global domains architecture",charIndex:300},{level:3,title:"Conflict Resolution",slug:"conflict-resolution",normalizedTitle:"conflict resolution",charIndex:2309},{level:2,title:"Global Domain Concepts, Configuration and Operation",slug:"global-domain-concepts-configuration-and-operation",normalizedTitle:"global domain concepts, configuration and operation",charIndex:3148},{level:3,title:"Concepts",slug:"concepts",normalizedTitle:"concepts",charIndex:3162},{level:3,title:"Operate by CLI",slug:"operate-by-cli",normalizedTitle:"operate by cli",charIndex:4221},{level:2,title:"Running Locally",slug:"running-locally",normalizedTitle:"running locally",charIndex:5743},{level:2,title:"Running in Production",slug:"running-in-production",normalizedTitle:"running in production",charIndex:5865}],codeSwitcherOptions:{},headersStr:"Global Domains Architecture Conflict Resolution Global Domain Concepts, Configuration and Operation Concepts Operate by CLI Running Locally Running in Production",content:'# Cross-DC replication\n\nThe Cadence Global Domain feature provides clients with the capability to continue their workflows from another cluster in the event of a datacenter failover. Although you can configure a Global Domain to be replicated to any number of clusters, it is only considered active in a single cluster.\n\n\n# Global Domains Architecture\n\nCadence has introduced a new top-level entity, Global Domains, which provides support for replication of workflow execution across clusters. A global domain can be configured with more than one cluster, but it can only be active in one cluster at any point in time. We call the domain passive (or standby) in the clusters where it is not active.\n\nThe number of standby clusters can be zero, if a global domain is configured with only one cluster. This is preferred/recommended.\n\nAny workflow of a global domain can only make progress in its active cluster, and the workflow progress is replicated to the standby clusters. For example, starting a workflow by calling StartWorkflow, or starting an activity (by the PollForActivityTask API), can only be processed in its active cluster. After the active cluster makes progress, the standby clusters (if any) poll the history from the active cluster to replicate the workflow states.\n\nHowever, standby clusters can also receive requests, e.g. for starting workflows or starting activities. They know which cluster the domain is active in, so the requests can be routed to the active cluster. This is called api-forwarding in Cadence. api-forwarding makes it possible to have no downtime during failover. There are two api-forwarding policies: selected-api-forwarding and all-domain-api-forwarding.\n\nWhen using selected-api-forwarding, applications need to run activity & workflow workers on every cluster. Cadence will only dispatch tasks on the current active cluster; workers on the standby clusters will sit idle until the Global Domain is failed over. This is recommended if XDC is being used with multiple clusters running in very remote datacenters (regions), where forwarding is expensive.\n\nWhen using all-domain-api-forwarding, applications only need to run activity & workflow workers on one cluster. This makes the application setup easier. This is recommended when clusters are all in local or nearby datacenters. See more details in the discussion.
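\n\nConcretely, with selected-api-forwarding the same worker binary is deployed in every cluster, each instance pointed at its local frontend; instances in the standby cluster poll but stay idle. A minimal Go sketch (the env var, address, domain, and task list names are illustrative assumptions):\n\npackage main\n\nimport (\n    "os"\n\n    "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n    "go.uber.org/cadence/worker"\n    "go.uber.org/yarpc"\n    "go.uber.org/yarpc/transport/tchannel"\n)\n\nfunc main() {\n    // Deploy this same binary in every cluster; only the frontend address differs.\n    hostPort := os.Getenv("CADENCE_FRONTEND") // e.g. "127.0.0.1:7933"\n    ch, err := tchannel.NewChannelTransport(tchannel.ServiceName("xdc-worker"))\n    if err != nil {\n        panic(err)\n    }\n    dispatcher := yarpc.NewDispatcher(yarpc.Config{\n        Name: "xdc-worker",\n        Outbounds: yarpc.Outbounds{\n            "cadence-frontend": {Unary: ch.NewSingleOutbound(hostPort)},\n        },\n    })\n    if err := dispatcher.Start(); err != nil {\n        panic(err)\n    }\n    service := workflowserviceclient.New(dispatcher.ClientConfig("cadence-frontend"))\n    w := worker.New(service, "my-domain-global", "myTaskList", worker.Options{})\n    // Register workflows and activities on w here before starting it.\n    if err := w.Start(); err != nil {\n        panic(err)\n    }\n    select {} // block; workers in the standby cluster sit idle until failover\n}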
\n\n\n# Conflict Resolution\n\nUnlike local domains, which provide at-most-once semantics for activity execution, Global Domains can only support at-least-once semantics. A Cadence global domain relies on asynchronous replication of events across clusters, so in the event of a failover it is possible that an activity gets dispatched again on the new active cluster due to replication lag. It also means that whenever workflow state is updated after a failover by the new cluster, any previous replication tasks for that execution can no longer be applied. This results in the loss of some progress made by the workflow in the previous active cluster. During such conflict resolution, Cadence re-injects any external events, like signals, into the new history before discarding the replication tasks. Even though some progress could roll back during failovers, Cadence provides the guarantee that workflows won’t get stuck and will continue to make forward progress.\n\n\n# Global Domain Concepts, Configuration and Operation\n\n\n# Concepts\n\n# IsGlobal\n\nThis config is used to distinguish domains local to the cluster from global domains. It controls the creation of replication tasks on updates, allowing the state to be replicated across clusters. This is a read-only setting that can only be set when the domain is provisioned.\n\n# Clusters\n\nA list of clusters where the domain can fail over to, including the current active cluster. This is also a read-only setting that can only be set when the domain is provisioned. A re-replication feature on the roadmap will allow updating this config to add/remove clusters in the future.\n\n# Active Cluster Name\n\nName of the current active cluster for the Global Domain. This config is updated each time the Global Domain is failed over to another cluster.\n\n# Failover Version\n\nA unique failover version which also represents the current active cluster for a Global Domain. Cadence allows failover to be triggered from any cluster, so the failover version is designed in a way that avoids conflicts if failover is mistakenly triggered simultaneously on two clusters.
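\n\nAs a rough illustration of how conflicts are avoided (using the sample config from Running in Production below, where failoverVersionIncrement is 10): a cluster with initialFailoverVersion 1 only ever mints versions 1, 11, 21, ..., while a cluster with initialFailoverVersion 0 mints 0, 10, 20, .... A failover bumps the version to the next value owned by the target cluster, so two clusters can never produce the same version, and the version modulo the increment identifies which cluster made the change.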
\n\n\n# Operate by CLI\n\nThe Cadence CLI can also be used to query the domain config or perform failovers. Here are some useful commands.\n\n# Describe Global Domain\n\nThe following command can be used to describe Global Domain metadata:\n\n$ cadence --do cadence-canary-xdc d desc\nName: cadence-canary-xdc\nDescription: cadence canary cross dc testing domain\nOwnerEmail: cadence-dev@cadenceworkflow.io\nDomainData:\nStatus: REGISTERED\nRetentionInDays: 7\nEmitMetrics: true\nActiveClusterName: dc1\nClusters: dc1, dc2\n\n\n# Failover Global Domain using the domain update command (being deprecated in favor of managed graceful failover)\n\nThe following command can be used to failover the Global Domain my-domain-global to the dc2 cluster:\n\n$ cadence --do my-domain-global d up --ac dc2\n\n\n# Failover Global Domain using Managed Graceful Failover\n\nFirst of all, update each domain to enable this feature:\n\n$ cadence --do test-global-domain-0 d update --domain_data IsManagedByCadence:true\n$ cadence --do test-global-domain-1 d update --domain_data IsManagedByCadence:true\n$ cadence --do test-global-domain-2 d update --domain_data IsManagedByCadence:true\n...\n\n\nThen you can start failing over those global domains using managed failover:\n\ncadence admin cluster failover start --source_cluster dc1 --target_cluster dc2\n\n\nThis will failover all the domains with IsManagedByCadence:true from dc1 to dc2.\n\nYou can provide more detailed options when using the command, and also watch the progress of the failover. Feel free to explore the cadence admin cluster failover tab.\n\n\n# Running Locally\n\nThe best way is to use the Cadence docker-compose: docker-compose -f docker-compose-multiclusters.yml up\n\n\n# Running in Production\n\nThe global domain feature needs to be enabled in the static config.\n\nHere we use clusterDCA and clusterDCB as an example. We pick clusterDCA as the primary (formerly called "master") cluster. The only difference of being a primary cluster is that it is responsible for domain registration. The primary can be changed later, but it needs to be the same across all clusters.\n\nThe clusterMetadata config of clusterDCA should be:\n\ndcRedirectionPolicy:\n  policy: "selected-apis-forwarding"\n\nclusterMetadata:\n  enableGlobalDomain: true\n  failoverVersionIncrement: 10\n  masterClusterName: "clusterDCA"\n  currentClusterName: "clusterDCA"\n  clusterInformation:\n    clusterDCA:\n      enabled: true\n      initialFailoverVersion: 1\n      rpcName: "cadence-frontend"\n      rpcAddress: "<host>:<port>"\n    clusterDCB:\n      enabled: true\n      initialFailoverVersion: 0\n      rpcName: "cadence-frontend"\n      rpcAddress: "<host>:<port>"\n\n\nAnd the clusterMetadata config of clusterDCB should be:\n\ndcRedirectionPolicy:\n  policy: "selected-apis-forwarding"\n\nclusterMetadata:\n  enableGlobalDomain: true\n  failoverVersionIncrement: 10\n  masterClusterName: "clusterDCA"\n  currentClusterName: "clusterDCB"\n  clusterInformation:\n    clusterDCA:\n      enabled: true\n      initialFailoverVersion: 1\n      rpcName: "cadence-frontend"\n      rpcAddress: "<host>:<port>"\n    clusterDCB:\n      enabled: true\n      initialFailoverVersion: 0\n      rpcName: "cadence-frontend"\n      rpcAddress: "<host>:<port>"\n\n\nAfter the configuration is deployed:\n\n 1. Register a global domain: cadence --do <domain_name> domain register --global_domain true --clusters clusterDCA clusterDCB --active_cluster clusterDCA\n\n 2. Run some workflows and failover the domain from one cluster to another: cadence --do <domain_name> domain update --active_cluster clusterDCB\n\nThen the domain should be failed over to clusterDCB. Now workflows are read-only in clusterDCA.
So your workers polling tasks from clusterDCA will become idle.\n\nNote 1: even though clusterDCA is standby/read-only for this domain, it can be active for another domain. Being active/standby is per domain, not per cluster. In other words, if you use XDC to handle a DC failure of clusterDCA, you need to failover all domains from clusterDCA to clusterDCB.\n\nNote 2: even though a domain is standby/read-only in a cluster, say clusterDCA, sending write requests (startWF, signalWF, etc.) can still work, because there is a forwarding component in the Frontend service. It will try to re-route the requests to an active cluster for the domain.'
,charsets:{}},{title:"Search workflows(Advanced visibility)",frontmatter:{layout:"default",title:"Search workflows(Advanced visibility)",permalink:"/docs/concepts/search-workflows",readingShow:"top"},regularPath:"/docs/03-concepts/09-search-workflows.html",relativePath:"docs/03-concepts/09-search-workflows.md",key:"v-3c665d38",path:"/docs/concepts/search-workflows/",headers:[{level:2,title:"Introduction",slug:"introduction",normalizedTitle:"introduction",charIndex:47},{level:2,title:"Memo vs Search Attributes",slug:"memo-vs-search-attributes",normalizedTitle:"memo vs search attributes",charIndex:843},{level:2,title:"Search Attributes (Go Client Usage)",slug:"search-attributes-go-client-usage",normalizedTitle:"search attributes (go client usage)",charIndex:2531},{level:3,title:"Allow Listing Search Attributes",slug:"allow-listing-search-attributes",normalizedTitle:"allow listing search attributes",charIndex:2885},{level:3,title:"Value Types",slug:"value-types",normalizedTitle:"value types",charIndex:5087},{level:3,title:"Limit",slug:"limit",normalizedTitle:"limit",charIndex:5298},{level:3,title:"Upsert Search Attributes in Workflow",slug:"upsert-search-attributes-in-workflow",normalizedTitle:"upsert search attributes in workflow",charIndex:5631},{level:3,title:"ContinueAsNew and Cron",slug:"continueasnew-and-cron",normalizedTitle:"continueasnew and cron",charIndex:6932},{level:2,title:"Query Capabilities",slug:"query-capabilities",normalizedTitle:"query capabilities",charIndex:7084},{level:3,title:"Supported Operators",slug:"supported-operators",normalizedTitle:"supported operators",charIndex:7264},{level:3,title:"Default Attributes",slug:"default-attributes",normalizedTitle:"default attributes",charIndex:7364},{level:3,title:"General Notes About Queries",slug:"general-notes-about-queries",normalizedTitle:"general notes about queries",charIndex:9280},{level:2,title:"Tools Support",slug:"tools-support",normalizedTitle:"tools support",charIndex:9802},{level:3,title:"CLI",slug:"cli",normalizedTitle:"cli",charIndex:470},{level:3,title:"Web UI Support",slug:"web-ui-support",normalizedTitle:"web ui support",charIndex:11655},{level:3,title:"TLS Support for connecting to Elasticsearch",slug:"tls-support-for-connecting-to-elasticsearch",normalizedTitle:"tls support for connecting to elasticsearch",charIndex:11818},{level:2,title:"Running Locally",slug:"running-locally",normalizedTitle:"running locally",charIndex:12432},{level:2,title:"Running in Production",slug:"running-in-production",normalizedTitle:"running in production",charIndex:13237}],codeSwitcherOptions:{},headersStr:"Introduction Memo vs Search Attributes Search Attributes (Go Client Usage) Allow Listing Search Attributes Value Types Limit Upsert Search Attributes in Workflow ContinueAsNew and Cron Query Capabilities Supported Operators Default Attributes General Notes About Queries Tools Support CLI Web UI Support TLS Support for connecting to Elasticsearch Running Locally Running in Production",content:'# Searching Workflows (Advanced visibility)\n\n\n# Introduction\n\nCadence supports creating workflows with customized key-value pairs, updating that information within the workflow code, and then listing/searching workflows with a SQL-like query. For example, you can create workflows with keys city and age, then search for all workflows with city = seattle and age > 22.\n\nAlso note that normal workflow properties like start time and workflow type can be queried as well.
For example, the following query could be specified when listing workflows from the CLI or using the list APIs (Go, Java):\n\nWorkflowType = "main.Workflow" AND CloseStatus != "completed" AND (StartTime > \n "2019-06-07T16:46:34-08:00" OR CloseTime > "2019-06-07T16:46:34-08:00") \n ORDER BY StartTime DESC \n\n\nElsewhere this is also called advanced visibility, while basic visibility refers to basic listing without the ability to search.\n\n\n# Memo vs Search Attributes\n\nCadence offers two methods for creating workflows with key-value pairs: memo and search attributes. A memo can only be provided on workflow start. Also, memo data are not indexed and are therefore not searchable. Memo data are visible when listing workflows using the list APIs. Search attributes data are indexed, so you can search workflows by these attributes. However, search attributes require the use of Elasticsearch.\n\nMemo and search attributes are available in the Go client in StartWorkflowOptions.\n\ntype StartWorkflowOptions struct {\n    // ...\n\n    // Memo - Optional non-indexed info that will be shown in list workflow.\n    Memo map[string]interface{}\n\n    // SearchAttributes - Optional indexed info that can be used in query of List/Scan/Count workflow APIs (only\n    // supported when Cadence server is using Elasticsearch). The key and value type must be registered on Cadence server side.\n    // Use GetSearchAttributes API to get valid keys and corresponding value types.\n    SearchAttributes map[string]interface{}\n}\n\n\nIn the Java client, the WorkflowOptions.Builder has similar methods for memo and search attributes.\n\nSome important distinctions between memo and search attributes:\n\n * Memo can support all data types because it is not indexed. Search attributes only support basic data types (including String (aka Text), Int, Float, Bool, Datetime) because they are indexed by Elasticsearch.\n * Memo does not restrict key names. Search attributes require that keys are allowlisted before use because Elasticsearch has a limit on indexed keys.\n * Memo doesn\'t require Cadence clusters to depend on Elasticsearch, while search attributes only work with Elasticsearch.
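\n\nTo make this concrete, below is a minimal Go sketch of starting a workflow with both a memo and search attributes (client construction is elided, as in the Go client samples; the task list, workflow name, and values are illustrative):\n\nimport (\n    "context"\n    "time"\n\n    "go.uber.org/cadence/client"\n)\n\nfunc startSearchable(ctx context.Context, c client.Client) error {\n    opts := client.StartWorkflowOptions{\n        TaskList:                     "helloWorldGroup",\n        ExecutionStartToCloseTimeout: time.Minute,\n        Memo: map[string]interface{}{\n            "Notes": "any serializable value; listed but not searchable",\n        },\n        SearchAttributes: map[string]interface{}{\n            "CustomKeywordField": "keyword1", // Keyword = string\n            "CustomIntField":     int64(5),   // Int = int64\n        },\n    }\n    // Workflows can be referenced by their registered name.\n    _, err := c.StartWorkflow(ctx, opts, "main.Workflow", "input")\n    return err\n}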
\n\n\n# Search Attributes (Go Client Usage)\n\nWhen using the Cadence Go client, provide key-value pairs as SearchAttributes in StartWorkflowOptions. SearchAttributes is map[string]interface{}, where the keys need to be allowlisted so that Cadence knows the attribute key name and value type. The value provided in the map must be of the same type as registered.\n\n\n# Allow Listing Search Attributes\n\nStart by querying the list of search attributes using the CLI:\n\n$ cadence --domain samples-domain cl get-search-attr\n+---------------------+------------+\n| KEY                 | VALUE TYPE |\n+---------------------+------------+\n| CloseStatus         | INT        |\n| CloseTime           | INT        |\n| CustomBoolField     | BOOL       |\n| CustomDatetimeField | DATETIME   |\n| CustomDomain        | KEYWORD    |\n| CustomDoubleField   | DOUBLE     |\n| CustomIntField      | INT        |\n| CustomKeywordField  | KEYWORD    |\n| CustomStringField   | STRING     |\n| DomainID            | KEYWORD    |\n| ExecutionTime       | INT        |\n| HistoryLength       | INT        |\n| RunID               | KEYWORD    |\n| StartTime           | INT        |\n| WorkflowID          | KEYWORD    |\n| WorkflowType        | KEYWORD    |\n+---------------------+------------+\n\n\nUse the admin CLI to add a new search attribute:\n\ncadence --domain samples-domain adm cl asa --search_attr_key NewKey --search_attr_type 1\n\n\nThe numbers for the attribute types map as follows:\n\n * 0 = String(Text)\n * 1 = Keyword\n * 2 = Int\n * 3 = Double\n * 4 = Bool\n * 5 = DateTime\n\n# Keyword vs String(Text)\n\nNote 1: String has been renamed to Text in Elasticsearch. Cadence is also planning to rename it.\n\nNote 2: Keyword and String(Text) are concepts taken from Elasticsearch. Each word in a String(Text) is considered a searchable keyword. For a UUID, that can be problematic as Elasticsearch will index each portion of the UUID separately. To have the whole string considered as one searchable keyword, use the Keyword type.\n\nFor example, for the key RunID with value "2dd29ab7-2dd8-4668-83e0-89cae261cfb1":\n\n * as a Keyword it will only be matched by RunID = "2dd29ab7-2dd8-4668-83e0-89cae261cfb1" (or in the future with regular expressions)\n * as a String(Text) it will be matched by RunID = "2dd8", which may cause unwanted matches\n\nNote: the String(Text) type cannot be used in an ORDER BY clause.\n\nThere are some pre-allowlisted search attributes that are handy for testing:\n\n * CustomKeywordField\n * CustomIntField\n * CustomDoubleField\n * CustomBoolField\n * CustomDatetimeField\n * CustomStringField\n\nTheir types are indicated in their names.\n\n\n# Value Types\n\nHere are the search attribute value types and their corresponding Golang types:\n\n * Keyword = string\n * Int = int64\n * Double = float64\n * Bool = bool\n * Datetime = time.Time\n * String = string\n\n\n# Limit\n\nWe recommend limiting the amount of indexed data per workflow by enforcing limits on the following:\n\n * Number of keys: 100 per workflow\n * Size of each value: 2kb\n * Total size of keys and values: 40kb per workflow\n\nCadence reserves keys like DomainID, WorkflowID, and RunID. These can only be used in list queries. Their values are not updatable.\n\n\n# Upsert Search Attributes in Workflow\n\nUpsertSearchAttributes is used to add or update search attributes from within the workflow code.\n\nGo samples for search attributes can be found at github.com/uber-common/cadence-samples.\n\nUpsertSearchAttributes will merge attributes into the workflow\'s existing map.
Consider this example code:\n\nfunc MyWorkflow(ctx workflow.Context, input string) error {\n    attr1 := map[string]interface{}{\n        "CustomIntField":  1,\n        "CustomBoolField": true,\n    }\n    workflow.UpsertSearchAttributes(ctx, attr1)\n\n    attr2 := map[string]interface{}{\n        "CustomIntField":     2,\n        "CustomKeywordField": "seattle",\n    }\n    workflow.UpsertSearchAttributes(ctx, attr2)\n    return nil\n}\n\n\nAfter the second call to UpsertSearchAttributes, the map will contain:\n\nmap[string]interface{}{\n    "CustomIntField":     2,\n    "CustomBoolField":    true,\n    "CustomKeywordField": "seattle",\n}\n\n\nThere is no support for removing a field. To achieve a similar effect, set the field to a sentinel value. For example, to remove “CustomKeywordField”, update it to “impossibleVal”. Then searching CustomKeywordField != ‘impossibleVal’ will match workflows with CustomKeywordField not equal to "impossibleVal", which includes workflows without the CustomKeywordField set.\n\nUse workflow.GetInfo to get the current search attributes.
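\n\nFor reading attributes back, here is a small hedged sketch (it assumes the default JSON data converter, which stores each value JSON-encoded in IndexedFields):\n\nimport (\n    "encoding/json"\n\n    "go.uber.org/cadence/workflow"\n)\n\n// readSearchAttr decodes one search attribute from the current workflow info.\nfunc readSearchAttr(ctx workflow.Context, key string, valuePtr interface{}) (bool, error) {\n    sa := workflow.GetInfo(ctx).SearchAttributes\n    if sa == nil {\n        return false, nil\n    }\n    raw, ok := sa.IndexedFields[key]\n    if !ok {\n        return false, nil\n    }\n    return true, json.Unmarshal(raw, valuePtr)\n}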
\n\n\n# ContinueAsNew and Cron\n\nWhen performing a ContinueAsNew or using Cron, search attributes (and memo) will be carried over to the new run by default.\n\n\n# Query Capabilities\n\nQuery workflows by using a SQL-like where clause when listing workflows from the CLI or using the list APIs (Go, Java).\n\nNote that you will only see workflows from one domain when querying.\n\n\n# Supported Operators\n\n * AND, OR, ()\n * =, !=, >, >=, <, <=\n * IN\n * BETWEEN ... AND\n * ORDER BY\n\n\n# Default Attributes\n\nMore and more default attributes are added in newer versions. Please get the current list by using the get-search-attr CLI command or the GetSearchAttributes API. Some names and types are as follows:\n\nKEY VALUE TYPE\nCloseStatus INT\nCloseTime INT\nCustomBoolField BOOL\nCustomDatetimeField DATETIME\nCustomDomain KEYWORD\nCustomDoubleField DOUBLE\nCustomIntField INT\nCustomKeywordField KEYWORD\nCustomStringField STRING\nDomainID KEYWORD\nExecutionTime INT\nHistoryLength INT\nRunID KEYWORD\nStartTime INT\nWorkflowID KEYWORD\nWorkflowType KEYWORD\nTasklist KEYWORD\n\nThere are some special considerations for these attributes:\n\n * CloseStatus, CloseTime, DomainID, ExecutionTime, HistoryLength, RunID, StartTime, WorkflowID, WorkflowType are reserved by Cadence and are read-only\n * Starting from v0.18.0, Cadence automatically maps (case insensitively) strings to CloseStatus, so you don\'t need to use integers in the query:\n   * 0 = "completed"\n   * 1 = "failed"\n   * 2 = "canceled"\n   * 3 = "terminated"\n   * 4 = "continued_as_new"\n   * 5 = "timed_out"\n * StartTime, CloseTime and ExecutionTime are stored as INT, but support queries using both EpochTime in nanoseconds and strings in RFC3339 format (e.g. "2006-01-02T15:04:05+07:00")\n * CloseTime, CloseStatus, HistoryLength are only present in closed workflows\n * ExecutionTime is for Retry/Cron users to query for workflows that will run in the future\n * To list only open workflows, add CloseTime = missing to the end of the query\n\nIf you use retry or the cron feature and want to query workflows that will start execution in a certain time range, you can add predicates on ExecutionTime. For example: ExecutionTime > 2019-01-01T10:00:00-07:00. Note that if predicates on ExecutionTime are included, only cron workflows or workflows that need to retry will be returned.\n\n\n# General Notes About Queries\n\n * The default PageSize is 1000, and it cannot be larger than 10k\n * A range query on a Cadence timestamp (StartTime, CloseTime, ExecutionTime) cannot be larger than 9223372036854775807 (maxInt64 - 1001)\n * Queries by time range have 1ms resolution\n * Query column names are case sensitive\n * ListWorkflow may take longer when retrieving a large number of workflows (10M+)\n * To retrieve a large number of workflows without caring about order, use the ScanWorkflow API\n * To efficiently count the number of workflows, use the CountWorkflow API
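\n\nThe same queries can be issued from the Go client. A minimal hedged sketch of paging through ListWorkflow results (client construction elided, as in the Go client samples; the pointer helpers are local):\n\nimport (\n    "context"\n    "fmt"\n\n    "go.uber.org/cadence/.gen/go/shared"\n    "go.uber.org/cadence/client"\n)\n\nfunc strPtr(s string) *string { return &s }\nfunc i32Ptr(i int32) *int32   { return &i }\n\n// listOpen pages through open workflows matching a query.\nfunc listOpen(ctx context.Context, c client.Client, domain string) error {\n    var token []byte\n    for {\n        resp, err := c.ListWorkflow(ctx, &shared.ListWorkflowExecutionsRequest{\n            Domain:        strPtr(domain),\n            PageSize:      i32Ptr(100),\n            NextPageToken: token,\n            Query:         strPtr(`WorkflowType = "main.Workflow" and CloseTime = missing`),\n        })\n        if err != nil {\n            return err\n        }\n        for _, info := range resp.Executions {\n            fmt.Println(info.GetExecution().GetWorkflowId())\n        }\n        token = resp.NextPageToken\n        if len(token) == 0 {\n            return nil\n        }\n    }\n}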
\n\n\n# Tools Support\n\n\n# CLI\n\nSupport for search attributes is available as of version 0.6.0 of the Cadence server. You can also use the CLI from the latest CLI Docker image (supported on 0.6.4 or later).\n\n# Start Workflow with Search Attributes\n\ncadence --do samples-domain workflow start --tl helloWorldGroup --wt main.Workflow --et 60 --dt 10 -i \'"vancexu"\' -search_attr_key \'CustomIntField | CustomKeywordField | CustomStringField | CustomBoolField | CustomDatetimeField\' -search_attr_value \'5 | keyword1 | vancexu test | true | 2019-06-07T16:16:36-08:00\'\n\n\n# Search Workflows with List API/Command\n\ncadence --do samples-domain wf list -q \'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\' -psa\n\n\ncadence --do samples-domain wf list -q \'CustomKeywordField in ("keyword2", "keyword1") and CustomIntField >= 5 and CloseTime between "2018-06-07T16:16:36-08:00" and "2019-06-07T16:46:34-08:00" order by CustomDatetimeField desc\' -psa\n\n\nTo list only open workflows, add CloseTime = missing to the end of the query.\n\nNote that a query can combine more than one type of filter:\n\ncadence --do samples-domain wf list -q \'WorkflowType = "main.Workflow" and (WorkflowID = "1645a588-4772-4dab-b276-5f9db108b3a8" or RunID = "be66519b-5f09-40cd-b2e8-20e4106244dc")\'\n\n\ncadence --do samples-domain wf list -q \'WorkflowType = "main.Workflow" and StartTime > "2019-06-07T16:46:34-08:00" and CloseTime = missing\'\n\n\nAll of the above commands can also be done with the ListWorkflowExecutions API.\n\n# Count Workflows with Count API/Command\n\ncadence --do samples-domain wf count -q \'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\'\n\n\ncadence --do samples-domain wf count -q \'CloseStatus="failed"\'\n\n\ncadence --do samples-domain wf count -q \'CloseStatus!="completed"\'\n\n\nAll of the above commands can also be done with the CountWorkflowExecutions API.\n\n\n# Web UI Support\n\nQueries are supported in Cadence Web as of release 3.4.0. Use the "Basic/Advanced" button to switch to "Advanced" mode and type the query in the search box.\n\n\n# TLS Support for connecting to Elasticsearch\n\nIf your Elasticsearch deployment requires TLS to connect to it, you can add the following to your config template. The TLS config is optional; when it is not provided, tls.enabled defaults to false.\n\nelasticsearch:\n  url:\n    scheme: "https"\n    host: "127.0.0.1:9200"\n  indices:\n    visibility: cadence-visibility-dev\n  tls:\n    enabled: true\n    caFile: /secrets/cadence/elasticsearch_cert.pem\n    enableHostVerification: true\n    serverName: myServerName\n    certFile: /secrets/cadence/certfile.crt\n    keyFile: /secrets/cadence/keyfile.key\n    sslmode: false\n\n\n\n# Running Locally\n\n 1. Increase Docker memory to higher than 6GB. Navigate to Docker -> Preferences -> Advanced -> Memory\n 2. Get the Cadence Docker compose file. Run curl -O https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose-es.yml\n 3. Start Cadence Docker (which contains Apache Kafka, Apache Zookeeper, and Elasticsearch) using docker-compose -f docker-compose-es.yml up\n 4. From the Docker output log, make sure Elasticsearch and Cadence started correctly. If you encounter an insufficient disk space error, try docker system prune -a --volumes\n 5. Register a local domain and start using it: cadence --do samples-domain d re\n 6. Add the key to Elasticsearch and also allowlist the search attribute: cadence --do samples-domain adm cl asa --search_attr_key NewKey --search_attr_type 1\n\n\n# Running in Production\n\nTo enable this feature in a Cadence cluster:\n\n * Register the index schema on Elasticsearch by running the two CURL commands following this script.\n   * Create an index template using the schema, choosing v6/v7 based on your Elasticsearch version\n   * Create an index following the index template, and remember its name\n * Register the topic on Kafka, and remember its name\n   * Set up the right number of partitions based on your expected throughput (it can be scaled up later)\n * Configure Cadence for Elasticsearch + Kafka following this documentation. Based on the full static config, you may add some other fields, like AuthN. Similarly for Kafka.\n\nTo add new search attributes:\n\n 1. Add the key to Elasticsearch: cadence --do <domain> adm cl asa --search_attr_key NewKey --search_attr_type 1\n 2. Update the dynamic configuration to allowlist the new attribute\n\nNote: starting a workflow with search attributes but without the advanced visibility feature will succeed as normal, but the workflow will not be searchable and will not be shown in list results.'
the key and value type must be registered on cadence server side.\n // use getsearchattributes api to get valid key and corresponding value type.\n searchattributes map[string]interface{}\n}\n\n\nin the java client, the workflowoptions.builder has similar methods for memo and search attributes.\n\nsome important distinctions between memo and search attributes:\n\n * memo can support all data types because it is not indexed. search attributes only support basic data types (including string(aka text), int, float, bool, datetime) because it is indexed by elasticsearch.\n * memo does not restrict on key names. search attributes require that keys are allowlisted before using them because elasticsearch has a limit on indexed keys.\n * memo doesn\'t require cadence clusters to depend on elasticsearch while search attributes only works with elasticsearch.\n\n\n# search attributes (go client usage)\n\nwhen using the cadence go client, provide key-value pairs as searchattributes in startworkflowoptions.\n\nsearchattributes is map[string]interface{} where the keys need to be allowlisted so that cadence knows the attribute key name and value type. the value provided in the map must be the same type as registered.\n\n\n# allow listing search attributes\n\nstart by the list of search attributes using the\n\n$ cadence --domain samples-domain cl get-search-attr\n+---------------------+------------+\n| key | value type |\n+---------------------+------------+\n| closestatus | int |\n| closetime | int |\n| customboolfield | double |\n| customdatetimefield | datetime |\n| customdomain | keyword |\n| customdoublefield | bool |\n| customintfield | int |\n| customkeywordfield | keyword |\n| customstringfield | string |\n| domainid | keyword |\n| executiontime | int |\n| historylength | int |\n| runid | keyword |\n| starttime | int |\n| workflowid | keyword |\n| workflowtype | keyword |\n+---------------------+------------+\n\n\nuse the admin to add a new search attribute:\n\ncadence --domain samples-domain adm cl asa --search_attr_key newkey --search_attr_type 1\n\n\nthe numbers for the attribute types map as follows:\n\n * 0 = string(text)\n * 1 = keyword\n * 2 = int\n * 3 = double\n * 4 = bool\n * 5 = datetime\n\n# keyword vs string(text)\n\nnote 1: string has been renamed to text in elasticsearch. cadence is also planning to rename it.\n\nnote 2: keyword and string(text) are concepts taken from elasticsearch. each word in a string(text) is considered a searchable keyword. for a uuid, that can be problematic as elasticsearch will index each portion of the uuid separately. 
to have the whole string considered as a searchable keyword, use the keyword type.\n\nfor example, key runid with value "2dd29ab7-2dd8-4668-83e0-89cae261cfb1"\n\n * as a keyword will only be matched by runid = "2dd29ab7-2dd8-4668-83e0-89cae261cfb1" (or in the future with regular expressions)\n * as a string(text) will be matched by runid = "2dd8", which may cause unwanted matches\n\nnote: string(text) type can not be used in order by .\n\nthere are some pre-allowlisted search attributes that are handy for testing:\n\n * customkeywordfield\n * customintfield\n * customdoublefield\n * customboolfield\n * customdatetimefield\n * customstringfield\n\ntheir types are indicated in their names.\n\n\n# value types\n\nhere are the search attribute value types and their correspondent golang types:\n\n * keyword = string\n * int = int64\n * double = float64\n * bool = bool\n * datetime = time.time\n * string = string\n\n\n# limit\n\nwe recommend limiting the number of elasticsearch indexes by enforcing limits on the following:\n\n * number of keys: 100 per\n * size of value: 2kb per value\n * total size of key and values: 40kb per\n\ncadence reserves keys like domainid, workflowid, and runid. these can only be used in list . the values are not updatable.\n\n\n# upsert search attributes in workflow\n\nupsertsearchattributes is used to add or update search attributes from within the code.\n\ngo samples for search attributes can be found at github.com/uber-common/cadence-samples.\n\nupsertsearchattributes will merge attributes to the existing map in the . consider this example code:\n\nfunc myworkflow(ctx workflow.context, input string) error {\n\n attr1 := map[string]interface{}{\n "customintfield": 1,\n "customboolfield": true,\n }\n workflow.upsertsearchattributes(ctx, attr1)\n\n attr2 := map[string]interface{}{\n "customintfield": 2,\n "customkeywordfield": "seattle",\n }\n workflow.upsertsearchattributes(ctx, attr2)\n}\n\n\nafter the second call to upsertsearchattributes, the map will contain:\n\nmap[string]interface{}{\n "customintfield": 2,\n "customboolfield": true,\n "customkeywordfield": "seattle",\n}\n\n\nthere is no support for removing a field. to achieve a similar effect, set the field to a sentinel value. for example, to remove “customkeywordfield”, update it to “impossibleval”. then searching customkeywordfield != ‘impossibleval’ will match with customkeywordfield not equal to "impossibleval", which includes without the customkeywordfield set.\n\nuse workflow.getinfo to get current search attributes.\n\n\n# continueasnew and cron\n\nwhen performing a continueasnew or using cron, search attributes (and memo) will be carried over to the new run by default.\n\n\n# query capabilities\n\nby using a sql-like where clause when listing workflows from the cli or using the list apis (go, java).\n\nnote that you will only see from one domain when .\n\n\n# supported operators\n\n * and, or, ()\n * =, !=, >, >=, <, <=\n * in\n * between ... and\n * order by\n\n\n# default attributes\n\nmore and more default attributes are added in newer versions. please get the by using the get-search-attr command or the getsearchattributes api. 
# continueasnew and cron\n\nwhen performing a continueasnew or using cron, search attributes (and memo) will be carried over to the new run by default.\n\n\n# query capabilities\n\nquery workflows by using a sql-like where clause when listing workflows from the CLI or when using the list APIs (go, java).\n\nnote that you will only see workflows from one domain when querying.\n\n\n# supported operators\n\n * and, or, ()\n * =, !=, >, >=, <, <=\n * in\n * between ... and\n * order by
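putting these operators together, an illustrative query (the attribute values here are placeholders) could look like:\n\nCustomKeywordField in ("seattle", "sf") and CustomIntField between 1 and 10 and StartTime > "2019-06-01T00:00:00-08:00" order by StartTime desc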
# default attributes\n\nmore and more default attributes are added in newer versions. please get the current list by using the get-search-attr CLI command or the GetSearchAttributes API. some names and types are as follows:\n\nKEY VALUE TYPE\nCloseStatus int\nCloseTime int\nCustomBoolField bool\nCustomDatetimeField datetime\nCustomDomain keyword\nCustomDoubleField double\nCustomIntField int\nCustomKeywordField keyword\nCustomStringField string\nDomainID keyword\nExecutionTime int\nHistoryLength int\nRunID keyword\nStartTime int\nWorkflowID keyword\nWorkflowType keyword\nTaskList keyword\n\nthere are some special considerations for these attributes:\n\n * CloseStatus, CloseTime, DomainID, ExecutionTime, HistoryLength, RunID, StartTime, WorkflowID, WorkflowType are reserved by cadence and are read-only\n * starting from v0.18.0, to make queries easier to write, cadence automatically maps strings (case-insensitively) to CloseStatus values, so you don\'t need to use integers in the query:\n * 0 = "completed"\n * 1 = "failed"\n * 2 = "canceled"\n * 3 = "terminated"\n * 4 = "continued_as_new"\n * 5 = "timed_out"\n * StartTime, CloseTime and ExecutionTime are stored as int, but queries accept both epoch time in nanoseconds and strings in rfc3339 format (e.g. "2006-01-02T15:04:05+07:00")\n * CloseTime, CloseStatus and HistoryLength are only present in closed workflows\n * ExecutionTime is for retry/cron users to query a workflow that will run in the future\n * to list only open workflows, add CloseTime = missing to the end of the query.\n\nif you use retry or the cron feature to query workflows that will start execution in a certain time range, you can add predicates on ExecutionTime. for example: ExecutionTime > 2019-01-01T10:00:00-07:00. note that if predicates on ExecutionTime are included, only cron workflows or workflows that need to retry will be returned.\n\n\n# general notes about queries\n\n * the default PageSize is 1000, and it cannot be larger than 10k\n * a range query on a cadence timestamp (StartTime, CloseTime, ExecutionTime) cannot be larger than 9223372036854775807 (maxint64 - 1001)\n * queries by time range will have 1ms resolution\n * column names are case sensitive\n * ListWorkflow may take longer when retrieving a large number of workflows (10m+)\n * to retrieve a large number of workflows without caring about order, use the ScanWorkflow API\n * to efficiently count the number of workflows, use the CountWorkflow API\n\n\n# tools support\n\n\n# cli\n\nsupport for search attributes is available as of version 0.6.0 of the cadence server. you can also use the CLI from the latest CLI docker image (supported on 0.6.4 or later).\n\n# start workflow with search attributes\n\ncadence --do samples-domain workflow start --tl helloWorldGroup --wt main.Workflow --et 60 --dt 10 -i \'"vancexu"\' -search_attr_key \'CustomIntField | CustomKeywordField | CustomStringField | CustomBoolField | CustomDatetimeField\' -search_attr_value \'5 | keyword1 | vancexu test | true | 2019-06-07T16:16:36-08:00\'\n\n\n# search workflows with list api/command\n\ncadence --do samples-domain wf list -q \'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\' -psa\n\n\ncadence --do samples-domain wf list -q \'CustomKeywordField in ("keyword2", "keyword1") and CustomIntField >= 5 and CloseTime between "2018-06-07T16:16:36-08:00" and "2019-06-07T16:46:34-08:00" order by CustomDatetimeField desc\' -psa\n\n\nto list only open workflows, add CloseTime = missing to the end of the query.\n\nnote that a query can support more than one type of filter:\n\ncadence --do samples-domain wf list -q \'WorkflowType = "main.Workflow" and (WorkflowID = "1645a588-4772-4dab-b276-5f9db108b3a8" or RunID = "be66519b-5f09-40cd-b2e8-20e4106244dc")\'\n\n\ncadence --do samples-domain wf list -q \'WorkflowType = "main.Workflow" and StartTime > "2019-06-07T16:46:34-08:00" and CloseTime = missing\'\n\n\nall of the above commands can also be done with the ListWorkflowExecutions API.\n\n# count workflows with count api/command\n\ncadence --do samples-domain wf count -q \'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\'\n\n\ncadence --do samples-domain wf count -q \'CloseStatus="failed"\'\n\n\ncadence --do samples-domain wf count -q \'CloseStatus!="completed"\'\n\n\nall of the above commands can also be done with the CountWorkflowExecutions API.
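the same queries can be issued from go through the client api. a minimal sketch, assuming a constructed client.Client named c and using the generated shared types (the domain and query text are placeholders):\n\nimport (\n  "context"\n  "fmt"\n\n  s "go.uber.org/cadence/.gen/go/shared"\n  "go.uber.org/cadence/client"\n)\n\n// the thrift-generated requests use pointer fields, hence these small helpers\nfunc strPtr(v string) *string { return &v }\nfunc int32Ptr(v int32) *int32 { return &v }\n\nfunc listAndCount(c client.Client) error {\n  ctx := context.Background()\n  query := `CustomKeywordField = "keyword1" and CustomIntField >= 5`\n\n  listResp, err := c.ListWorkflow(ctx, &s.ListWorkflowExecutionsRequest{\n    Domain:   strPtr("samples-domain"),\n    PageSize: int32Ptr(10),\n    Query:    strPtr(query),\n  })\n  if err != nil {\n    return err\n  }\n  for _, e := range listResp.GetExecutions() {\n    fmt.Println(e.GetExecution().GetWorkflowId())\n  }\n\n  countResp, err := c.CountWorkflow(ctx, &s.CountWorkflowExecutionsRequest{\n    Domain: strPtr("samples-domain"),\n    Query:  strPtr(query),\n  })\n  if err != nil {\n    return err\n  }\n  fmt.Println("count:", countResp.GetCount())\n  return nil\n}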
# web ui support\n\nsearch attributes are supported in cadence web as of release 3.4.0. use the "basic/advanced" button to switch to "advanced" mode and type the query in the search box.\n\n\n# tls support for connecting to elasticsearch\n\nif your elasticsearch deployment requires tls to connect to it, you can add the following to your config template. the tls config is optional; when it is not provided, tls.enabled defaults to false:\n\nelasticsearch:\n url:\n scheme: "https"\n host: "127.0.0.1:9200"\n indices:\n visibility: cadence-visibility-dev\n tls:\n enabled: true\n cafile: /secrets/cadence/elasticsearch_cert.pem\n enablehostverification: true\n servername: myservername\n certfile: /secrets/cadence/certfile.crt\n keyfile: /secrets/cadence/keyfile.key\n sslmode: false\n\n\n\n# running locally\n\n 1. increase docker memory to higher than 6gb. navigate to docker -> preferences -> advanced -> memory\n 2. get the cadence docker compose file. run curl -O https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose-es.yml\n 3. start cadence docker (which contains apache kafka, apache zookeeper, and elasticsearch) using docker-compose -f docker-compose-es.yml up\n 4. from the docker output log, make sure elasticsearch and cadence started correctly. if you encounter an insufficient disk space error, try docker system prune -a --volumes\n 5. register a local domain and start using it. cadence --do samples-domain d re\n 6. add the key to elasticsearch and also allowlist the search attribute. cadence --do domain adm cl asa --search_attr_key NewKey --search_attr_type 1\n\n\n# running in production\n\nto enable this feature in a cadence cluster:\n\n * register the index schema on elasticsearch. run the two curl commands following this script.\n * create an index template by using the schema; choose v6/v7 based on your elasticsearch version\n * create an index following the index template, and remember the name\n * register a topic on kafka, and remember the name\n * set up the right number of partitions based on your expected throughput (it can be scaled up later)\n * configure cadence for elasticsearch + kafka as in this documentation, based on the full static config; you may add some other fields like authn. similarly for kafka.\n\nto add new search attributes:\n\n 1. add the key to elasticsearch: cadence --do domain adm cl asa --search_attr_key NewKey --search_attr_type 1\n 2. update the dynamic configuration to allowlist the new attribute\n\nnote: starting a workflow with search attributes but without the advanced visibility feature will succeed as normal, but the workflow will not be searchable and will not be shown in list results.\n\n\n# Using HTTP API\n\n\n# Introduction\n\nFrom version 1.2.0 onwards, Cadence has introduced HTTP API support, which allows you to interact with the Cadence server using the HTTP protocol. To put this into perspective, HTTP/JSON communication is a flexible method for server interaction. In the context of Cadence, this implies that a range of RPC methods can be exposed and invoked using the HTTP protocol. This enhancement broadens the scope of interaction with the Cadence server, enabling the use of any programming language that supports HTTP. Consequently, you can leverage this functionality to initiate or terminate workflows from your bash scripts, monitor the status of your cluster, or execute any other operation that the Cadence RPC declaration supports.
# Setup\n\n\n# Updating Cadence configuration files\n\nTo enable the “start workflow” HTTP API, add an http section to the Cadence RPC configuration settings (e.g., in base.yaml or development.yaml):\n\nservices:\n frontend:\n rpc:\n <...>\n http:\n port: 8800\n procedures:\n - uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution \n\n\nThen you can run the Cadence server in the following ways to use the HTTP API.\n\n\n# Using local binaries\n\nBuild and run ./cadence-server as described in Developing Cadence.\n\n\n# Using “docker run” command\n\nRefer to the instructions described in Using docker image for production.\n\nAdditionally, add two more environment variables:\n\ndocker run\n<...>\n -e FRONTEND_HTTP_PORT=8800 -- HTTP port to listen on \n -e FRONTEND_HTTP_PROCEDURES=uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution -- List of API methods exposed\n ubercadence/server: \n\n\n\n# Using docker-compose\n\nAdd the HTTP environment variables to the docker/docker-compose.yml configuration:\n\ncadence:\n image: ubercadence/server:master-auto-setup\n ports:\n - "8000:8000"\n - "8001:8001"\n - "8002:8002"\n - "8003:8003"\n - "7933:7933"\n - "7934:7934"\n - "7935:7935"\n - "7939:7939"\n - "7833:7833"\n - "8800:8800"\n environment:\n - "CASSANDRA_SEEDS=cassandra"\n - "PROMETHEUS_ENDPOINT_0=0.0.0.0:8000"\n - "PROMETHEUS_ENDPOINT_1=0.0.0.0:8001"\n - "PROMETHEUS_ENDPOINT_2=0.0.0.0:8002"\n - "PROMETHEUS_ENDPOINT_3=0.0.0.0:8003"\n - "DYNAMIC_CONFIG_FILE_PATH=config/dynamicconfig/development.yaml"\n - "FRONTEND_HTTP_PORT=8800"\n - "FRONTEND_HTTP_PROCEDURES=uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution" \n\n\n\n# Using HTTP API\n\nStart a workflow using a curl command:\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: rpc-client-name\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution\' \\\n -d @data.json \n\n\nWhere the data.json content looks something like this:\n\n{\n "domain": "sample-domain",\n "workflowId": "workflowid123",\n "execution_start_to_close_timeout": "11s",\n "task_start_to_close_timeout": "10s",\n "workflowType": {\n "name": "workflow_type"\n },\n "taskList": {\n "name": "tasklist-name"\n },\n "identity": "My custom caller identity",\n "requestId": "4D1E4058-6FCF-4BA8-BF16-8FA8B02F9651"\n} 
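Because this is plain HTTP/JSON, the same request can be sent from any language with an HTTP client. A minimal Go sketch (the payload mirrors data.json above; the caller name is a placeholder):\n\npackage main\n\nimport (\n  "bytes"\n  "fmt"\n  "io"\n  "net/http"\n)\n\nfunc main() {\n  payload := []byte(`{\n    "domain": "sample-domain",\n    "workflowId": "workflowid123",\n    "execution_start_to_close_timeout": "11s",\n    "task_start_to_close_timeout": "10s",\n    "workflowType": {"name": "workflow_type"},\n    "taskList": {"name": "tasklist-name"},\n    "requestId": "4D1E4058-6FCF-4BA8-BF16-8FA8B02F9651"\n  }`)\n\n  req, err := http.NewRequest(http.MethodPost, "http://0.0.0.0:8800", bytes.NewReader(payload))\n  if err != nil {\n    panic(err)\n  }\n  // the same rpc-* headers as in the curl example above\n  req.Header.Set("context-ttl-ms", "2000")\n  req.Header.Set("rpc-caller", "go-http-client") // placeholder caller name\n  req.Header.Set("rpc-service", "cadence-frontend")\n  req.Header.Set("rpc-encoding", "json")\n  req.Header.Set("rpc-procedure", "uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution")\n\n  resp, err := http.DefaultClient.Do(req)\n  if err != nil {\n    panic(err)\n  }\n  defer resp.Body.Close()\n  body, _ := io.ReadAll(resp.Body)\n  fmt.Println(resp.Status, string(body))\n}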
# HTTP API Reference\n\n\n# Admin API\n\n----------------------------------------\n\nPOST uber.cadence.admin.v1.AdminAPI::AddSearchAttribute\n\n# Add search attributes to whitelist\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.AdminAPIAddSearchAttribute\n\n# Example payload\n\n{\n "search_attribute": {\n "custom_key": 1\n }\n}\n\n\nSearch attribute types\n\nTYPE VALUE\nString 1\nKeyword 2\nInt 3\nDouble 4\nDateTime 5\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.admin.v1.AdminAPI::AddSearchAttribute\' \\\n -d \\\n \'{\n "search_attribute": {\n "custom_key": 1\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{}\n\n\n----------------------------------------\n\nPOST uber.cadence.admin.v1.AdminAPI::CloseShard\n\n# Close a shard given a shard ID\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.AdminAPICloseShard\n\n# Example payload\n\n{\n "shard_id": 0\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.admin.v1.AdminAPI::CloseShard\' \\\n -d \\\n \'{ \n "shard_id": 0\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{}\n\n\n----------------------------------------\n\nPOST uber.cadence.admin.v1.AdminAPI::CountDLQMessages\n\n# Count DLQ messages\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.AdminAPICountDLQMessages\n\n# Example payload\n\nNone\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.admin.v1.AdminAPI::CountDLQMessages\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "history": []\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.admin.v1.AdminAPI::DescribeCluster\n\n# Describe cluster information\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.AdminAPIDescribeCluster\n\n# Example payload\n\nNone\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.admin.v1.AdminAPI::DescribeCluster\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "supportedClientVersions": {\n "goSdk": "1.7.0",\n "javaSdk": "1.5.0"\n },\n "membershipInfo": {\n "currentHost": {\n "identity": "127.0.0.1:7933"\n },\n "reachableMembers": [\n "127.0.0.1:7933",\n "127.0.0.1:7934",\n "127.0.0.1:7935",\n "127.0.0.1:7939"\n ],\n "rings": [\n {\n "role": "cadence-frontend",\n "memberCount": 1,\n "members": [\n {\n "identity": "127.0.0.1:7933"\n }\n ]\n },\n {\n "role": "cadence-history",\n "memberCount": 1,\n "members": [\n {\n "identity": "127.0.0.1:7934"\n }\n ]\n },\n {\n "role": "cadence-matching",\n "memberCount": 1,\n "members": [\n {\n "identity": "127.0.0.1:7935"\n }\n ]\n },\n {\n "role": "cadence-worker",\n "memberCount": 1,\n "members": [\n {\n "identity": "127.0.0.1:7939"\n }\n ]\n }\n ]\n },\n "persistenceInfo": {\n "historyStore": {\n "backend": "shardedNosql"\n },\n "visibilityStore": {\n "backend": "cassandra",\n "features": [\n {\n "key": "advancedVisibilityEnabled"\n }\n ]\n }\n }\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.admin.v1.AdminAPI::DescribeHistoryHost\n\n# Describe internal information of history host\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.AdminAPIDescribeHistoryHost\n\n# Example payload\n\n{\n "host_address": "127.0.0.1:7934"\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n 
-H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.admin.v1.AdminAPI::DescribeHistoryHost\' \\\n -d \\\n \'{\n "host_address": "127.0.0.1:7934"\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "numberOfShards": 4,\n "domainCache": {\n "numOfItemsInCacheByID": 5,\n "numOfItemsInCacheByName": 5\n },\n "shardControllerStatus": "started",\n "address": "127.0.0.1:7934"\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.admin.v1.AdminAPI::DescribeShardDistribution\n\n# List shard distribution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.AdminAPIDescribeShardDistribution\n\n# Example payload\n\n{\n "page_size": 100,\n "page_id": 0\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.admin.v1.AdminAPI::DescribeShardDistribution\' \\\n -d \\\n \'{\n "page_size": 100,\n "page_id": 0\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "numberOfShards": 4,\n "shards": {\n "0": "127.0.0.1:7934",\n "1": "127.0.0.1:7934",\n "2": "127.0.0.1:7934",\n "3": "127.0.0.1:7934"\n }\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.admin.v1.AdminAPI::DescribeWorkflowExecution\n\n# Describe internal information of workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.AdminAPIDescribeWorkflowExecution\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n }\n}\n\n\nrun_id is optional and allows describing a specific run.\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.admin.v1.AdminAPI::DescribeWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n }\n }\' | tr -d \'\\\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "shardId": 3,\n "historyAddr": "127.0.0.1:7934",\n "mutableStateInDatabase": {\n "ActivityInfos": {},\n "TimerInfos": {},\n "ChildExecutionInfos": {},\n "RequestCancelInfos": {},\n "SignalInfos": {},\n "SignalRequestedIDs": {},\n "ExecutionInfo": {\n "DomainID": "d7aff879-f524-43a8-b340-5a223a69d75b",\n "WorkflowID": "sample-workflow-id",\n "RunID": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f",\n "FirstExecutionRunID": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f",\n "ParentDomainID": "",\n "ParentWorkflowID": "",\n "ParentRunID": "",\n "InitiatedID": -7,\n "CompletionEventBatchID": 3,\n "CompletionEvent": null,\n "TaskList": "sample-task-list",\n "WorkflowTypeName": "sample-workflow-type",\n "WorkflowTimeout": 11,\n "DecisionStartToCloseTimeout": 10,\n "ExecutionContext": null,\n "State": 2,\n "CloseStatus": 6,\n "LastFirstEventID": 3,\n "LastEventTaskID": 8388614,\n "NextEventID": 4,\n "LastProcessedEvent": -23,\n "StartTimestamp": "2023-09-08T05:13:04.24Z",\n "LastUpdatedTimestamp": 
"2023-09-08T05:13:15.247Z",\n "CreateRequestID": "8049b932-6c2f-415a-9bb2-241dcf4cfc9c",\n "SignalCount": 0,\n "DecisionVersion": 0,\n "DecisionScheduleID": 2,\n "DecisionStartedID": -23,\n "DecisionRequestID": "emptyUuid",\n "DecisionTimeout": 10,\n "DecisionAttempt": 0,\n "DecisionStartedTimestamp": 0,\n "DecisionScheduledTimestamp": 1694149984240504000,\n "DecisionOriginalScheduledTimestamp": 1694149984240503000,\n "CancelRequested": false,\n "CancelRequestID": "",\n "StickyTaskList": "",\n "StickyScheduleToStartTimeout": 0,\n "ClientLibraryVersion": "",\n "ClientFeatureVersion": "",\n "ClientImpl": "",\n "AutoResetPoints": {},\n "Memo": null,\n "SearchAttributes": null,\n "PartitionConfig": null,\n "Attempt": 0,\n "HasRetryPolicy": false,\n "InitialInterval": 0,\n "BackoffCoefficient": 0,\n "MaximumInterval": 0,\n "ExpirationTime": "0001-01-01T00:00:00Z",\n "MaximumAttempts": 0,\n "NonRetriableErrors": null,\n "BranchToken": null,\n "CronSchedule": "",\n "IsCron": false,\n "ExpirationSeconds": 0\n },\n "ExecutionStats": null,\n "BufferedEvents": [],\n "VersionHistories": {\n "CurrentVersionHistoryIndex": 0,\n "Histories": [\n {\n "BranchToken": "WQsACgAAACRjYzA5ZDVkZC1iMmZhLTQ2ZDgtYjQyNi01NGM5NmIxMmQxOGYLABQAAAAkYWM5YmIwMmUtMjllYy00YWEyLTlkZGUtZWQ0YWU1NWRhMjlhDwAeDAAAAAAA",\n "Items": [\n {\n "EventID": 3,\n "Version": 0\n }\n ]\n }\n ]\n },\n "ReplicationState": null,\n "Checksum": {\n "Version": 0,\n "Flavor": 0,\n "Value": null\n }\n }\n}\n\n\n----------------------------------------\n\n\n# Domain API\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.DomainAPI::DescribeDomain\n\n# Describe existing workflow domain\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.DomainAPIDescribeDomain\n\n# Example payload\n\n{\n "name": "sample-domain",\n "uuid": "d7aff879-f524-43a8-b340-5a223a69d75b"\n}\n\n\nuuid of the domain is optional.\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.DomainAPI::DescribeDomain\' \\\n -d \\\n \'{\n "name": "sample-domain"\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "domain": {\n "id": "d7aff879-f524-43a8-b340-5a223a69d75b",\n "name": "sample-domain",\n "status": "DOMAIN_STATUS_REGISTERED",\n "data": {},\n "workflowExecutionRetentionPeriod": "259200s",\n "badBinaries": {\n "binaries": {}\n },\n "historyArchivalStatus": "ARCHIVAL_STATUS_ENABLED",\n "historyArchivalUri": "file:///tmp/cadence_archival/development",\n "visibilityArchivalStatus": "ARCHIVAL_STATUS_ENABLED",\n "visibilityArchivalUri": "file:///tmp/cadence_vis_archival/development",\n "activeClusterName": "cluster0",\n "clusters": [\n {\n "clusterName": "cluster0"\n }\n ],\n "isGlobalDomain": true,\n "isolationGroups": {}\n }\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.DomainAPI::ListDomains\n\n# List all domains in the cluster\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.DomainAPIListDomains\n\n# Example payload\n\n{\n "page_size": 100\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H 
\'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.DomainAPI::ListDomains\' \\\n -d \\\n \'{\n "page_size": 100\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "domains": [\n {\n "id": "3116607e-419b-4783-85fc-47726a4c3fe9",\n "name": "cadence-batcher",\n "status": "DOMAIN_STATUS_REGISTERED",\n "description": "Cadence internal system domain",\n "data": {},\n "workflowExecutionRetentionPeriod": "604800s",\n "badBinaries": {\n "binaries": {}\n },\n "historyArchivalStatus": "ARCHIVAL_STATUS_DISABLED",\n "visibilityArchivalStatus": "ARCHIVAL_STATUS_DISABLED",\n "activeClusterName": "cluster0",\n "clusters": [\n {\n "clusterName": "cluster0"\n }\n ],\n "failoverVersion": "-24",\n "isolationGroups": {}\n },\n {\n "id": "59c51119-1b41-4a28-986d-d6e377716f82",\n "name": "cadence-shadower",\n "status": "DOMAIN_STATUS_REGISTERED",\n "description": "Cadence internal system domain",\n "data": {},\n "workflowExecutionRetentionPeriod": "604800s",\n "badBinaries": {\n "binaries": {}\n },\n "historyArchivalStatus": "ARCHIVAL_STATUS_DISABLED",\n "visibilityArchivalStatus": "ARCHIVAL_STATUS_DISABLED",\n "activeClusterName": "cluster0",\n "clusters": [\n {\n "clusterName": "cluster0"\n }\n ],\n "failoverVersion": "-24",\n "isolationGroups": {}\n },\n {\n "id": "32049b68-7872-4094-8e63-d0dd59896a83",\n "name": "cadence-system",\n "status": "DOMAIN_STATUS_REGISTERED",\n "description": "cadence system workflow domain",\n "ownerEmail": "cadence-dev-group@uber.com",\n "data": {},\n "workflowExecutionRetentionPeriod": "259200s",\n "badBinaries": {\n "binaries": {}\n },\n "historyArchivalStatus": "ARCHIVAL_STATUS_DISABLED",\n "visibilityArchivalStatus": "ARCHIVAL_STATUS_DISABLED",\n "activeClusterName": "cluster0",\n "clusters": [\n {\n "clusterName": "cluster0"\n }\n ],\n "failoverVersion": "-24",\n "isolationGroups": {}\n },\n {\n "id": "d7aff879-f524-43a8-b340-5a223a69d75b",\n "name": "sample-domain",\n "status": "DOMAIN_STATUS_REGISTERED",\n "data": {},\n "workflowExecutionRetentionPeriod": "259200s",\n "badBinaries": {\n "binaries": {}\n },\n "historyArchivalStatus": "ARCHIVAL_STATUS_ENABLED",\n "historyArchivalUri": "file:///tmp/cadence_archival/development",\n "visibilityArchivalStatus": "ARCHIVAL_STATUS_ENABLED",\n "visibilityArchivalUri": "file:///tmp/cadence_vis_archival/development",\n "activeClusterName": "cluster0",\n "clusters": [\n {\n "clusterName": "cluster0"\n }\n ],\n "isGlobalDomain": true,\n "isolationGroups": {}\n }\n ],\n "nextPageToken": ""\n}\n\n\n----------------------------------------\n\n\n# Meta API\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.MetaAPI::Health\n\n# Health check\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.MetaAPIHealth\n\n# Example payload\n\nNone\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.MetaAPI::Health\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "ok": true,\n "message": "OK"\n}\n\n\n----------------------------------------\n\n\n# Visibility API\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.VisibilityAPI::GetSearchAttributes\n\n# Get search attributes\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service 
cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.VisibilityAPIGetSearchAttributes\n\n# Example payload\n\nNone\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.VisibilityAPI::GetSearchAttributes\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "keys": {\n "BinaryChecksums": "INDEXED_VALUE_TYPE_KEYWORD",\n "CadenceChangeVersion": "INDEXED_VALUE_TYPE_KEYWORD",\n "CloseStatus": "INDEXED_VALUE_TYPE_INT",\n "CloseTime": "INDEXED_VALUE_TYPE_INT",\n "CustomBoolField": "INDEXED_VALUE_TYPE_BOOL",\n "CustomDatetimeField": "INDEXED_VALUE_TYPE_DATETIME",\n "CustomDomain": "INDEXED_VALUE_TYPE_KEYWORD",\n "CustomDoubleField": "INDEXED_VALUE_TYPE_DOUBLE",\n "CustomIntField": "INDEXED_VALUE_TYPE_INT",\n "CustomKeywordField": "INDEXED_VALUE_TYPE_KEYWORD",\n "CustomStringField": "INDEXED_VALUE_TYPE_STRING",\n "DomainID": "INDEXED_VALUE_TYPE_KEYWORD",\n "ExecutionTime": "INDEXED_VALUE_TYPE_INT",\n "HistoryLength": "INDEXED_VALUE_TYPE_INT",\n "IsCron": "INDEXED_VALUE_TYPE_KEYWORD",\n "NewKey": "INDEXED_VALUE_TYPE_KEYWORD",\n "NumClusters": "INDEXED_VALUE_TYPE_INT",\n "Operator": "INDEXED_VALUE_TYPE_KEYWORD",\n "Passed": "INDEXED_VALUE_TYPE_BOOL",\n "RolloutID": "INDEXED_VALUE_TYPE_KEYWORD",\n "RunID": "INDEXED_VALUE_TYPE_KEYWORD",\n "ShardID": "INDEXED_VALUE_TYPE_INT",\n "StartTime": "INDEXED_VALUE_TYPE_INT",\n "TaskList": "INDEXED_VALUE_TYPE_KEYWORD",\n "TestNewKey": "INDEXED_VALUE_TYPE_STRING",\n "UpdateTime": "INDEXED_VALUE_TYPE_INT",\n "WorkflowID": "INDEXED_VALUE_TYPE_KEYWORD",\n "WorkflowType": "INDEXED_VALUE_TYPE_KEYWORD",\n "addon": "INDEXED_VALUE_TYPE_KEYWORD",\n "addon-type": "INDEXED_VALUE_TYPE_KEYWORD",\n "environment": "INDEXED_VALUE_TYPE_KEYWORD",\n "project": "INDEXED_VALUE_TYPE_KEYWORD",\n "service": "INDEXED_VALUE_TYPE_KEYWORD",\n "user": "INDEXED_VALUE_TYPE_KEYWORD"\n }\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.VisibilityAPI::ListClosedWorkflowExecutions\n\n# List closed workflow executions in a domain\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.VisibilityAPIListClosedWorkflowExecutions\n\n# Example payloads\n\nstartTimeFilter is required while executionFilter and typeFilter are optional.\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01T00:00:00Z",\n "latest_time": "2023-12-31T00:00:00Z"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01T00:00:00Z",\n "latest_time": "2023-12-31T00:00:00Z"\n },\n "execution_filter": {\n "workflow_id": "sample-workflow-id",\n "run_id": "71c3d47b-454a-4315-97c7-15355140094b"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01T00:00:00Z",\n "latest_time": "2023-12-31T00:00:00Z"\n },\n "type_filter": {\n "name": "sample-workflow-type"\n }\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.VisibilityAPI::ListClosedWorkflowExecutions\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01T00:00:00Z",\n "latest_time": 
"2023-12-31T00:00:00Z"\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "executions": [\n {\n "workflowExecution": {\n "workflowId": "sample-workflow-id",\n "runId": "71c3d47b-454a-4315-97c7-15355140094b"\n },\n "type": {\n "name": "sample-workflow-type"\n },\n "startTime": "2023-09-08T06:31:18.778Z",\n "closeTime": "2023-09-08T06:32:18.782Z",\n "closeStatus": "WORKFLOW_EXECUTION_CLOSE_STATUS_TIMED_OUT",\n "historyLength": "5",\n "executionTime": "2023-09-08T06:31:18.778Z",\n "memo": {},\n "searchAttributes": {\n "indexedFields": {}\n },\n "taskList": "sample-task-list"\n }\n ],\n "nextPageToken": ""\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.VisibilityAPI::ListOpenWorkflowExecutions\n\n# List open workflow executions in a domain\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.VisibilityAPIListOpenWorkflowExecutions\n\n# Example payloads\n\nstartTimeFilter is required while executionFilter and typeFilter are optional.\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01T00:00:00Z",\n "latest_time": "2023-12-31T00:00:00Z"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01T00:00:00Z",\n "latest_time": "2023-12-31T00:00:00Z"\n },\n "execution_filter": {\n "workflow_id": "sample-workflow-id",\n "run_id": "71c3d47b-454a-4315-97c7-15355140094b"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01T00:00:00Z",\n "latest_time": "2023-12-31T00:00:00Z"\n },\n "type_filter": {\n "name": "sample-workflow-type"\n }\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.VisibilityAPI::ListOpenWorkflowExecutions\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01T00:00:00Z",\n "latest_time": "2023-12-31T00:00:00Z"\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "executions": [\n {\n "workflowExecution": {\n "workflowId": "sample-workflow-id",\n "runId": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"\n },\n "type": {\n "name": "sample-workflow-type"\n },\n "startTime": "2023-09-12T02:17:46.596Z",\n "executionTime": "2023-09-12T02:17:46.596Z",\n "memo": {},\n "searchAttributes": {\n "indexedFields": {}\n },\n "taskList": "sample-task-list"\n }\n ],\n "nextPageToken": ""\n}\n\n\n----------------------------------------\n\n\n# Workflow API\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::DescribeTaskList\n\n# Describe pollers info of tasklist\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIDescribeTaskList\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "task_list": {\n "name": "sample-task-list",\n "kind": 1\n },\n "task_list_type": 1,\n "include_task_list_status": true\n}\n\n\ntask_list kind is optional.\n\nTask list kinds\n\nTYPE VALUE\nTaskListKindNormal 1\nTaskListKindSticky 2\n\nTask list types\n\nTYPE VALUE\nTaskListTypeDecision 1\nTaskListTypeActivity 2\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: 
cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::DescribeTaskList\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "task_list": {\n "name": "sample-task-list",\n "kind": 1\n },\n "task_list_type": 1,\n "include_task_list_status": true\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "taskListStatus": {\n "readLevel": "200000",\n "ratePerSecond": 100000,\n "taskIdBlock": {\n "startId": "200001",\n "endId": "300000"\n }\n }\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::DescribeWorkflowExecution\n\n# Describe a workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIDescribeWorkflowExecution\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"\n }\n}\n\n\nrun_id is optional and allows describing a specific run.\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::DescribeWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "executionConfiguration": {\n "taskList": {\n "name": "sample-task-list"\n },\n "executionStartToCloseTimeout": "11s",\n "taskStartToCloseTimeout": "10s"\n },\n "workflowExecutionInfo": {\n "workflowExecution": {\n "workflowId": "sample-workflow-id",\n "runId": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"\n },\n "type": {\n "name": "sample-workflow-type"\n },\n "startTime": "2023-09-12T02:17:46.596Z",\n "closeTime": "2023-09-12T02:17:57.602707Z",\n "closeStatus": "WORKFLOW_EXECUTION_CLOSE_STATUS_TIMED_OUT",\n "historyLength": "3",\n "executionTime": "2023-09-12T02:17:46.596Z",\n "memo": {},\n "searchAttributes": {},\n "autoResetPoints": {}\n },\n "pendingDecision": {\n "state": "PENDING_DECISION_STATE_SCHEDULED",\n "scheduledTime": "2023-09-12T02:17:46.596982Z",\n "originalScheduledTime": "2023-09-12T02:17:46.596982Z"\n }\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::GetClusterInfo\n\n# Get supported client versions for the cluster\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIGetClusterInfo\n\n# Example payload\n\nNone\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::GetClusterInfo\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "supportedClientVersions": {\n "goSdk": "1.7.0",\n "javaSdk": "1.5.0"\n }\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::GetTaskListsByDomain\n\n# Get the task lists in a domain\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIGetTaskListsByDomain\n\n# 
Example payload\n\n{\n "domain": "sample-domain"\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::GetTaskListsByDomain\' \\\n -d \\\n \'{\n "domain": "sample-domain"\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "decisionTaskListMap": {},\n "activityTaskListMap": {}\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::GetWorkflowExecutionHistory\n\n# Get the history of workflow executions\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIGetWorkflowExecutionHistory\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::GetWorkflowExecutionHistory\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "history": {\n "events": [\n {\n "eventId": "1",\n "eventTime": "2023-09-12T05:34:46.107550Z",\n "taskId": "9437321",\n "workflowExecutionStartedEventAttributes": {\n "workflowType": {\n "name": "sample-workflow-type"\n },\n "taskList": {\n "name": "sample-task-list"\n },\n "input": {\n "data": "IkN1cmwhIg=="\n },\n "executionStartToCloseTimeout": "61s",\n "taskStartToCloseTimeout": "60s",\n "originalExecutionRunId": "fd7c2283-79dd-458c-8306-e2d1d8217613",\n "identity": "client-name-visible-in-history",\n "firstExecutionRunId": "fd7c2283-79dd-458c-8306-e2d1d8217613",\n "firstDecisionTaskBackoff": "0s"\n }\n },\n {\n "eventId": "2",\n "eventTime": "2023-09-12T05:34:46.107565Z",\n "taskId": "9437322",\n "decisionTaskScheduledEventAttributes": {\n "taskList": {\n "name": "sample-task-list"\n },\n "startToCloseTimeout": "60s"\n }\n },\n {\n "eventId": "3",\n "eventTime": "2023-09-12T05:34:59.184511Z",\n "taskId": "9437330",\n "workflowExecutionCancelRequestedEventAttributes": {\n "cause": "dummy",\n "identity": "client-name-visible-in-history"\n }\n },\n {\n "eventId": "4",\n "eventTime": "2023-09-12T05:35:47.112156Z",\n "taskId": "9437332",\n "workflowExecutionTimedOutEventAttributes": {\n "timeoutType": "TIMEOUT_TYPE_START_TO_CLOSE"\n }\n }\n ]\n }\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::ListTaskListPartitions\n\n# List all the task list partitions and the hostname for partitions\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIListTaskListPartitions\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "task_list": {\n "name": "sample-task-list"\n }\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::ListTaskListPartitions\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "task_list": {\n "name": 
"sample-task-list"\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "activityTaskListPartitions": [\n {\n "key": "sample-task-list",\n "ownerHostName": "127.0.0.1:7935"\n }\n ],\n "decisionTaskListPartitions": [\n {\n "key": "sample-task-list",\n "ownerHostName": "127.0.0.1:7935"\n }\n ]\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::RefreshWorkflowTasks\n\n# Refresh all the tasks of a workflow\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIRefreshWorkflowTasks\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"\n }\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::RefreshWorkflowTasks\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::RequestCancelWorkflowExecution\n\n# Cancel a workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIRequestCancelWorkflowExecution\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"\n },\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "cause": "dummy",\n "identity": "client-name-visible-in-history",\n "first_execution_run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::RequestCancelWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "fd7c2283-79dd-458c-8306-e2d1d8217613"\n },\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "cause": "dummy",\n "identity": "client-name-visible-in-history",\n "first_execution_run_id": "fd7c2283-79dd-458c-8306-e2d1d8217613"\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::RestartWorkflowExecution\n\n# Restart a previous workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIRestartWorkflowExecution\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n },\n "identity": "client-name-visible-in-history",\n "reason": "dummy"\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H 
\'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::RestartWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n },\n "identity": "client-name-visible-in-history",\n "reason": "dummy"\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "runId": "82914458-3221-42b4-ae54-2e66dff864f7"\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::SignalWithStartWorkflowExecution\n\n# Signal the currently open workflow if it exists, or attempt to start a new run based on the IDReusePolicy and signal it\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPISignalWithStartWorkflowExecution\n\n# Example payload\n\n{\n "start_request": {\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "input": {\n "data": "IkN1cmwhIg=="\n }\n },\n "signal_name": "channelA",\n "signal_input": {\n "data": "MTA="\n }\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::SignalWithStartWorkflowExecution\' \\\n -d \\\n \'{\n "start_request": {\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "input": {\n "data": "IkN1cmwhIg=="\n }\n },\n "signal_name": "channelA",\n "signal_input": {\n "data": "MTA="\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "runId": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::SignalWorkflowExecution\n\n# Signal a workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPISignalWorkflowExecution\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n },\n "signal_name": "channelA",\n "signal_input": {\n "data": "MTA="\n }\n}\n\n\nrun_id is optional and allows signaling a specific run.\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::SignalWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n },\n "signal_name": "channelA",\n "signal_input": {\n "data": "MTA="\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 
200\n\n{}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution\n\n# Start a new workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIStartWorkflowExecution\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "input": {\n "data": "IkN1cmwhIg=="\n }\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "input": {\n "data": "IkN1cmwhIg=="\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "runId": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::TerminateWorkflowExecution\n\n# Terminate a workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPITerminateWorkflowExecution\n\n# Example payloads\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n },\n "reason": "dummy",\n "identity": "client-name-visible-in-history",\n "first_execution_run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::TerminateWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{}\n\n\n----------------------------------------',normalizedContent:'# using http api\n\n\n# introduction\n\nfrom version 1.2.0 onwards, cadence has introduced http api support, which allows you to interact with the cadence server using the http protocol. to put this into perspective, http/json communication is a flexible method for server interaction. in the context of cadence, this implies that a range of rpc methods can be exposed and invoked using the http protocol. this enhancement broadens the scope of interaction with the cadence server, enabling the use of any programming language that supports http. 
consequently, you can leverage this functionality to initiate or terminate workflows from your bash scripts, monitor the status of your cluster, or execute any other operation that the cadence rpc declaration supports.\n\n\n# setup\n\n\n# updating cadence configuration files\n\nto enable “start workflow” http api, add http section to cadence rpc configuration settings (e.g., in base.yaml or development.yaml):\n\nservices:\n frontend:\n rpc:\n <...>\n http:\n port: 8800\n procedures:\n - uber.cadence.api.v1.workflowapi::startworkflowexecution \n\n\nthen you can run cadence server in the following ways to use http api.\n\n\n# using local binaries\n\nbuild and run ./cadence-server as described in developing cadence.\n\n\n# using “docker run” command\n\nrefer to instructions described in using docker image for production.\n\nadditionally add two more environment variables:\n\ndocker run\n<...>\n -e frontend_http_port=8800 -- http port to listen \n -e frontend_http_procedures=uber.cadence.api.v1.workflowapi::startworkflowexecution -- list of api methods exposed\n ubercadence/server: \n\n\n\n# using docker-compose\n\nadd http environment variables to docker/docker-compose.yml configuration:\n\ncadence:\n image: ubercadence/server:master-auto-setup\n ports:\n - "8000:8000"\n - "8001:8001"\n - "8002:8002"\n - "8003:8003"\n - "7933:7933"\n - "7934:7934"\n - "7935:7935"\n - "7939:7939"\n - "7833:7833"\n - "8800:8800"\n environment:\n - "cassandra_seeds=cassandra"\n - "prometheus_endpoint_0=0.0.0.0:8000"\n - "prometheus_endpoint_1=0.0.0.0:8001"\n - "prometheus_endpoint_2=0.0.0.0:8002"\n - "prometheus_endpoint_3=0.0.0.0:8003"\n - "dynamic_config_file_path=config/dynamicconfig/development.yaml"\n - "frontend_http_port=8800"\n - "frontend_http_procedures=uber.cadence.api.v1.workflowapi::startworkflowexecution" \n\n\n\n# using http api\n\nstart a workflow using curl command\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: rpc-client-name\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::startworkflowexecution\' \\\n -d @data.json \n\n\nwhere data.json content looks something like this:\n\n{\n "domain": "sample-domain",\n "workflowid": "workflowid123",\n "execution_start_to_close_timeout": "11s",\n "task_start_to_close_timeout": "10s",\n "workflowtype": {\n "name": "workflow_type"\n },\n "tasklist": {\n "name": "tasklist-name"\n },\n "identity": "my custom caller identity",\n "requestid": "4d1e4058-6fcf-4ba8-bf16-8fa8b02f9651"\n} \n\n\n\n# http api reference\n\n\n# admin api\n\n----------------------------------------\n\npost uber.cadence.admin.v1.adminapi::addsearchattribute\n\n# add search attributes to whitelist\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.adminapiaddsearchattribute\n\n# example payload\n\n{\n "search_attribute": {\n "custom_key": 1\n }\n}\n\n\nsearch attribute types\n\ntype value\nstring 1\nkeyword 2\nint 3\ndouble 4\ndatetime 5\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.admin.v1.adminapi::addsearchattribute\' \\\n -d \\\n \'{\n "search_attribute": {\n "custom_key": 1\n }\n }\'\n\n\n# example successful response\n\nhttp code: 
200\n\n{}\n\n\n----------------------------------------\n\npost uber.cadence.admin.v1.adminapi::closeshard\n\n# close a shard given a shard id\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.adminapicloseshard\n\n# example payload\n\n{\n "shard_id": 0\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.admin.v1.adminapi::closeshard\' \\\n -d \\\n \'{ \n "shard_id": 0\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{}\n\n\n----------------------------------------\n\npost uber.cadence.admin.v1.adminapi::countdlqmessages\n\n# count dlq messages\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.adminapicountdlqmessages\n\n# example payload\n\nnone\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.admin.v1.adminapi::countdlqmessages\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "history": []\n}\n\n\n----------------------------------------\n\npost uber.cadence.admin.v1.adminapi::describecluster\n\n# describe cluster information\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.adminapidescribecluster\n\n# example payload\n\nnone\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.admin.v1.adminapi::describecluster\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "supportedclientversions": {\n "gosdk": "1.7.0",\n "javasdk": "1.5.0"\n },\n "membershipinfo": {\n "currenthost": {\n "identity": "127.0.0.1:7933"\n },\n "reachablemembers": [\n "127.0.0.1:7933",\n "127.0.0.1:7934",\n "127.0.0.1:7935",\n "127.0.0.1:7939"\n ],\n "rings": [\n {\n "role": "cadence-frontend",\n "membercount": 1,\n "members": [\n {\n "identity": "127.0.0.1:7933"\n }\n ]\n },\n {\n "role": "cadence-history",\n "membercount": 1,\n "members": [\n {\n "identity": "127.0.0.1:7934"\n }\n ]\n },\n {\n "role": "cadence-matching",\n "membercount": 1,\n "members": [\n {\n "identity": "127.0.0.1:7935"\n }\n ]\n },\n {\n "role": "cadence-worker",\n "membercount": 1,\n "members": [\n {\n "identity": "127.0.0.1:7939"\n }\n ]\n }\n ]\n },\n "persistenceinfo": {\n "historystore": {\n "backend": "shardednosql"\n },\n "visibilitystore": {\n "backend": "cassandra",\n "features": [\n {\n "key": "advancedvisibilityenabled"\n }\n ]\n }\n }\n}\n\n\n----------------------------------------\n\npost uber.cadence.admin.v1.adminapi::describehistoryhost\n\n# describe internal information of history host\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.adminapidescribehistoryhost\n\n# example payload\n\n{\n "host_address": "127.0.0.1:7934"\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n 
-h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.admin.v1.adminapi::describehistoryhost\' \\\n -d \\\n \'{\n "host_address": "127.0.0.1:7934"\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "numberofshards": 4,\n "domaincache": {\n "numofitemsincachebyid": 5,\n "numofitemsincachebyname": 5\n },\n "shardcontrollerstatus": "started",\n "address": "127.0.0.1:7934"\n}\n\n\n----------------------------------------\n\npost uber.cadence.admin.v1.adminapi::describesharddistribution\n\n# list shard distribution\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.adminapidescribesharddistribution\n\n# example payload\n\n{\n "page_size": 100,\n "page_id": 0\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.admin.v1.adminapi::describesharddistribution\' \\\n -d \\\n \'{\n "page_size": 100,\n "page_id": 0\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "numberofshards": 4,\n "shards": {\n "0": "127.0.0.1:7934",\n "1": "127.0.0.1:7934",\n "2": "127.0.0.1:7934",\n "3": "127.0.0.1:7934"\n }\n}\n\n\n----------------------------------------\n\npost uber.cadence.admin.v1.adminapi::describeworkflowexecution\n\n# describe internal information of workflow execution\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.admin.v1.adminapidescribeworkflowexecution\n\n# example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n }\n}\n\n\nrun_id is optional and allows to describe a specific run.\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.admin.v1.adminapi::describeworkflowexecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n }\n }\' | tr -d \'\\\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "shardid": 3,\n "historyaddr": "127.0.0.1:7934",\n "mutablestateindatabase": {\n "activityinfos": {},\n "timerinfos": {},\n "childexecutioninfos": {},\n "requestcancelinfos": {},\n "signalinfos": {},\n "signalrequestedids": {},\n "executioninfo": {\n "domainid": "d7aff879-f524-43a8-b340-5a223a69d75b",\n "workflowid": "sample-workflow-id",\n "runid": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f",\n "firstexecutionrunid": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f",\n "parentdomainid": "",\n "parentworkflowid": "",\n "parentrunid": "",\n "initiatedid": -7,\n "completioneventbatchid": 3,\n "completionevent": null,\n "tasklist": "sample-task-list",\n "workflowtypename": "sample-workflow-type",\n "workflowtimeout": 11,\n "decisionstarttoclosetimeout": 10,\n "executioncontext": null,\n "state": 2,\n "closestatus": 6,\n "lastfirsteventid": 3,\n "lasteventtaskid": 8388614,\n "nexteventid": 4,\n "lastprocessedevent": -23,\n "starttimestamp": "2023-09-08t05:13:04.24z",\n "lastupdatedtimestamp": 
"2023-09-08t05:13:15.247z",\n "createrequestid": "8049b932-6c2f-415a-9bb2-241dcf4cfc9c",\n "signalcount": 0,\n "decisionversion": 0,\n "decisionscheduleid": 2,\n "decisionstartedid": -23,\n "decisionrequestid": "emptyuuid",\n "decisiontimeout": 10,\n "decisionattempt": 0,\n "decisionstartedtimestamp": 0,\n "decisionscheduledtimestamp": 1694149984240504000,\n "decisionoriginalscheduledtimestamp": 1694149984240503000,\n "cancelrequested": false,\n "cancelrequestid": "",\n "stickytasklist": "",\n "stickyscheduletostarttimeout": 0,\n "clientlibraryversion": "",\n "clientfeatureversion": "",\n "clientimpl": "",\n "autoresetpoints": {},\n "memo": null,\n "searchattributes": null,\n "partitionconfig": null,\n "attempt": 0,\n "hasretrypolicy": false,\n "initialinterval": 0,\n "backoffcoefficient": 0,\n "maximuminterval": 0,\n "expirationtime": "0001-01-01t00:00:00z",\n "maximumattempts": 0,\n "nonretriableerrors": null,\n "branchtoken": null,\n "cronschedule": "",\n "iscron": false,\n "expirationseconds": 0\n },\n "executionstats": null,\n "bufferedevents": [],\n "versionhistories": {\n "currentversionhistoryindex": 0,\n "histories": [\n {\n "branchtoken": "wqsacgaaacrjyza5zdvkzc1immzhltq2zdgtyjqyni01ngm5nmixmmqxogylabqaaaakywm5ymiwmmutmjllyy00yweyltlkzgutzwq0ywu1nwrhmjlhdwaedaaaaaaa",\n "items": [\n {\n "eventid": 3,\n "version": 0\n }\n ]\n }\n ]\n },\n "replicationstate": null,\n "checksum": {\n "version": 0,\n "flavor": 0,\n "value": null\n }\n }\n}\n\n\n----------------------------------------\n\n\n# domain api\n\n----------------------------------------\n\npost uber.cadence.api.v1.domainapi::describedomain\n\n# describe existing workflow domain\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.domainapidescribedomain\n\n# example payload\n\n{\n "name": "sample-domain",\n "uuid": "d7aff879-f524-43a8-b340-5a223a69d75b"\n}\n\n\nuuid of the domain is optional.\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.domainapi::describedomain\' \\\n -d \\\n \'{\n "name": "sample-domain"\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "domain": {\n "id": "d7aff879-f524-43a8-b340-5a223a69d75b",\n "name": "sample-domain",\n "status": "domain_status_registered",\n "data": {},\n "workflowexecutionretentionperiod": "259200s",\n "badbinaries": {\n "binaries": {}\n },\n "historyarchivalstatus": "archival_status_enabled",\n "historyarchivaluri": "file:///tmp/cadence_archival/development",\n "visibilityarchivalstatus": "archival_status_enabled",\n "visibilityarchivaluri": "file:///tmp/cadence_vis_archival/development",\n "activeclustername": "cluster0",\n "clusters": [\n {\n "clustername": "cluster0"\n }\n ],\n "isglobaldomain": true,\n "isolationgroups": {}\n }\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.domainapi::listdomains\n\n# list all domains in the cluster\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.domainapilistdomains\n\n# example payload\n\n{\n "page_size": 100\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h 
\'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.domainapi::listdomains\' \\\n -d \\\n \'{\n "page_size": 100\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "domains": [\n {\n "id": "3116607e-419b-4783-85fc-47726a4c3fe9",\n "name": "cadence-batcher",\n "status": "domain_status_registered",\n "description": "cadence internal system domain",\n "data": {},\n "workflowexecutionretentionperiod": "604800s",\n "badbinaries": {\n "binaries": {}\n },\n "historyarchivalstatus": "archival_status_disabled",\n "visibilityarchivalstatus": "archival_status_disabled",\n "activeclustername": "cluster0",\n "clusters": [\n {\n "clustername": "cluster0"\n }\n ],\n "failoverversion": "-24",\n "isolationgroups": {}\n },\n {\n "id": "59c51119-1b41-4a28-986d-d6e377716f82",\n "name": "cadence-shadower",\n "status": "domain_status_registered",\n "description": "cadence internal system domain",\n "data": {},\n "workflowexecutionretentionperiod": "604800s",\n "badbinaries": {\n "binaries": {}\n },\n "historyarchivalstatus": "archival_status_disabled",\n "visibilityarchivalstatus": "archival_status_disabled",\n "activeclustername": "cluster0",\n "clusters": [\n {\n "clustername": "cluster0"\n }\n ],\n "failoverversion": "-24",\n "isolationgroups": {}\n },\n {\n "id": "32049b68-7872-4094-8e63-d0dd59896a83",\n "name": "cadence-system",\n "status": "domain_status_registered",\n "description": "cadence system workflow domain",\n "owneremail": "cadence-dev-group@uber.com",\n "data": {},\n "workflowexecutionretentionperiod": "259200s",\n "badbinaries": {\n "binaries": {}\n },\n "historyarchivalstatus": "archival_status_disabled",\n "visibilityarchivalstatus": "archival_status_disabled",\n "activeclustername": "cluster0",\n "clusters": [\n {\n "clustername": "cluster0"\n }\n ],\n "failoverversion": "-24",\n "isolationgroups": {}\n },\n {\n "id": "d7aff879-f524-43a8-b340-5a223a69d75b",\n "name": "sample-domain",\n "status": "domain_status_registered",\n "data": {},\n "workflowexecutionretentionperiod": "259200s",\n "badbinaries": {\n "binaries": {}\n },\n "historyarchivalstatus": "archival_status_enabled",\n "historyarchivaluri": "file:///tmp/cadence_archival/development",\n "visibilityarchivalstatus": "archival_status_enabled",\n "visibilityarchivaluri": "file:///tmp/cadence_vis_archival/development",\n "activeclustername": "cluster0",\n "clusters": [\n {\n "clustername": "cluster0"\n }\n ],\n "isglobaldomain": true,\n "isolationgroups": {}\n }\n ],\n "nextpagetoken": ""\n}\n\n\n----------------------------------------\n\n\n# meta api\n\n----------------------------------------\n\npost uber.cadence.api.v1.metaapi::health\n\n# health check\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.metaapihealth\n\n# example payload\n\nnone\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.metaapi::health\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "ok": true,\n "message": "ok"\n}\n\n\n----------------------------------------\n\n\n# visibility api\n\n----------------------------------------\n\npost uber.cadence.api.v1.visibilityapi::getsearchattributes\n\n# get search attributes\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service 
cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.visibilityapigetsearchattributes\n\n# example payload\n\nnone\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.visibilityapi::getsearchattributes\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "keys": {\n "binarychecksums": "indexed_value_type_keyword",\n "cadencechangeversion": "indexed_value_type_keyword",\n "closestatus": "indexed_value_type_int",\n "closetime": "indexed_value_type_int",\n "customboolfield": "indexed_value_type_bool",\n "customdatetimefield": "indexed_value_type_datetime",\n "customdomain": "indexed_value_type_keyword",\n "customdoublefield": "indexed_value_type_double",\n "customintfield": "indexed_value_type_int",\n "customkeywordfield": "indexed_value_type_keyword",\n "customstringfield": "indexed_value_type_string",\n "domainid": "indexed_value_type_keyword",\n "executiontime": "indexed_value_type_int",\n "historylength": "indexed_value_type_int",\n "iscron": "indexed_value_type_keyword",\n "newkey": "indexed_value_type_keyword",\n "numclusters": "indexed_value_type_int",\n "operator": "indexed_value_type_keyword",\n "passed": "indexed_value_type_bool",\n "rolloutid": "indexed_value_type_keyword",\n "runid": "indexed_value_type_keyword",\n "shardid": "indexed_value_type_int",\n "starttime": "indexed_value_type_int",\n "tasklist": "indexed_value_type_keyword",\n "testnewkey": "indexed_value_type_string",\n "updatetime": "indexed_value_type_int",\n "workflowid": "indexed_value_type_keyword",\n "workflowtype": "indexed_value_type_keyword",\n "addon": "indexed_value_type_keyword",\n "addon-type": "indexed_value_type_keyword",\n "environment": "indexed_value_type_keyword",\n "project": "indexed_value_type_keyword",\n "service": "indexed_value_type_keyword",\n "user": "indexed_value_type_keyword"\n }\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.visibilityapi::listclosedworkflowexecutions\n\n# list closed workflow executions in a domain\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.visibilityapilistclosedworkflowexecutions\n\n# example payloads\n\nstarttimefilter is required while executionfilter and typefilter are optional.\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01t00:00:00z",\n "latest_time": "2023-12-31t00:00:00z"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01t00:00:00z",\n "latest_time": "2023-12-31t00:00:00z"\n },\n "execution_filter": {\n "workflow_id": "sample-workflow-id",\n "run_id": "71c3d47b-454a-4315-97c7-15355140094b"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01t00:00:00z",\n "latest_time": "2023-12-31t00:00:00z"\n },\n "type_filter": {\n "name": "sample-workflow-type"\n }\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.visibilityapi::listclosedworkflowexecutions\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01t00:00:00z",\n "latest_time": 
"2023-12-31t00:00:00z"\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "executions": [\n {\n "workflowexecution": {\n "workflowid": "sample-workflow-id",\n "runid": "71c3d47b-454a-4315-97c7-15355140094b"\n },\n "type": {\n "name": "sample-workflow-type"\n },\n "starttime": "2023-09-08t06:31:18.778z",\n "closetime": "2023-09-08t06:32:18.782z",\n "closestatus": "workflow_execution_close_status_timed_out",\n "historylength": "5",\n "executiontime": "2023-09-08t06:31:18.778z",\n "memo": {},\n "searchattributes": {\n "indexedfields": {}\n },\n "tasklist": "sample-task-list"\n }\n ],\n "nextpagetoken": ""\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.visibilityapi::listopenworkflowexecutions\n\n# list open workflow executions in a domain\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.visibilityapilistopenworkflowexecutions\n\n# example payloads\n\nstarttimefilter is required while executionfilter and typefilter are optional.\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01t00:00:00z",\n "latest_time": "2023-12-31t00:00:00z"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01t00:00:00z",\n "latest_time": "2023-12-31t00:00:00z"\n },\n "execution_filter": {\n "workflow_id": "sample-workflow-id",\n "run_id": "71c3d47b-454a-4315-97c7-15355140094b"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01t00:00:00z",\n "latest_time": "2023-12-31t00:00:00z"\n },\n "type_filter": {\n "name": "sample-workflow-type"\n }\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.visibilityapi::listopenworkflowexecutions\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "start_time_filter": {\n "earliest_time": "2023-01-01t00:00:00z",\n "latest_time": "2023-12-31t00:00:00z"\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "executions": [\n {\n "workflowexecution": {\n "workflowid": "sample-workflow-id",\n "runid": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"\n },\n "type": {\n "name": "sample-workflow-type"\n },\n "starttime": "2023-09-12t02:17:46.596z",\n "executiontime": "2023-09-12t02:17:46.596z",\n "memo": {},\n "searchattributes": {\n "indexedfields": {}\n },\n "tasklist": "sample-task-list"\n }\n ],\n "nextpagetoken": ""\n}\n\n\n----------------------------------------\n\n\n# workflow api\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::describetasklist\n\n# describe pollers info of tasklist\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapidescribetasklist\n\n# example payload\n\n{\n "domain": "sample-domain",\n "task_list": {\n "name": "sample-task-list",\n "kind": 1\n },\n "task_list_type": 1,\n "include_task_list_status": true\n}\n\n\ntask_list kind is optional.\n\ntask list kinds\n\ntype value\ntasklistkindnormal 1\ntasklistkindsticky 2\n\ntask list types\n\ntype value\ntasklisttypedecision 1\ntasklisttypeactivity 2\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: 
cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::describetasklist\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "task_list": {\n "name": "sample-task-list",\n "kind": 1\n },\n "task_list_type": 1,\n "include_task_list_status": true\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "taskliststatus": {\n "readlevel": "200000",\n "ratepersecond": 100000,\n "taskidblock": {\n "startid": "200001",\n "endid": "300000"\n }\n }\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::describeworkflowexecution\n\n# describe a workflow execution\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapidescribeworkflowexecution\n\n# example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"\n }\n}\n\n\nrun_id is optional and allows to describe a specific run.\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::describeworkflowexecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "executionconfiguration": {\n "tasklist": {\n "name": "sample-task-list"\n },\n "executionstarttoclosetimeout": "11s",\n "taskstarttoclosetimeout": "10s"\n },\n "workflowexecutioninfo": {\n "workflowexecution": {\n "workflowid": "sample-workflow-id",\n "runid": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"\n },\n "type": {\n "name": "sample-workflow-type"\n },\n "starttime": "2023-09-12t02:17:46.596z",\n "closetime": "2023-09-12t02:17:57.602707z",\n "closestatus": "workflow_execution_close_status_timed_out",\n "historylength": "3",\n "executiontime": "2023-09-12t02:17:46.596z",\n "memo": {},\n "searchattributes": {},\n "autoresetpoints": {}\n },\n "pendingdecision": {\n "state": "pending_decision_state_scheduled",\n "scheduledtime": "2023-09-12t02:17:46.596982z",\n "originalscheduledtime": "2023-09-12t02:17:46.596982z"\n }\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::getclusterinfo\n\n# get supported client versions for the cluster\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapigetclusterinfo\n\n# example payload\n\nnone\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::getclusterinfo\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "supportedclientversions": {\n "gosdk": "1.7.0",\n "javasdk": "1.5.0"\n }\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::gettasklistsbydomain\n\n# get the task lists in a domain\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapigettasklistsbydomain\n\n# 
example payload\n\n{\n "domain": "sample-domain"\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::gettasklistsbydomain\' \\\n -d \\\n \'{\n "domain": "sample-domain"\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "decisiontasklistmap": {},\n "activitytasklistmap": {}\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::getworkflowexecutionhistory\n\n# get the history of workflow executions\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapigetworkflowexecutionhistory\n\n# example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::getworkflowexecutionhistory\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "history": {\n "events": [\n {\n "eventid": "1",\n "eventtime": "2023-09-12t05:34:46.107550z",\n "taskid": "9437321",\n "workflowexecutionstartedeventattributes": {\n "workflowtype": {\n "name": "sample-workflow-type"\n },\n "tasklist": {\n "name": "sample-task-list"\n },\n "input": {\n "data": "ikn1cmwhig=="\n },\n "executionstarttoclosetimeout": "61s",\n "taskstarttoclosetimeout": "60s",\n "originalexecutionrunid": "fd7c2283-79dd-458c-8306-e2d1d8217613",\n "identity": "client-name-visible-in-history",\n "firstexecutionrunid": "fd7c2283-79dd-458c-8306-e2d1d8217613",\n "firstdecisiontaskbackoff": "0s"\n }\n },\n {\n "eventid": "2",\n "eventtime": "2023-09-12t05:34:46.107565z",\n "taskid": "9437322",\n "decisiontaskscheduledeventattributes": {\n "tasklist": {\n "name": "sample-task-list"\n },\n "starttoclosetimeout": "60s"\n }\n },\n {\n "eventid": "3",\n "eventtime": "2023-09-12t05:34:59.184511z",\n "taskid": "9437330",\n "workflowexecutioncancelrequestedeventattributes": {\n "cause": "dummy",\n "identity": "client-name-visible-in-history"\n }\n },\n {\n "eventid": "4",\n "eventtime": "2023-09-12t05:35:47.112156z",\n "taskid": "9437332",\n "workflowexecutiontimedouteventattributes": {\n "timeouttype": "timeout_type_start_to_close"\n }\n }\n ]\n }\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::listtasklistpartitions\n\n# list all the task list partitions and the hostname for partitions\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapilisttasklistpartitions\n\n# example payload\n\n{\n "domain": "sample-domain",\n "task_list": {\n "name": "sample-task-list"\n }\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::listtasklistpartitions\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "task_list": {\n "name": 
"sample-task-list"\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "activitytasklistpartitions": [\n {\n "key": "sample-task-list",\n "ownerhostname": "127.0.0.1:7935"\n }\n ],\n "decisiontasklistpartitions": [\n {\n "key": "sample-task-list",\n "ownerhostname": "127.0.0.1:7935"\n }\n ]\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::refreshworkflowtasks\n\n# refresh all the tasks of a workflow\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapirefreshworkflowtasks\n\n# example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"\n }\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::refreshworkflowtasks\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::requestcancelworkflowexecution\n\n# cancel a workflow execution\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapirequestcancelworkflowexecution\n\n# example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"\n },\n "request_id": "8049b932-6c2f-415a-9bb2-241dcf4cfc9c",\n "cause": "dummy",\n "identity": "client-name-visible-in-history",\n "first_execution_run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::requestcancelworkflowexecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "fd7c2283-79dd-458c-8306-e2d1d8217613"\n },\n "request_id": "8049b932-6c2f-415a-9bb2-241dcf4cfc9c",\n "cause": "dummy",\n "identity": "client-name-visible-in-history",\n "first_execution_run_id": "fd7c2283-79dd-458c-8306-e2d1d8217613"\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::restartworkflowexecution\n\n# restart a previous workflow execution\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapirestartworkflowexecution\n\n# example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n },\n "identity": "client-name-visible-in-history",\n "reason": "dummy"\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h 
\'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::restartworkflowexecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n },\n "identity": "client-name-visible-in-history",\n "reason": "dummy"\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "runid": "82914458-3221-42b4-ae54-2e66dff864f7"\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::signalwithstartworkflowexecution\n\n# signal the currently open workflow if it exists, or attempt to start a new run based on the workflowidreusepolicy, and signal it\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapi::signalwithstartworkflowexecution\n\n# example payload\n\n{\n "start_request": {\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049b932-6c2f-415a-9bb2-241dcf4cfc9c",\n "input": {\n "data": "ikn1cmwhig=="\n }\n },\n "signal_name": "channela",\n "signal_input": {\n "data": "mta="\n }\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::signalwithstartworkflowexecution\' \\\n -d \\\n \'{\n "start_request": {\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049b932-6c2f-415a-9bb2-241dcf4cfc9c",\n "input": {\n "data": "ikn1cmwhig=="\n }\n },\n "signal_name": "channela",\n "signal_input": {\n "data": "mta="\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "runid": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::signalworkflowexecution\n\n# signal a workflow execution\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapi::signalworkflowexecution\n\n# example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n },\n "signal_name": "channela",\n "signal_input": {\n "data": "mta="\n }\n}\n\n\nrun_id is optional and allows signalling a specific run.\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::signalworkflowexecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n },\n "signal_name": "channela",\n "signal_input": {\n "data": "mta="\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{}\n\n
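note: the same signal can be sent from java without hand-rolling the rpc call. a minimal sketch, assuming a workflowclient initialised as in the java client sections later in this document; the SampleWorkflow interface, its signal method, and the two-argument newWorkflowStub overload are illustrative and may differ across client versions:\n\n// Sketch only: signal a running workflow through a typed stub.\n// SampleWorkflow is a hypothetical interface declaring "@SignalMethod void channelA(int value);".\nSampleWorkflow stub = workflowClient.newWorkflowStub(SampleWorkflow.class, "sample-workflow-id");\nstub.channelA(10); // delivers the channela signal to the open execution\n\n\n----------------------------------------\n\n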
post uber.cadence.api.v1.workflowapi::startworkflowexecution\n\n# start a new workflow execution\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapi::startworkflowexecution\n\n# example payload\n\n{\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049b932-6c2f-415a-9bb2-241dcf4cfc9c",\n "input": {\n "data": "ikn1cmwhig=="\n }\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::startworkflowexecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049b932-6c2f-415a-9bb2-241dcf4cfc9c",\n "input": {\n "data": "ikn1cmwhig=="\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{\n "runid": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n}\n\n\n----------------------------------------\n\npost uber.cadence.api.v1.workflowapi::terminateworkflowexecution\n\n# terminate a workflow execution\n\n# headers\n\nname example\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.workflowapi::terminateworkflowexecution\n\n# example payloads\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n },\n "reason": "dummy",\n "identity": "client-name-visible-in-history",\n "first_execution_run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n}\n\n\n# example curl\n\ncurl -x post http://0.0.0.0:8800 \\\n -h \'context-ttl-ms: 2000\' \\\n -h \'rpc-caller: curl-client\' \\\n -h \'rpc-service: cadence-frontend\' \\\n -h \'rpc-encoding: json\' \\\n -h \'rpc-procedure: uber.cadence.api.v1.workflowapi::terminateworkflowexecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n }\'\n\n\n# example successful response\n\nhttp code: 200\n\n{}\n\n\n----------------------------------------',charsets:{cjk:!0}},
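Note: the input.data and signal_input.data fields in the payloads above carry base64-encoded JSON values, and base64 is case-sensitive, so the lowercased strings shown in this normalized listing will not decode as-is (the original value was "IkN1cmwhIg==", which decodes to "Curl!" including the JSON quotes). A minimal Java sketch of producing such a value:\n\nimport java.nio.charset.StandardCharsets;\nimport java.util.Base64;\n\n// Serialize the workflow input as JSON, then base64-encode it for input.data.\nString json = "\"Curl!\"";\nString data = Base64.getEncoder().encodeToString(json.getBytes(StandardCharsets.UTF_8));\n// data is now "IkN1cmwhIg==".\n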
{title:"Introduction",frontmatter:{layout:"default",title:"Introduction",permalink:"/docs/concepts",readingShow:"top"},regularPath:"/docs/03-concepts/",relativePath:"docs/03-concepts/index.md",key:"v-347319df",path:"/docs/concepts/",codeSwitcherOptions:{},headersStr:null,content:"# Concepts\n\nCadence is a new, developer-friendly way to develop distributed applications.\n\nIt borrows the core terminology from the workflow-automation space. So its concepts include workflows and activities. Workflows can react to events and return internal state through queries.\n\nThe deployment topology explains how all these concepts are mapped to deployable software components.\n\nThe HTTP API reference describes how to use the HTTP API to interact with the Cadence server.",charsets:{}},{title:"Client SDK Overview",frontmatter:{layout:"default",title:"Client SDK Overview",permalink:"/docs/java-client/client-overview",readingShow:"top"},regularPath:"/docs/04-java-client/01-client-overview.html",relativePath:"docs/04-java-client/01-client-overview.md",key:"v-2f3b4398",path:"/docs/java-client/client-overview/",headers:[{level:2,title:"JavaDoc Packages",slug:"javadoc-packages",normalizedTitle:"javadoc packages",charIndex:169},{level:3,title:"com.uber.cadence.activity",slug:"com-uber-cadence-activity",normalizedTitle:"com.uber.cadence.activity",charIndex:190},{level:3,title:"com.uber.cadence.client",slug:"com-uber-cadence-client",normalizedTitle:"com.uber.cadence.client",charIndex:296},{level:3,title:"com.uber.cadence.workflow",slug:"com-uber-cadence-workflow",normalizedTitle:"com.uber.cadence.workflow",charIndex:446},{level:3,title:"com.uber.cadence.worker",slug:"com-uber-cadence-worker",normalizedTitle:"com.uber.cadence.worker",charIndex:506},{level:3,title:"com.uber.cadence.testing",slug:"com-uber-cadence-testing",normalizedTitle:"com.uber.cadence.testing",charIndex:572},{level:2,title:"Samples",slug:"samples",normalizedTitle:"samples",charIndex:26},{level:3,title:"com.uber.cadence.samples.hello",slug:"com-uber-cadence-samples-hello",normalizedTitle:"com.uber.cadence.samples.hello",charIndex:654},{level:3,title:"com.uber.cadence.samples.bookingsaga",slug:"com-uber-cadence-samples-bookingsaga",normalizedTitle:"com.uber.cadence.samples.bookingsaga",charIndex:843},{level:3,title:"com.uber.cadence.samples.fileprocessing",slug:"com-uber-cadence-samples-fileprocessing",normalizedTitle:"com.uber.cadence.samples.fileprocessing",charIndex:942}],codeSwitcherOptions:{},headersStr:"JavaDoc Packages com.uber.cadence.activity com.uber.cadence.client com.uber.cadence.workflow com.uber.cadence.worker com.uber.cadence.testing Samples com.uber.cadence.samples.hello com.uber.cadence.samples.bookingsaga com.uber.cadence.samples.fileprocessing",content:"# Client SDK Overview\n\n * Samples: https://github.com/uber/cadence-java-samples\n * JavaDoc documentation: https://www.javadoc.io/doc/com.uber.cadence/cadence-client\n\n\n# JavaDoc Packages\n\n\n# com.uber.cadence.activity\n\nAPIs to implement activities: accessing activity info or sending heartbeats.\n\n\n# com.uber.cadence.client\n\nAPIs for external application code to interact with Cadence workflows: start workflows, send signals or query workflows.\n\n\n# com.uber.cadence.workflow\n\nAPIs to implement workflows.\n\n\n# com.uber.cadence.worker\n\nAPIs to configure and start workers.\n\n\n# com.uber.cadence.testing\n\nAPIs to write unit tests for workflows.\n\n\n# Samples\n\n\n# com.uber.cadence.samples.hello\n\nSamples of how to use the basic feature: activity, local activity, ChildWorkflow, 
Query, etc. This is the most important package you need to start with.\n\n\n# com.uber.cadence.samples.bookingsaga\n\nAn end-to-end example of writing a workflow using the SAGA APIs.\n\n\n# com.uber.cadence.samples.fileprocessing\n\nAn end-to-end example of a workflow that downloads a file, zips it, and uploads the result to a destination.\n\nAn important requirement for such a workflow is that while the first activity can run on any host, the second and third must run on the same host as the first one. This is achieved through the use of a host-specific task list. The first activity returns the name of the host-specific task list and all other activities are dispatched using the stub that is configured with it. This assumes that FileProcessingWorker has a worker running on the same task list.",charsets:{}},
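The host-specific routing described in the fileprocessing sample above can be sketched as follows. This is an illustrative reconstruction, not the actual sample source; the activity names and the idea of returning the host task list from the first activity are the assumptions here:\n\n// Sketch (assumed names): the first activity runs on any host and returns the name\n// of a task list that only workers on that host poll; a second stub bound to that\n// task list routes the remaining activities to the same host.\nString hostTaskList = anyHostActivities.downloadAndGetHostTaskList(bucket, filename);\nActivityOptions hostSpecific = new ActivityOptions.Builder().setTaskList(hostTaskList).build();\nFileProcessingActivities hostActivities =\n Workflow.newActivityStub(FileProcessingActivities.class, hostSpecific);\nString processed = hostActivities.processFile(filename);\nhostActivities.upload(targetBucket, targetFilename, processed);\n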
{title:"Workflow interface",frontmatter:{layout:"default",title:"Workflow interface",permalink:"/docs/java-client/workflow-interface",readingShow:"top"},regularPath:"/docs/04-java-client/02-workflow-interface.html",relativePath:"docs/04-java-client/02-workflow-interface.md",key:"v-44a96002",path:"/docs/java-client/workflow-interface/",codeSwitcherOptions:{},headersStr:null,content:'# Workflow interface\n\nA workflow encapsulates the orchestration of activities and child workflows. It can also answer synchronous queries and receive external events (also known as signals).\n\nA workflow must define an interface class. All of its methods must have one of the following annotations:\n\n * @WorkflowMethod indicates an entry point to a workflow. It contains parameters such as timeouts and a task list. Required parameters (such as executionStartToCloseTimeoutSeconds) that are not specified through the annotation must be provided at runtime.\n * @SignalMethod indicates a method that reacts to external signals. It must have a void return type.\n * @QueryMethod indicates a method that reacts to synchronous query requests.\n\nYou can have more than one method with the same annotation (except @WorkflowMethod). For example:\n\npublic interface FileProcessingWorkflow {\n\n @WorkflowMethod(executionStartToCloseTimeoutSeconds = 10, taskList = "file-processing")\n String processFile(Arguments args);\n\n @QueryMethod(name="history")\n List<String> getHistory();\n\n @QueryMethod(name="status")\n String getStatus();\n\n @SignalMethod\n void retryNow();\n\n @SignalMethod\n void abandon();\n}\n\n\nWe recommend that you use a single value type argument for workflow methods. In this way, adding new arguments as fields to the value type is a backwards-compatible change.',charsets:{}},
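Before the interface above can do anything, a worker must host its implementation on the task list named in @WorkflowMethod; this is what the com.uber.cadence.worker package is for. A minimal sketch, assuming 3.x-style WorkerFactory APIs and a hypothetical FileProcessingActivitiesImpl:\n\n// Sketch: host the workflow implementation on the "file-processing" task list.\nWorkerFactory factory = WorkerFactory.newInstance(workflowClient);\nWorker worker = factory.newWorker("file-processing");\nworker.registerWorkflowImplementationTypes(FileProcessingWorkflowImpl.class);\nworker.registerActivitiesImplementations(new FileProcessingActivitiesImpl()); // hypothetical impl\nfactory.start();\n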
{title:"Implementing workflows",frontmatter:{layout:"default",title:"Implementing workflows",permalink:"/docs/java-client/implementing-workflows",readingShow:"top"},regularPath:"/docs/04-java-client/03-implementing-workflows.html",relativePath:"docs/04-java-client/03-implementing-workflows.md",key:"v-73f5d8c2",path:"/docs/java-client/implementing-workflows/",headers:[{level:2,title:"Calling Activities",slug:"calling-activities",normalizedTitle:"calling activities",charIndex:515},{level:2,title:"Calling Activities Asynchronously",slug:"calling-activities-asynchronously",normalizedTitle:"calling activities asynchronously",charIndex:2719},{level:2,title:"Workflow Implementation Constraints",slug:"workflow-implementation-constraints",normalizedTitle:"workflow implementation constraints",charIndex:5585}],codeSwitcherOptions:{},headersStr:"Calling Activities Calling Activities Asynchronously Workflow Implementation Constraints",content:"# Implementing workflows\n\nA workflow implementation implements a workflow interface. Each time a new workflow execution is started, a new instance of the implementation object is created. Then, one of the methods (depending on which workflow type has been started) annotated with @WorkflowMethod is invoked. As soon as this method returns, the workflow execution is closed. While the workflow execution is open, it can receive calls to signal and query methods. No additional calls to workflow methods are allowed. The workflow object is stateful, so signal and query methods can communicate with the other parts of the workflow through object fields.\n\n\n# Calling Activities\n\nWorkflow.newActivityStub returns a client-side stub that implements an activity interface. It takes the activity type and activity options as arguments. Activity options are needed only if some of the required timeouts are not specified through the @ActivityMethod annotation.\n\nCalling a method on this interface invokes an activity that implements this method. An activity invocation synchronously blocks until the activity completes, fails, or times out. Even if activity execution takes a few months, the workflow code still sees it as a single synchronous invocation. It doesn't matter what happens to the processes that host the activity. The business logic code just sees a single method call.\n\npublic class FileProcessingWorkflowImpl implements FileProcessingWorkflow {\n\n private final FileProcessingActivities activities;\n\n public FileProcessingWorkflowImpl() {\n this.activities = Workflow.newActivityStub(FileProcessingActivities.class);\n }\n\n @Override\n public void processFile(Arguments args) {\n String localName = null;\n String processedName = null;\n try {\n localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());\n processedName = activities.processFile(localName);\n activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);\n } finally {\n if (localName != null) { // File was downloaded.\n activities.deleteLocalFile(localName);\n }\n if (processedName != null) { // File was processed.\n activities.deleteLocalFile(processedName);\n }\n }\n }\n ...\n}\n\n\nIf different activities need different options, like timeouts or a task list, multiple client-side stubs can be created with different options.\n\npublic FileProcessingWorkflowImpl() {\n ActivityOptions options1 = new ActivityOptions.Builder()\n .setTaskList(\"taskList1\")\n .build();\n this.store1 = Workflow.newActivityStub(FileProcessingActivities.class, options1);\n\n ActivityOptions options2 = new ActivityOptions.Builder()\n .setTaskList(\"taskList2\")\n .build();\n this.store2 = Workflow.newActivityStub(FileProcessingActivities.class, options2);\n}\n\n\n\n
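The FileProcessingActivities interface used above is not shown on this page; a plausible shape, with timeouts supplied through @ActivityMethod, is sketched below (a reconstruction under assumed timeout values, not the actual sample source):\n\npublic interface FileProcessingActivities {\n\n @ActivityMethod(scheduleToCloseTimeoutSeconds = 300)\n String download(String bucketName, String filename);\n\n @ActivityMethod(scheduleToCloseTimeoutSeconds = 300)\n String processFile(String localName);\n\n @ActivityMethod(scheduleToCloseTimeoutSeconds = 300)\n List<String> processFiles(List<String> localNames);\n\n @ActivityMethod(scheduleToCloseTimeoutSeconds = 300)\n void upload(String bucketName, String filename, String localName);\n\n @ActivityMethod(scheduleToCloseTimeoutSeconds = 60)\n void deleteLocalFile(String filename);\n}\n\n\n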
# Calling Activities Asynchronously\n\nSometimes workflows need to perform certain operations in parallel. The Async class static methods allow you to invoke any activity asynchronously. The calls return a Promise result immediately. Promise is similar to both Java Future and CompletionStage. The Promise get method blocks until a result is available. It also exposes the thenApply and handle methods. See the Promise JavaDoc for technical details about the differences with Future.\n\nTo convert a synchronous call:\n\nString localName = activities.download(sourceBucket, sourceFile);\n\n\nTo convert it to an asynchronous style, the method reference is passed to Async.function or Async.procedure followed by its arguments:\n\nPromise<String> localNamePromise = Async.function(activities::download, sourceBucket, sourceFile);\n\n\nThen to wait synchronously for the result:\n\nString localName = localNamePromise.get();\n\n\nHere is the above example rewritten to call download and upload in parallel on multiple files:\n\npublic void processFile(Arguments args) {\n List<Promise<String>> localNamePromises = new ArrayList<>();\n List<String> processedNames = null;\n try {\n // Download all files in parallel.\n for (String sourceFilename : args.getSourceFilenames()) {\n Promise<String> localName = Async.function(activities::download,\n args.getSourceBucketName(), sourceFilename);\n localNamePromises.add(localName);\n }\n // allOf converts a list of promises to a single promise that contains a list\n // of each promise value.\n Promise<List<String>> localNamesPromise = Promise.allOf(localNamePromises);\n\n // All code until the next line wasn't blocking.\n // The promise get is a blocking call.\n List<String> localNames = localNamesPromise.get();\n processedNames = activities.processFiles(localNames);\n\n // Upload all results in parallel.\n List<Promise<Void>> uploadedList = new ArrayList<>();\n for (String processedName : processedNames) {\n Promise<Void> uploaded = Async.procedure(activities::upload,\n args.getTargetBucketName(), args.getTargetFilename(), processedName);\n uploadedList.add(uploaded);\n }\n // Wait for all uploads to complete.\n Promise<Void> allUploaded = Promise.allOf(uploadedList);\n allUploaded.get(); // blocks until all promises are ready.\n } finally {\n for (Promise<String> localNamePromise : localNamePromises) {\n // Skip files that haven't completed downloading.\n if (localNamePromise.isCompleted()) {\n activities.deleteLocalFile(localNamePromise.get());\n }\n }\n if (processedNames != null) {\n for (String processedName : processedNames) {\n activities.deleteLocalFile(processedName);\n }\n }\n }\n}\n\n\n\n
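As a small illustration of the thenApply method mentioned above, a derived Promise can be built without blocking (a sketch; the length transformation is arbitrary):\n\n// Sketch: derive a new Promise without blocking.\nPromise<String> localNamePromise = Async.function(activities::download, sourceBucket, sourceFile);\nPromise<Integer> nameLength = localNamePromise.thenApply(name -> name.length());\n// Nothing has blocked yet; nameLength.get() would block until the download completes.\n\n\n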
# Workflow Implementation Constraints\n\nCadence uses the Microsoft Azure Event Sourcing pattern to recover the state of a workflow object, including its threads and local variable values. In essence, every time a workflow state has to be restored, its code is re-executed from the beginning. When replaying, side effects (such as activity invocations) are ignored because they are already recorded in the workflow history. When writing workflow logic, the replay is not visible, so the code should be written as if it executes only once. This design puts the following constraints on the workflow implementation:\n\n * Do not use any mutable global variables because multiple instances of workflows are executed in parallel.\n * Do not call any non-deterministic functions like non-seeded random or UUID.randomUUID() directly from the workflow code.\n\nAlways do the following in workflow code:\n\n * Don't perform any IO or service calls, as they are not usually deterministic. Use activities for this.\n * Only use Workflow.currentTimeMillis() to get the current time inside a workflow.\n * Do not use native Java Thread or any other multi-threaded classes like ThreadPoolExecutor. Use Async.function or Async.procedure to execute code asynchronously.\n * Don't use any synchronization, locks, or other standard Java blocking concurrency-related classes besides those provided by the Workflow class. There is no need for explicit synchronization because multi-threaded code inside a workflow is executed one thread at a time and under a global lock.\n * Call WorkflowThread.sleep instead of Thread.sleep.\n * Use Promise and CompletablePromise instead of Future and CompletableFuture.\n * Use WorkflowQueue instead of BlockingQueue.\n\n * Use Workflow.getVersion when making any changes to the workflow code, as shown in the sketch after this list. Without this, any deployment of updated workflow code might break already open workflows.\n * Don't access configuration APIs directly from a workflow because changes in the configuration might affect a workflow execution path. Pass it as an argument to a workflow function or use an activity to load it.\n\n
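A minimal sketch of the Workflow.getVersion rule above; the change id \"fooChange\" and the second activity are illustrative:\n\n// Sketch: gate changed logic so replays of old executions stay deterministic.\nint version = Workflow.getVersion(\"fooChange\", Workflow.DEFAULT_VERSION, 1);\nString result;\nif (version == Workflow.DEFAULT_VERSION) {\n result = activities.processFile(localName); // path recorded by old executions\n} else {\n result = activities.processFileV2(localName); // new path; hypothetical activity\n}\n\n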
there is no need for explicit synchronization because multi-threaded code inside a workflow is executed one thread at a time and under a global lock.\n * call workflowthread.sleep instead of thread.sleep.\n * use promise and completablepromise instead of future and completablefuture.\n * use workflowqueue instead of blockingqueue.\n\n * use workflow.getversion when making any changes to the workflow code. without this, any deployment of updated workflow code might break already open workflows.\n * don’t access configuration apis directly from a workflow because changes in the configuration might affect a workflow execution path. pass it as an argument to a workflow function or use an activity to load it.\n\nworkflow method arguments and return values are serializable to a byte array using the provided dataconverter interface. the default implementation uses json serializer, but you can use any alternative serialization mechanism.\n\nthe values passed to workflows through invocation parameters or returned through a result value are recorded in the execution history. the entire execution history is transferred from the cadence service to workflow workers with every event that the workflow logic needs to process. a large execution history can thus adversely impact the performance of your workflow. therefore, be mindful of the amount of data that you transfer via invocation parameters or return values. otherwise, no additional limitations exist on workflow implementations.",charsets:{}},{title:"Starting workflows",frontmatter:{layout:"default",title:"Starting workflows",permalink:"/docs/java-client/starting-workflow-executions",readingShow:"top"},regularPath:"/docs/04-java-client/04-starting-workflow-executions.html",relativePath:"docs/04-java-client/04-starting-workflow-executions.md",key:"v-7106a8e2",path:"/docs/java-client/starting-workflow-executions/",headers:[{level:2,title:"Creating a WorkflowClient",slug:"creating-a-workflowclient",normalizedTitle:"creating a workflowclient",charIndex:35},{level:2,title:"Executing Workflows",slug:"executing-workflows",normalizedTitle:"executing workflows",charIndex:2593}],codeSwitcherOptions:{},headersStr:"Creating a WorkflowClient Executing Workflows",content:'# Starting workflow executions\n\n\n# Creating a WorkflowClient\n\nA workflow interface that executes a workflow requires initializing a WorkflowClient instance, creating a client side stub to the workflow, and then calling a method annotated with @WorkflowMethod.\n\nA simple WorkflowClient instance that utilises the TChannel communication protocol can be initialised as follows:\n\nWorkflowClient workflowClient =\n WorkflowClient.newInstance(\n new WorkflowServiceTChannel(\n ClientOptions.newBuilder().setHost(cadenceServiceHost).setPort(cadenceServicePort).build()),\n WorkflowClientOptions.newBuilder().setDomain(domain).build());\n// Create a workflow stub.\nFileProcessingWorkflow workflow = workflowClient.newWorkflowStub(FileProcessingWorkflow.class);\n\n\nAlternatively, if wishing to create a WorkflowClient that uses TLS, we can initialise a client that uses the gRPC communication protocol instead. 
First, additions will need to be made to the project\'s pom.xml:\n\n<dependency>\n <groupId>io.grpc</groupId>\n <artifactId>grpc-netty</artifactId>\n <version>LATEST.RELEASE.VERSION</version>\n</dependency>\n<dependency>\n <groupId>io.netty</groupId>\n <artifactId>netty-all</artifactId>\n <version>LATEST.RELEASE.VERSION</version>\n</dependency>\n\n\nThen, use the following client implementation; provide a TLS certificate with which the cluster has also been configured (replace "/path/to/cert/file" in the sample):\n\nWorkflowClient workflowClient =\n WorkflowClient.newInstance(\n new Thrift2ProtoAdapter(\n IGrpcServiceStubs.newInstance(\n ClientOptions.newBuilder().setGRPCChannel(\n NettyChannelBuilder.forAddress(cadenceServiceHost, cadenceServicePort)\n .useTransportSecurity()\n .defaultLoadBalancingPolicy("round_robin")\n .sslContext(GrpcSslContexts.forClient()\n .trustManager(new File("/path/to/cert/file"))\n .build()).build()).build())),\n WorkflowClientOptions.newBuilder().setDomain(domain).build());\n// Create a workflow stub.\nFileProcessingWorkflow workflow = workflowClient.newWorkflowStub(FileProcessingWorkflow.class);\n\n\nOr, if you are using a version prior to 3.0.0, a WorkflowClient can be created as follows:\n\nWorkflowClient workflowClient = WorkflowClient.newClient(cadenceServiceHost, cadenceServicePort, domain);\n// Create a workflow stub.\nFileProcessingWorkflow workflow = workflowClient.newWorkflowStub(FileProcessingWorkflow.class);\n\n\n\n# Executing Workflows\n\nThere are two ways to start a workflow: asynchronously and synchronously. Asynchronous start initiates a workflow and immediately returns to the caller. This is the most common way to start workflows in production code. Synchronous invocation starts a workflow and then waits for its completion. If the process that started the workflow crashes or stops waiting, the workflow continues executing. Because workflows are potentially long running, and crashes of clients happen, this is not very commonly found in production use.\n\nAsynchronous start:\n\n// Returns as soon as the workflow starts.\nWorkflowExecution workflowExecution = WorkflowClient.start(workflow::processFile, workflowArgs);\n\nSystem.out.println("Started process file workflow with workflowId=\\"" + workflowExecution.getWorkflowId()\n + "\\" and runId=\\"" + workflowExecution.getRunId() + "\\"");\n\n\nSynchronous start:\n\n// Start a workflow and then wait for a result.\n// Note that if the waiting process is killed, the workflow will continue execution.\nString result = workflow.processFile(workflowArgs);\n\n\nIf you need to wait for a completion after an asynchronous start, the most straightforward way is to call the blocking version again. If WorkflowOptions.WorkflowIdReusePolicy is not AllowDuplicate, then instead of throwing DuplicateWorkflowException, it reconnects to an existing workflow and waits for its completion. The following example shows how to do this from a different process than the one that started the workflow. 
All this process needs is a WorkflowID.\n\nWorkflowExecution execution = new WorkflowExecution().setWorkflowId(workflowId);\nFileProcessingWorkflow workflow = workflowClient.newWorkflowStub(execution);\n// Returns result potentially waiting for workflow to complete.\nString result = workflow.processFile(workflowArgs);\n
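\nA minimal sketch combining the two styles, reusing only the workflow stub, workflowClient, and workflowArgs variables from the samples above (no API beyond what is shown on this page):\n\n// Start asynchronously; only the start itself is awaited here.\nWorkflowExecution started = WorkflowClient.start(workflow::processFile, workflowArgs);\n// Later, possibly from a different process: reconnect by workflow ID and block for the result.\nWorkflowExecution target = new WorkflowExecution().setWorkflowId(started.getWorkflowId());\nFileProcessingWorkflow reconnected = workflowClient.newWorkflowStub(target);\nString fileResult = reconnected.processFile(workflowArgs);\n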
',normalizedContent:'',charsets:{}},{title:"Activity interface",frontmatter:{layout:"default",title:"Activity interface",permalink:"/docs/java-client/activity-interface",readingShow:"top"},regularPath:"/docs/04-java-client/05-activity-interface.html",relativePath:"docs/04-java-client/05-activity-interface.md",key:"v-4af1f23c",path:"/docs/java-client/activity-interface/",codeSwitcherOptions:{},headersStr:null,content:"# Activity interface\n\nAn activity is a manifestation of a particular task in the business logic.\n\nActivities are defined as methods of a plain Java interface. Each method defines a single activity type. A single workflow can use more than one activity interface and call more than one activity method from the same interface. The only requirement is that method arguments and return values are serializable to a byte array using the provided DataConverter interface. The default implementation uses a JSON serializer, but an alternative implementation can be easily configured.\n\nFollowing is an example of an interface that defines four activities:\n\npublic interface FileProcessingActivities {\n\n void upload(String bucketName, String localName, String targetName);\n\n String download(String bucketName, String remoteName);\n\n @ActivityMethod(scheduleToCloseTimeoutSeconds = 2)\n String processFile(String localName);\n\n void deleteLocalFile(String fileName);\n}\n\n\n\nWe recommend using a single value type argument for activity methods. In this way, adding new arguments as fields to the value type is a backwards-compatible change.
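\n\nFor illustration, a hypothetical value type matching the workflow samples in these docs could look like the sketch below; only the getters appear in the original samples, the rest is an assumption:\n\npublic class Arguments {\n\n private String sourceBucketName;\n private String sourceFilename;\n private String targetBucketName;\n private String targetFilename;\n\n public String getSourceBucketName() { return sourceBucketName; }\n public String getSourceFilename() { return sourceFilename; }\n public String getTargetBucketName() { return targetBucketName; }\n public String getTargetFilename() { return targetFilename; }\n\n // Adding a new field here is a backwards-compatible change for callers.\n}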
\n\nAn optional @ActivityMethod annotation can be used to specify activity options like timeouts or a task list. Required options that are not specified through the annotation must be specified at runtime.",normalizedContent:"",charsets:{}},{title:"Implementing activities",frontmatter:{layout:"default",title:"Implementing activities",permalink:"/docs/java-client/implementing-activities",readingShow:"top"},regularPath:"/docs/04-java-client/06-implementing-activities.html",relativePath:"docs/04-java-client/06-implementing-activities.md",key:"v-b64a802c",path:"/docs/java-client/implementing-activities/",headers:[{level:2,title:"Accessing Activity Info",slug:"accessing-activity-info",normalizedTitle:"accessing activity info",charIndex:1518},{level:2,title:"Asynchronous Activity Completion",slug:"asynchronous-activity-completion",normalizedTitle:"asynchronous activity completion",charIndex:2514},{level:2,title:"Activity Heart Beating",slug:"activity-heart-beating",normalizedTitle:"activity heart beating",charIndex:3930}],codeSwitcherOptions:{},headersStr:"Accessing Activity Info Asynchronous Activity Completion Activity Heart Beating",content:'# Implementing activities\n\nAn activity implementation is an implementation of an activity interface. A single instance of the implementation is shared across multiple simultaneous activity invocations. Therefore, the activity implementation code must be thread safe.\n\nThe values passed to activities through invocation parameters or returned through a result value are recorded in the execution history. The entire execution history is transferred from the Cadence service to workflow workers when a workflow state needs to recover. A large execution history can thus adversely impact the performance of your workflow. Therefore, be mindful of the amount of data you transfer via activity invocation parameters or return values. Otherwise, no additional limitations exist on activity implementations.\n\npublic class FileProcessingActivitiesImpl implements FileProcessingActivities {\n\n private final AmazonS3 s3Client;\n\n private final String localDirectory;\n\n void upload(String bucketName, String localName, String targetName) {\n File f = new File(localName);\n s3Client.putObject(bucketName, targetName, f);\n }\n\n String download(String bucketName, String remoteName, String localName) {\n // Implementation omitted for brevity.\n return downloadFileFromS3(bucketName, remoteName, localDirectory + localName);\n }\n\n String processFile(String localName) {\n // Implementation omitted for brevity.\n return compressFile(localName);\n }\n\n void deleteLocalFile(String fileName) {\n File f = new File(localDirectory + fileName);\n f.delete();\n }\n}\n\n\n\n# Accessing Activity Info\n\nThe Activity class provides static getters to access information about the workflow execution that invoked it. Note that this information is stored in a thread local variable. 
Therefore, calls to accessors succeed only in the thread that invoked the activity function.\n\npublic class FileProcessingActivitiesImpl implements FileProcessingActivities {\n\n @Override\n public String download(String bucketName, String remoteName, String localName) {\n log.info("domain=" + Activity.getDomain());\n WorkflowExecution execution = Activity.getWorkflowExecution();\n log.info("workflowId=" + execution.getWorkflowId());\n log.info("runId=" + execution.getRunId());\n ActivityTask activityTask = Activity.getTask();\n log.info("activityId=" + activityTask.getActivityId());\n log.info("activityTimeout=" + activityTask.getStartToCloseTimeoutSeconds());\n return downloadFileFromS3(bucketName, remoteName, localDirectory + localName);\n }\n ...\n}\n\n\n\n# Asynchronous Activity Completion\n\nSometimes an activity lifecycle goes beyond a synchronous method invocation. For example, a request can be put in a queue and later a reply comes and is picked up by a different process. The whole request-reply interaction can be modeled as a single Cadence activity.\n\nTo indicate that an activity should not be completed upon its method return, call Activity.doNotCompleteOnReturn() from the original activity thread. Then later, when replies come, complete the activity using ActivityCompletionClient. To correlate activity invocation with completion, use either TaskToken or workflow and activity IDs.\n\npublic class FileProcessingActivitiesImpl implements FileProcessingActivities {\n\n public String download(String bucketName, String remoteName, String localName) {\n byte[] taskToken = Activity.getTaskToken(); // Used to correlate reply.\n asyncDownloadFileFromS3(taskToken, bucketName, remoteName, localDirectory + localName);\n Activity.doNotCompleteOnReturn();\n return "ignored"; // Return value is ignored when doNotCompleteOnReturn was called.\n }\n ...\n}\n\n\nWhen the download is complete, the download service potentially calls back from a different process:\n\npublic <R> void completeActivity(byte[] taskToken, R result) {\n completionClient.complete(taskToken, result);\n}\n\npublic void failActivity(byte[] taskToken, Exception failure) {\n completionClient.completeExceptionally(taskToken, failure);\n}\n
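\nFor context, the completionClient in this sketch would typically be obtained from the WorkflowClient; a minimal sketch, assuming the newActivityCompletionClient factory method of the Java client:\n\n// Create once and reuse; it completes activities by task token.\nActivityCompletionClient completionClient = workflowClient.newActivityCompletionClient();\ncompletionClient.complete(taskToken, localName); // localName here is the download result value\n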
the entire execution history is transferred from the cadence service to when a state needs to recover. a large execution history can thus adversely impact the performance of your . therefore, be mindful of the amount of data you transfer via invocation parameters or return values. otherwise, no additional limitations exist on implementations.\n\npublic class fileprocessingactivitiesimpl implements fileprocessingactivities {\n\n private final amazons3 s3client;\n\n private final string localdirectory;\n\n void upload(string bucketname, string localname, string targetname) {\n file f = new file(localname);\n s3client.putobject(bucket, remotename, f);\n }\n\n string download(string bucketname, string remotename, string localname) {\n // implementation omitted for brevity.\n return downloadfilefroms3(bucketname, remotename, localdirectory + localname);\n }\n\n string processfile(string localname) {\n // implementation omitted for brevity.\n return compressfile(localname);\n }\n\n void deletelocalfile(string filename) {\n file f = new file(localdirectory + filename);\n f.delete();\n }\n}\n\n\n\n# accessing activity info\n\nthe activity class provides static getters to access information about the that invoked it. note that this information is stored in a thread local variable. therefore, calls to accessors succeed only in the thread that invoked the function.\n\npublic class fileprocessingactivitiesimpl implements fileprocessingactivities {\n\n @override\n public string download(string bucketname, string remotename, string localname) {\n log.info("domain=" + activity.getdomain());\n workflowexecution execution = activity.getworkflowexecution();\n log.info("workflowid=" + execution.getworkflowid());\n log.info("runid=" + execution.getrunid());\n activitytask activitytask = activity.gettask();\n log.info("activityid=" + activitytask.getactivityid());\n log.info("activitytimeout=" + activitytask.getstarttoclosetimeoutseconds());\n return downloadfilefroms3(bucketname, remotename, localdirectory + localname);\n }\n ...\n}\n\n\n\n# asynchronous activity completion\n\nsometimes an lifecycle goes beyond a synchronous method invocation. for example, a request can be put in a queue and later a reply comes and is picked up by a different process. the whole request-reply interaction can be modeled as a single cadence .\n\nto indicate that an should not be completed upon its method return, call activity.donotcompleteonreturn() from the original thread. then later, when replies come, complete the using activitycompletionclient. to correlate invocation with completion, use either tasktoken or and ids.\n\npublic class fileprocessingactivitiesimpl implements fileprocessingactivities {\n\n public string download(string bucketname, string remotename, string localname) {\n byte[] tasktoken = activity.gettasktoken(); // used to correlate reply.\n asyncdownloadfilefroms3(tasktoken, bucketname, remotename, localdirectory + localname);\n activity.donotcompleteonreturn();\n return "ignored"; // return value is ignored when donotcompleteonreturn was called.\n }\n ...\n}\n\n\nwhen the download is complete, the download service potentially calls back from a different process:\n\npublic void completeactivity(byte[] tasktoken, r result) {\n completionclient.complete(tasktoken, result);\n}\n\npublic void failactivity(byte[] tasktoken, exception failure) {\n completionclient.completeexceptionally(tasktoken, failure);\n}\n\n\n\n# activity heart beating\n\nsome are long running. 
',normalizedContent:'',charsets:{}},{title:"Versioning",frontmatter:{layout:"default",title:"Versioning",permalink:"/docs/java-client/versioning",readingShow:"top"},regularPath:"/docs/04-java-client/07-versioning.html",relativePath:"docs/04-java-client/07-versioning.md",key:"v-3c541bc2",path:"/docs/java-client/versioning/",codeSwitcherOptions:{},headersStr:null,content:'# Versioning\n\nAs outlined in the Workflow Implementation Constraints section, workflow code has to be deterministic by taking the same code path when replaying history events. Any code change that affects the order in which decisions are generated breaks this assumption. The solution that allows updating code of already running workflows is to keep both the old and new code. When replaying, use the code version that the events were generated with and when executing a new code path, always take the new code.\n\nUse the Workflow.getVersion function to return a version of the code that should be executed and then use the returned value to pick a correct branch. Let\'s look at an example.\n\npublic void processFile(Arguments args) {\n String localName = null;\n String processedName = null;\n try {\n localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());\n processedName = activities.processFile(localName);\n activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);\n } finally {\n if (localName != null) { // File was downloaded.\n activities.deleteLocalFile(localName);\n }\n if (processedName != null) { // File was processed.\n activities.deleteLocalFile(processedName);\n }\n }\n}\n\n\nNow we decide to calculate the processed file checksum and pass it to upload. 
The correct way to implement this change is:\n\npublic void processFile(Arguments args) {\n String localName = null;\n String processedName = null;\n try {\n localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());\n processedName = activities.processFile(localName);\n int version = Workflow.getVersion("checksumAdded", Workflow.DEFAULT_VERSION, 1);\n if (version == Workflow.DEFAULT_VERSION) {\n activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);\n } else {\n long checksum = activities.calculateChecksum(processedName);\n activities.uploadWithChecksum(\n args.getTargetBucketName(), args.getTargetFilename(), processedName, checksum);\n }\n } finally {\n if (localName != null) { // File was downloaded.\n activities.deleteLocalFile(localName);\n }\n if (processedName != null) { // File was processed.\n activities.deleteLocalFile(processedName);\n }\n }\n}\n\n\nLater, when all workflows that use the old version are completed, the old branch can be removed.\n\npublic void processFile(Arguments args) {\n String localName = null;\n String processedName = null;\n try {\n localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());\n processedName = activities.processFile(localName);\n // getVersion call is left here to ensure that any attempt to replay history\n // for a different version fails. It can be removed later when there is no possibility\n // of this happening.\n Workflow.getVersion("checksumAdded", 1, 1);\n long checksum = activities.calculateChecksum(processedName);\n activities.uploadWithChecksum(\n args.getTargetBucketName(), args.getTargetFilename(), processedName, checksum);\n } finally {\n if (localName != null) { // File was downloaded.\n activities.deleteLocalFile(localName);\n }\n if (processedName != null) { // File was processed.\n activities.deleteLocalFile(processedName);\n }\n }\n}\n\n\nThe ID that is passed to the getVersion call identifies the change. Each change is expected to have its own ID. But if a change spans multiple places in the code and the new code should be either executed in all of them or in none of them, then they have to share the ID.
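\n\nTo illustrate that rule, here is a sketch with two call sites sharing one change ID; the second call site and the verifyChecksum activity are hypothetical, not part of the original sample:\n\n// Call site 1: choose the upload flavor based on the shared change ID.\nint version = Workflow.getVersion("checksumAdded", Workflow.DEFAULT_VERSION, 1);\nif (version == Workflow.DEFAULT_VERSION) {\n activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);\n} else {\n activities.uploadWithChecksum(\n args.getTargetBucketName(), args.getTargetFilename(), processedName, checksum);\n}\n// Call site 2, elsewhere in the same workflow: reuse the same "checksumAdded" ID\n// instead of introducing a new one, so both sites flip together for a given execution.\nif (Workflow.getVersion("checksumAdded", Workflow.DEFAULT_VERSION, 1) > Workflow.DEFAULT_VERSION) {\n activities.verifyChecksum(args.getTargetBucketName(), args.getTargetFilename(), checksum);\n}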
',normalizedContent:'',charsets:{}},{title:"Distributed CRON",frontmatter:{layout:"default",title:"Distributed CRON",permalink:"/docs/java-client/distributed-cron",readingShow:"top"},regularPath:"/docs/04-java-client/08-distributed-cron.html",relativePath:"docs/04-java-client/08-distributed-cron.md",key:"v-423a333c",path:"/docs/java-client/distributed-cron/",headers:[{level:2,title:"Convert an existing cron workflow",slug:"convert-an-existing-cron-workflow",normalizedTitle:"convert an existing cron workflow",charIndex:2157},{level:2,title:"Retrieve last successful result",slug:"retrieve-last-successful-result",normalizedTitle:"retrieve last successful result",charIndex:2623}],codeSwitcherOptions:{},headersStr:"Convert an existing cron workflow Retrieve last successful result",content:'# Distributed CRON\n\nIt is relatively straightforward to turn any Cadence workflow into a Cron workflow. All you need is to supply a cron schedule when starting the workflow using the CronSchedule parameter of StartWorkflowOptions.\n\nYou can also start a workflow using the Cadence CLI with an optional cron schedule using the --cron argument.\n\nFor workflows with CronSchedule:\n\n * CronSchedule is based on UTC time. For example cron schedule "15 8 * * *" will run daily at 8:15am UTC. 
Another example "*/2 * * * 5-6" will schedule a workflow every two minutes on Fridays and Saturdays.\n * If a workflow failed and a RetryPolicy is supplied to the StartWorkflowOptions as well, the workflow will retry based on the RetryPolicy. While the workflow is retrying, the server will not schedule the next cron run.\n * Cadence server only schedules the next cron run after the current run is completed. If the next schedule is due while a workflow is running (or retrying), then it will skip that schedule.\n * Cron workflows will not stop until they are terminated or cancelled.\n\nCadence supports the standard cron spec:\n\n// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run\n// as a cron based on the schedule. The scheduling will be based on UTC time. The schedule for the next run only happens\n// after the current run is completed/failed/timeout. If a RetryPolicy is also supplied, and the workflow failed\n// or timed out, the workflow will be retried based on the retry policy. While the workflow is retrying, it won\'t\n// schedule its next run. If the next schedule is due while the workflow is running (or retrying), then it will skip that\n// schedule. Cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).\n// The cron spec is as follows:\n// ┌───────────── minute (0 - 59)\n// │ ┌───────────── hour (0 - 23)\n// │ │ ┌───────────── day of the month (1 - 31)\n// │ │ │ ┌───────────── month (1 - 12)\n// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n// │ │ │ │ │\n// │ │ │ │ │\n// * * * * *\nCronSchedule string\n\n\nCadence also supports more advanced cron expressions.\n\nThe crontab guru site is useful for testing your cron expressions.\n\n\n# Convert an existing cron workflow\n\nBefore CronSchedule was available, the previous approach to implementing cron workflows was to use a delay timer as the last step and then return ContinueAsNew. One problem with that implementation is that if the workflow fails or times out, the cron would stop.\n\nTo convert those workflows to make use of Cadence CronSchedule, all you need is to remove the delay timer and return without using ContinueAsNew. Then start the workflow with the desired CronSchedule.\n\n\n# Retrieve last successful result\n\nSometimes it is useful to obtain the progress of previous successful runs. This is supported by two new APIs in the client library: HasLastCompletionResult and GetLastCompletionResult. Below is an example of how to use this in Java:\n\npublic String cronWorkflow() {\n String lastProcessedFileName = Workflow.getLastCompletionResult(String.class);\n\n // Process work starting from the lastProcessedFileName.\n // Business logic implementation goes here.\n // Updates lastProcessedFileName to the new value.\n\n return lastProcessedFileName;\n}\n\n\nNote that this works even if one of the cron schedule runs failed. The next schedule will still get the last successful result if it ever successfully completed at least once. For example, for a daily cron workflow, if the first day run succeeds and the second day fails, then the third day run will still get the result from first day\'s run using these APIs.
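\n\nIn the Java client the cron schedule is supplied through the workflow options when creating the stub; a hedged sketch (the CronWorkflow interface is hypothetical, and builder method names such as setCronSchedule are assumed from the Java client and may differ by version):\n\nWorkflowOptions options = new WorkflowOptions.Builder()\n .setTaskList(TASK_LIST)\n .setCronSchedule("15 8 * * *") // daily at 8:15am UTC\n .setExecutionStartToCloseTimeout(Duration.ofHours(1))\n .build();\nCronWorkflow workflow = workflowClient.newWorkflowStub(CronWorkflow.class, options);\nWorkflowClient.start(workflow::cronWorkflow);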
',normalizedContent:'',charsets:{}}
,{title:"Worker service",frontmatter:{layout:"default",title:"Worker service",permalink:"/docs/java-client/workers",readingShow:"top"},regularPath:"/docs/04-java-client/09-workers.html",relativePath:"docs/04-java-client/09-workers.md",key:"v-47638d30",path:"/docs/java-client/workers/",codeSwitcherOptions:{},headersStr:null,content:"# Worker service\n\nA worker or worker service is a service that hosts the workflow and activity implementations. The worker polls the Cadence service for tasks, performs those tasks, and communicates task execution results back to the Cadence service. Worker services are developed, deployed, and operated by Cadence customers.\n\nYou can run a Cadence worker in a new or an existing service. Use the framework APIs to start the Cadence worker and link in all workflow and activity implementations that you require the service to execute.\n\n WorkerFactory factory = WorkerFactory.newInstance(workflowClient,\n WorkerFactoryOptions.newBuilder()\n .setMaxWorkflowThreadCount(1000)\n .setStickyCacheSize(100)\n .setDisableStickyExecution(false)\n .build());\n Worker worker = factory.newWorker(TASK_LIST,\n WorkerOptions.newBuilder()\n .setMaxConcurrentActivityExecutionSize(100)\n .setMaxConcurrentWorkflowExecutionSize(100)\n .build());\n \n // Workflows are stateful. So you need a type to create instances.\n worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class);\n // Activities are stateless and thread safe. So a shared instance is used.\n worker.registerActivitiesImplementations(new GreetingActivitiesImpl());\n // Start listening to the workflow and activity task lists.\n factory.start();\n\n\nThe code is slightly different if you are using a client version prior to 3.0.0:\n\nWorker.Factory factory = new Worker.Factory(DOMAIN,\n new Worker.FactoryOptions.Builder()\n .setMaxWorkflowThreadCount(1000)\n .setCacheMaximumSize(100)\n .setDisableStickyExecution(false)\n .build());\n Worker worker = factory.newWorker(TASK_LIST,\n new WorkerOptions.Builder()\n .setMaxConcurrentActivityExecutionSize(100)\n .setMaxConcurrentWorkflowExecutionSize(100)\n .build());\n // Workflows are stateful. So you need a type to create instances.\n worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class);\n // Activities are stateless and thread safe. So a shared instance is used.\n worker.registerActivitiesImplementations(new GreetingActivitiesImpl());\n // Start listening to the workflow and activity task lists.\n factory.start();\n\n\nThe WorkerFactoryOptions includes options that need to be shared across all workers on the host, like the thread pool and sticky cache.\n\nIn WorkerOptions you can customize things like pollerOptions and maximum activities per second.
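\n\nPutting the pieces from this page and the client-creation section together, a minimal worker entry point might look like this sketch (the class name, task list value, and domain value are assumptions for illustration):\n\npublic class WorkerMain {\n private static final String TASK_LIST = \"HelloWorldTaskList\";\n\n public static void main(String[] args) {\n WorkflowClient workflowClient =\n WorkflowClient.newInstance(\n new WorkflowServiceTChannel(ClientOptions.defaultInstance()),\n WorkflowClientOptions.newBuilder().setDomain(\"test-domain\").build());\n WorkerFactory factory = WorkerFactory.newInstance(workflowClient);\n Worker worker = factory.newWorker(TASK_LIST);\n worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class);\n worker.registerActivitiesImplementations(new GreetingActivitiesImpl());\n // Blocks only briefly; polling continues on background threads.\n factory.start();\n }\n}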
",normalizedContent:"",charsets:{}},{title:"Signals",frontmatter:{layout:"default",title:"Signals",permalink:"/docs/java-client/signals",readingShow:"top"},regularPath:"/docs/04-java-client/10-signals.html",relativePath:"docs/04-java-client/10-signals.md",key:"v-65cef250",path:"/docs/java-client/signals/",headers:[{level:2,title:"Implement Signal Handler in Workflow",slug:"implement-signal-handler-in-workflow",normalizedTitle:"implement signal handler in workflow",charIndex:1012},{level:2,title:"Signal From Command Line",slug:"signal-from-command-line",normalizedTitle:"signal from command line",charIndex:2494},{level:2,title:"SignalWithStart From Command Line",slug:"signalwithstart-from-command-line",normalizedTitle:"signalwithstart from command line",charIndex:6183},{level:2,title:"Signal from user/application code",slug:"signal-from-user-application-code",normalizedTitle:"signal from user/application code",charIndex:6851}],codeSwitcherOptions:{},headersStr:"Implement Signal Handler in Workflow Signal From Command Line SignalWithStart From Command Line Signal from user/application code",content:'# Signals\n\nSignals provide a mechanism to send data directly to a running workflow. Previously, you had two options for passing data to the workflow implementation:\n\n * Via start parameters\n * As return values from activities\n\nWith start parameters, we could only pass in values before workflow execution began.\n\nReturn values from activities allowed us to pass information to a running workflow, but this approach comes with its own complications. One major drawback is reliance on polling. 
This means that the data needs to be stored in a third-party location until it\'s ready to be picked up by the workflow. Further, the lifecycle of this activity requires management, and the activity requires manual restart if it fails before acquiring the data.\n\nSignals, on the other hand, provide a fully asynchronous and durable mechanism for providing data to a running workflow. When a signal is received for a running workflow, Cadence persists the event and the payload in the workflow history. The workflow can then process the signal at any time afterwards without the risk of losing the information. The workflow also has the option to stop execution by blocking on a signal channel.\n\n\n# Implement Signal Handler in Workflow\n\nSee the below example from the GettingStarted sample.\n\npublic interface HelloWorld {\n @WorkflowMethod\n void sayHello(String name);\n\n @SignalMethod\n void updateGreeting(String greeting);\n}\n\npublic static class HelloWorldImpl implements HelloWorld {\n\n private String greeting = "Hello";\n\n @Override\n public void sayHello(String name) {\n int count = 0;\n while (!"Bye".equals(greeting)) {\n logger.info(++count + ": " + greeting + " " + name + "!");\n String oldGreeting = greeting;\n Workflow.await(() -> !Objects.equals(greeting, oldGreeting));\n }\n logger.info(++count + ": " + greeting + " " + name + "!");\n }\n\n @Override\n public void updateGreeting(String greeting) {\n this.greeting = greeting;\n }\n}\n\n\nThe interface now has a new method annotated with @SignalMethod. It is a callback method that is invoked every time a new signal of "HelloWorld::updateGreeting" is delivered to a workflow. The interface can have only one @WorkflowMethod which is a main function of the workflow and as many signal methods as needed.\n\nThe updated implementation demonstrates a few important Cadence concepts. The first is that a workflow is stateful and can have fields of any complex type. Another is the Workflow.await function, which blocks until the function it receives as a parameter evaluates to true. The condition is going to be evaluated only on workflow state changes, so it is not a busy wait in the traditional sense.\n\n\n# Signal From Command Line\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --workflow_id "HelloSignal" --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \\"World\\"\nStarted Workflow Id: HelloSignal, run Id: 6fa204cb-f478-469a-9432-78060b83b6cd\n\n\nProgram output:\n\n16:53:56.120 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 1: Hello World!\n\n\nLet\'s send a signal using the CLI:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloSignal" --name "HelloWorld::updateGreeting" --input \\"Hi\\"\nSignal workflow succeeded.\n\n\nProgram output:\n\n16:53:56.120 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 1: Hello World!\n16:54:57.901 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 2: Hi World!\n\n\nTry sending the same signal with the same input again. Note that the output doesn\'t change. This happens because the await condition doesn\'t unblock when it sees the same value. 
But a new greeting unblocks it:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloSignal" --name "HelloWorld::updateGreeting" --input \\"Welcome\\"\nSignal workflow succeeded.\n\n\nProgram output:\n\n16:53:56.120 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 1: Hello World!\n16:54:57.901 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 2: Hi World!\n16:56:24.400 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 3: Welcome World!\n\n\nNow shut down the worker and send the same signal again:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloSignal" --name "HelloWorld::updateGreeting" --input \\"Welcome\\"\nSignal workflow succeeded.\n\n\nNote that sending signals as well as starting workflows does not need a worker running. The requests are queued inside the Cadence service.\n\nNow bring the worker back. Note that it doesn\'t log anything besides the standard startup messages. This occurs because it ignores the queued signal that contains the same input as the current value of greeting. Note that the restart of the worker didn\'t affect the workflow. It is still blocked on the same line of code as before the failure. This is the most important feature of Cadence. The workflow code doesn\'t need to deal with worker failures at all. Its state is fully recovered to its current state that includes all the local variables and threads.\n\nLet\'s look at the line where the workflow is blocked:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow stack --workflow_id "Hello2"\nQuery result:\n"workflow-root: (BLOCKED on await)\ncom.uber.cadence.internal.sync.SyncDecisionContext.await(SyncDecisionContext.java:546)\ncom.uber.cadence.internal.sync.WorkflowInternal.await(WorkflowInternal.java:243)\ncom.uber.cadence.workflow.Workflow.await(Workflow.java:611)\ncom.uber.cadence.samples.hello.GettingStarted$HelloWorldImpl.sayHello(GettingStarted.java:32)\nsun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nsun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)"\n\n\nYes, indeed the workflow is blocked on await. This feature works for any open workflow, greatly simplifying troubleshooting in production. Let\'s complete the workflow by sending a signal with a "Bye" greeting:\n\n16:58:22.962 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 4: Bye World!\n\n\nNote that the value of the count variable was not lost during the restart.\n\nAlso note that while a single worker instance is used for this walkthrough, any real production deployment has multiple worker instances running. So any worker failure or restart does not delay any workflow execution because it is just migrated to any other available worker.\n\n\n# SignalWithStart From Command Line\n\nYou may not know if a workflow is running and can accept a signal. The signalWithStart feature allows you to send a signal to the current workflow instance if one exists or to create a new run and then send the signal. 
SignalWithStartWorkflow therefore doesn\'t take a run ID as a parameter.\n\nLearn more from the --help manual:\n\ndocker run --network=host --rm ubercadence/cli:master --do test-domain workflow signalwithstart -h\nNAME:\n cadence workflow signalwithstart - signal the current open workflow if exists, or attempt to start a new run based on IDResuePolicy and signals it\n\nUSAGE:\n cadence workflow signalwithstart [command options] [arguments...]\n...\n...\n...\n\n\n\n# Signal from user/application code\n\nYou may want to signal workflows without running the command line.\n\nThe WorkflowClient API allows you to send a signal (or use SignalWithStartWorkflow) from outside of the workflow to the current workflow execution.\n\nNote that when using newWorkflowStub to signal a workflow, you MUST NOT pass WorkflowOptions.\n\nThe WorkflowStub with WorkflowOptions is only for starting workflows.\n\nThe WorkflowStub without WorkflowOptions is for signal or query.
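\n\nFor example, a hedged sketch of signaling from application code through the typed stub; the newWorkflowStub overload taking a workflow ID is assumed from the Java client:\n\n// No WorkflowOptions here: the stub addresses an existing workflow by ID.\nHelloWorld workflow = workflowClient.newWorkflowStub(HelloWorld.class, "HelloSignal");\n// Invoking the @SignalMethod sends the signal to the running workflow.\nworkflow.updateGreeting("Welcome");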
',normalizedContent:'',charsets:{}}
# Activity and workflow retries

Activities and workflows can fail due to various intermediate conditions. In those cases, we want to retry the failed activity or child workflow, or even the parent workflow. This can be achieved by supplying optional retry options.

> Note that it is sometimes also referred to as a RetryPolicy


# RetryOptions

RetryOptions includes the following.


# InitialInterval

Backoff interval for the first retry. If the coefficient is 1.0 then it is used for all retries. Required, no default value.


# BackoffCoefficient

Coefficient used to calculate the next retry backoff interval. The next retry interval is the previous interval multiplied by this coefficient. Must be 1 or larger. Default is 2.0.


# MaximumInterval

Maximum backoff interval between retries. Exponential backoff increases the interval over time; this value caps it. Default is 100x of the initial interval.


# ExpirationInterval

Maximum time to retry. Either ExpirationInterval or MaximumAttempts is required. When it is exceeded, the retries stop even if the maximum number of attempts has not been reached yet. The first (non-retry) attempt is unaffected by this field and is guaranteed to run for the entirety of the workflow timeout duration (ExecutionStartToCloseTimeoutSeconds).


# MaximumAttempts

Maximum number of attempts. When it is exceeded, the retries stop even if they have not expired yet. If not set or set to 0, it means unlimited, and ExpirationInterval is relied on to stop retrying. Either MaximumAttempts or ExpirationInterval is required.


# NonRetriableErrorReasons (via setDoNotRetry)

Non-retriable errors. This is optional. The Cadence server stops retrying if the error reason matches this list. Matching is exact, so adding RuntimeException.class to this list covers only RuntimeException itself, not its subclasses. The reason for this behaviour is to support server-side retries without knowledge of the Java exception hierarchy. When considering an exception type, the cause of ActivityFailureException and ChildWorkflowFailureException is looked at. Error and CancellationException are never retried and are not even passed to this filter.


# Activity Timeout Usage

It's probably too complicated to learn how to set those timeouts just by reading the above, so here is an easy recipe (a sketch for the "Regular Activity with retry" case follows this list).

LocalActivity without retry: use ScheduleToClose for the overall timeout.

Regular Activity without retry:

 1. Use ScheduleToClose for the overall timeout
 2. Leave ScheduleToStart and StartToClose empty
 3. If ScheduleToClose is too large (like 10 minutes), then set the Heartbeat timeout to a smaller value like 10s, and call the heartbeat API inside the activity regularly.

LocalActivity with retry:

 1. Use ScheduleToClose as the timeout of each attempt.
 2. Use retryOptions.InitialInterval, retryOptions.BackoffCoefficient, and retryOptions.MaximumInterval to control backoff.
 3. Use retryOptions.ExpirationInterval as the overall timeout of all attempts.
 4. Leave retryOptions.MaximumAttempts empty.

Regular Activity with retry:

 1. Use ScheduleToClose as the timeout of each attempt
 2. Leave ScheduleToStart and StartToClose empty
 3. If ScheduleToClose is too large (like 10 minutes), then set the Heartbeat timeout to a smaller value like 10s, and call the heartbeat API inside the activity regularly.
 4. Use retryOptions.InitialInterval, retryOptions.BackoffCoefficient, and retryOptions.MaximumInterval to control backoff.
 5. Use retryOptions.ExpirationInterval as the overall timeout of all attempts.
 6. Leave retryOptions.MaximumAttempts empty.
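As a sketch of the "Regular Activity with retry" recipe above, assuming an activity interface MyActivities (all durations are placeholder values; setExpiration is the builder method corresponding to ExpirationInterval):

// Activity stub options following the recipe above
// (com.uber.cadence.activity.ActivityOptions, com.uber.cadence.common.RetryOptions).
MyActivities activities =
    Workflow.newActivityStub(
        MyActivities.class,
        new ActivityOptions.Builder()
            // Timeout of each attempt; ScheduleToStart and StartToClose are left empty.
            .setScheduleToCloseTimeout(Duration.ofMinutes(1))
            // Heartbeat timeout, useful when a single attempt can run long.
            .setHeartbeatTimeout(Duration.ofSeconds(10))
            .setRetryOptions(
                new RetryOptions.Builder()
                    .setInitialInterval(Duration.ofSeconds(1))
                    .setBackoffCoefficient(2.0)
                    .setMaximumInterval(Duration.ofMinutes(1))
                    // Overall timeout across all attempts; MaximumAttempts is left empty.
                    .setExpiration(Duration.ofMinutes(10))
                    .build())
            .build());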
# Activity Timeout Internals


# Basics without Retry

Things are easier to understand in the world without retry, because Cadence started from it.

 * ScheduleToClose timeout is the overall end-to-end timeout from a workflow's perspective.

 * ScheduleToStart timeout is the time within which an activity worker must start the activity. When this timeout is exceeded, the activity returns a ScheduleToStart timeout error/exception to the workflow.

 * StartToClose timeout is the time an activity is allowed to run. Exceeding this returns a StartToClose timeout to the workflow.

 * Requirements and defaults:

   * Either ScheduleToClose is provided, or both ScheduleToStart and StartToClose are provided.
   * If only ScheduleToClose is provided, then ScheduleToStart and StartToClose default to it.
   * If only ScheduleToStart and StartToClose are provided, then ScheduleToClose = ScheduleToStart + StartToClose.
   * All of them are capped by the workflow timeout. (E.g. if the workflow timeout is 1 hour, setting 2 hours for ScheduleToClose still yields 1 hour: ScheduleToClose = Min(ScheduleToClose, workflowTimeout).)

So why do all three exist?

You may notice that ScheduleToClose is only useful when ScheduleToClose < ScheduleToStart + StartToClose, because if ScheduleToClose >= ScheduleToStart + StartToClose, the ScheduleToClose timeout is already enforced by the combination of the other two and becomes meaningless.

So the main use case for ScheduleToClose being less than the sum of the other two is when people want to limit the overall timeout of the activity but give more headroom to ScheduleToStart or StartToClose. This is an extremely rare use case.

Also, the main use case for distinguishing ScheduleToStart from StartToClose is when the workflow needs to do some special handling for a ScheduleToStart timeout error. This is also a very rare use case.

That is why the TL;DR above recommends only setting ScheduleToClose and leaving the other two empty: only in some rare cases do you need them. If you can't think of the use case, then you do not need it.

LocalActivity doesn't have ScheduleToStart/StartToClose because it is started directly inside the workflow worker, without server-side scheduling involved.


# Heartbeat timeout

Heartbeating is very important for long-running activities, to prevent them from getting stuck. Not only bugs can cause an activity to get stuck; a regular deployment, host restart, or failure can also cause it, because without heartbeats the Cadence server cannot know whether or not the activity is still being worked on. See more details here: https://stackoverflow.com/questions/65118584/solutions-to-stuck-timers-activities-in-cadence-swf-stepfunctions/65118585#65118585
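A minimal sketch of heartbeating from inside an activity implementation (the file-processing loop and processChunk helper are hypothetical placeholders):

// Inside an activity implementation (com.uber.cadence.activity.Activity).
public String processLargeFile(String fileName) {
  for (int chunk = 0; chunk < 1000; chunk++) {
    processChunk(fileName, chunk); // placeholder for the real work
    // Report liveness and progress; on retry, the last recorded details
    // can be read back via Activity.getHeartbeatDetails(...).
    Activity.heartbeat(chunk);
  }
  return "done";
}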
# RetryOptions and Activity with Retry

First of all, the RetryOptions here are for server-side backoff retry, meaning that the retry is managed automatically by Cadence without interacting with workflows. Because the retry is managed by Cadence, the activity has to be handled specially in Cadence history: the started event cannot be written until the activity is closed. Here is some reference: https://stackoverflow.com/questions/65113363/why-an-activity-task-is-scheduled-but-not-started/65113365#65113365

In fact, workflows can do client-side retry on their own, meaning the workflow manages the retry logic itself. You can write your own retry function, or use a helper function in the SDK, like Workflow.retry in cadence-java-client (see the sketch after this section). Client-side retry shows all start events immediately, but there will be many events in the history when retrying a single activity, so it is not recommended for performance reasons.

So what do the options mean:

 * ExpirationInterval:

   * It replaces the ScheduleToClose timeout and becomes the actual overall timeout of the activity across all attempts.
   * It is also capped by the workflow timeout like the other three timeout options: ScheduleToClose = Min(ScheduleToClose, workflowTimeout).
   * The timeout of each attempt is StartToClose, but StartToClose defaults to ScheduleToClose as explained above.
   * ScheduleToClose is extended to ExpirationInterval: ScheduleToClose = Max(ScheduleToClose, ExpirationInterval), and this happens before ScheduleToClose is copied to ScheduleToStart and StartToClose.

 * InitialInterval: the interval before the first retry.

 * BackoffCoefficient: self-explanatory.

 * MaximumInterval: the maximum interval between retries.

 * MaximumAttempts: the maximum number of attempts. If set together with ExpirationInterval, retrying stops when either one of them is exceeded.

 * Requirements and defaults:

   * Either MaximumAttempts or ExpirationInterval is required. ExpirationInterval is set to the workflow timeout if not provided.

Since ExpirationInterval is always there, and in fact it's more useful, while MaximumAttempts is quite confusing to use, the recommendation is to just use ExpirationInterval, unless you really need MaximumAttempts.
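A minimal sketch of the client-side retry mentioned above, using the Workflow.retry helper from inside workflow code (the activities stub and retry values are placeholder assumptions):

// Client-side retry: the workflow drives the retries itself, and every
// attempt shows up in the workflow history.
String result =
    Workflow.retry(
        new RetryOptions.Builder()
            .setInitialInterval(Duration.ofSeconds(1))
            .setMaximumAttempts(5)
            .build(),
        () -> activities.composeGreeting("Hello", "World"));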
# Queries

To expose a workflow's internal state to the external world, Cadence provides a synchronous query feature. From the implementer's point of view, the query is exposed as a synchronous callback that is invoked by external entities. Multiple such callbacks can be provided per workflow type, exposing different information to different external systems.

Query callbacks must be read-only and must not mutate the workflow state in any way. The other limitation is that the callback cannot contain any blocking code. Both limitations rule out the ability to invoke activities from the query handlers.


# Built-in Query: Stack Trace

If a workflow has been stuck at a state for longer than an expected period of time, you might want to query the current call stack. You can use the Cadence CLI to perform this query. For example:

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace

This command uses __stack_trace, which is a built-in query type supported by the Cadence client library. You can add custom query types to handle queries such as the current state of a workflow, or how many activities the workflow has completed.


# Customized Query

Cadence provides a query feature that supports synchronously returning any information from a workflow to an external caller.

The QueryMethod annotation indicates that a method is a query method. A query method can be used to query the workflow state by an external process at any time during its execution. This annotation applies only to workflow interface methods.

See the example code:

public interface HelloWorld {
    @WorkflowMethod
    void sayHello(String name);

    @SignalMethod
    void updateGreeting(String greeting);

    @QueryMethod
    int getCount();
}

public static class HelloWorldImpl implements HelloWorld {

    private String greeting = "Hello";
    private int count = 0;

    @Override
    public void sayHello(String name) {
        while (!"Bye".equals(greeting)) {
            logger.info(++count + ": " + greeting + " " + name + "!");
            String oldGreeting = greeting;
            Workflow.await(() -> !Objects.equals(greeting, oldGreeting));
        }
        logger.info(++count + ": " + greeting + " " + name + "!");
    }

    @Override
    public void updateGreeting(String greeting) {
        this.greeting = greeting;
    }

    @Override
    public int getCount() {
        return count;
    }
}

The new getCount method annotated with @QueryMethod was added to the interface definition.
It is allowed to have multiple query methods per interface.

The main restriction on the implementation of a query method is that it is not allowed to modify workflow state in any form. It is also not allowed to block its thread in any way. It usually just returns a value derived from the fields of the workflow object.


# Run Query from Command Line

Let's run the updated worker and send a couple of signals to it:

cadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --workflow_id "HelloQuery" --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \"World\"
Started Workflow Id: HelloQuery, run Id: 1925f668-45b5-4405-8cba-74f7c68c3135
cadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloQuery" --name "HelloWorld::updateGreeting" --input \"Hi\"
Signal workflow succeeded.
cadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloQuery" --name "HelloWorld::updateGreeting" --input \"Welcome\"
Signal workflow succeeded.

The output:

17:35:50.485 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 1: Hello World!
17:36:10.483 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 2: Hi World!
17:36:16.204 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 3: Welcome World!

Now let's query the workflow using the CLI:

cadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow query --workflow_id "HelloQuery" --query_type "HelloWorld::getCount"
Query result as JSON:
3

One limitation of the query is that it requires a worker process running, because it is executing callback code. An interesting feature of queries is that they work for completed workflows as well. Let's complete the workflow by sending "Bye" and query it.

cadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloQuery" --name "HelloWorld::updateGreeting" --input \"Bye\"
Signal workflow succeeded.
cadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow query --workflow_id "HelloQuery" --query_type "HelloWorld::getCount"
Query result as JSON:
4

The query method can accept parameters. This might be useful if only part of the state should be returned.


# Run Query from external application code

Queries can also be run through the WorkflowClient API, as sketched below. As with signals, the WorkflowStub without WorkflowOptions is the one to use for signal or query.
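A minimal sketch of running the getCount query from application code (the domain and workflow ID are placeholder assumptions):

// Client setup, as in the other examples in these docs.
WorkflowClient workflowClient =
    WorkflowClient.newInstance(
        new WorkflowServiceTChannel(ClientOptions.defaultInstance()),
        WorkflowClientOptions.newBuilder().setDomain("test-domain").build());

// No WorkflowOptions: the stub is bound to an existing run, for signal or query only.
HelloWorld workflow = workflowClient.newWorkflowStub(HelloWorld.class, "HelloQuery");

// Calling the @QueryMethod executes the read-only callback and returns its value.
int count = workflow.getCount();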
# Consistent Query

Query has two consistency levels, eventual and strong. Consider the case where you signal a workflow and then immediately query it:

cadence-cli --domain samples-domain workflow signal -w my_workflow_id -r my_run_id -n signal_name -if ./input.json

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state

In this example, if the signal were to change the workflow state, the query may or may not see that state update reflected in the query result. This is what it means for a query to be eventually consistent.

Query has another consistency level called strong consistency. A strongly consistent query is guaranteed to be based on workflow state which includes all events that came before the query was issued. An event is considered to have come before a query if the call creating the external event returned success before the query was issued. External events which are created while the query is outstanding may or may not be reflected in the workflow state the query result is based on.

In order to run a consistent query through the CLI, do the following:

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state --qcl strong

In order to run a strongly consistent query using application code, you need to use the service client (a sketch follows).

When using strongly consistent queries, you should expect higher latency than with eventually consistent queries.
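A rough sketch of issuing a strongly consistent query through the low-level service client. The Thrift-generated request types and field names below are assumptions based on the Cadence IDL and may differ across client versions:

// Low-level query so the consistency level can be set explicitly.
IWorkflowService service = new WorkflowServiceTChannel(ClientOptions.defaultInstance());

QueryWorkflowRequest request = new QueryWorkflowRequest();
request.setDomain("samples-domain");
request.setExecution(new WorkflowExecution().setWorkflowId("my_workflow_id"));
request.setQuery(new WorkflowQuery().setQueryType("HelloWorld::getCount"));
// Strong consistency: the result reflects all events that preceded the query.
request.setQueryConsistencyLevel(QueryConsistencyLevel.STRONG);

QueryWorkflowResponse response = service.QueryWorkflow(request);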
# Child workflows

Besides activities, a workflow can also orchestrate other workflows.

Workflow.newChildWorkflowStub enables the scheduling of other workflows from within a workflow's implementation. The parent workflow has the ability to monitor and impact the lifecycle of the child workflow, similar to the way it does for an activity that it invoked.

public static class GreetingWorkflowImpl implements GreetingWorkflow {

    @Override
    public String getGreeting(String name) {
        // Workflows are stateful. So a new stub must be created for each new child.
        GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);

        // This is a non-blocking call that returns immediately.
        // Use child.composeGreeting("Hello", name) to call synchronously.
        Promise<String> greeting = Async.function(child::composeGreeting, "Hello", name);
        // Do something else here.
        return greeting.get(); // blocks waiting for the child to complete.
    }

    // This example shows how a parent workflow can return right after starting a child workflow,
    // and let the child run on its own.
    private String demoAsyncChildRun(String name) {
        GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);
        // Non-blocking call that initiates the child workflow.
        Async.function(child::composeGreeting, "Hello", name);
        // Instead of using greeting.get() to block until the child completes,
        // sometimes we just want to return from the parent immediately and keep the child running.
        Promise<WorkflowExecution> childPromise = Workflow.getWorkflowExecution(child);
        childPromise.get(); // block until the child has started,
        // otherwise the child may not start because the parent completed first.
        return "let child run, parent just returns";
    }
}

Workflow.newChildWorkflowStub returns a client-side stub that implements a child workflow interface. It takes a child workflow type and optional child workflow options as arguments. Options may be needed to override the timeouts and task list if they differ from the ones defined in the @WorkflowMethod annotation or the parent workflow.

The first call to the child workflow stub must always be to a method annotated with @WorkflowMethod. Similar to activities, a call can be made synchronous or asynchronous by using Async#function or Async#procedure. The synchronous call blocks until the child workflow completes. The asynchronous call returns a Promise that can be used to wait for the completion. After an async call returns, the stub can be used to send signals to the child by calling methods annotated with @SignalMethod. Querying a child by calling methods annotated with @QueryMethod from within workflow code is not supported. However, queries can be done from outside the workflow code using the provided WorkflowClient stub.
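A minimal sketch of passing child workflow options (the task list name and timeout are placeholder values):

// Overriding the child's task list and timeout via ChildWorkflowOptions.
GreetingChild child =
    Workflow.newChildWorkflowStub(
        GreetingChild.class,
        new ChildWorkflowOptions.Builder()
            .setTaskList("ChildTaskList") // placeholder task list name
            .setExecutionStartToCloseTimeout(Duration.ofMinutes(5))
            .build());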
Running two children in parallel:

public static class GreetingWorkflowImpl implements GreetingWorkflow {

    @Override
    public String getGreeting(String name) {

        // Workflows are stateful, so a new stub must be created for each new child.
        GreetingChild child1 = Workflow.newChildWorkflowStub(GreetingChild.class);
        Promise<String> greeting1 = Async.function(child1::composeGreeting, "Hello", name);

        // Both children will run concurrently.
        GreetingChild child2 = Workflow.newChildWorkflowStub(GreetingChild.class);
        Promise<String> greeting2 = Async.function(child2::composeGreeting, "Bye", name);

        // Do something else here.
        ...
        return "First: " + greeting1.get() + ", second: " + greeting2.get();
    }
}

To send a signal to a child, call a method annotated with @SignalMethod:

public interface GreetingChild {
    @WorkflowMethod
    String composeGreeting(String greeting, String name);

    @SignalMethod
    void updateName(String name);
}

public static class GreetingWorkflowImpl implements GreetingWorkflow {

    @Override
    public String getGreeting(String name) {
        GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);
        Promise<String> greeting = Async.function(child::composeGreeting, "Hello", name);
        child.updateName("Cadence");
        return greeting.get();
    }
}

Calling methods annotated with @QueryMethod is not allowed from within workflow code.
# Exception Handling

By default, exceptions thrown by an activity are received by the workflow wrapped in a com.uber.cadence.workflow.ActivityFailureException.

Exceptions thrown by a child workflow are received by the parent workflow wrapped in a com.uber.cadence.workflow.ChildWorkflowFailureException.

Exceptions thrown by a workflow are received by the workflow client wrapped in a com.uber.cadence.client.WorkflowFailureException.

In this example, a workflow client executes a workflow, which executes a child workflow, which executes an activity that throws an IOException.
The resulting exception stack trace is:

 com.uber.cadence.client.WorkflowFailureException: WorkflowType="GreetingWorkflow::getGreeting", WorkflowID="38b9ce7a-e370-4cd8-a9f3-35e7295f7b3d", RunID="37ceb58c-9271-4fca-b5aa-ba06c5495214"
 at com.uber.cadence.internal.dispatcher.UntypedWorkflowStubImpl.getResult(UntypedWorkflowStubImpl.java:139)
 at com.uber.cadence.internal.dispatcher.UntypedWorkflowStubImpl.getResult(UntypedWorkflowStubImpl.java:111)
 at com.uber.cadence.internal.dispatcher.WorkflowExternalInvocationHandler.startWorkflow(WorkflowExternalInvocationHandler.java:187)
 at com.uber.cadence.internal.dispatcher.WorkflowExternalInvocationHandler.invoke(WorkflowExternalInvocationHandler.java:113)
 at com.sun.proxy.$Proxy2.getGreeting(Unknown Source)
 at com.uber.cadence.samples.hello.HelloException.main(HelloException.java:117)
 Caused by: com.uber.cadence.workflow.ChildWorkflowFailureException: WorkflowType="GreetingChild::composeGreeting", ID="37ceb58c-9271-4fca-b5aa-ba06c5495214:1", RunID="47859b47-da4c-4225-876a-462421c98c72", EventID=10
 at java.lang.Thread.getStackTrace(Thread.java:1559)
 at com.uber.cadence.internal.dispatcher.ChildWorkflowInvocationHandler.executeChildWorkflow(ChildWorkflowInvocationHandler.java:114)
 at com.uber.cadence.internal.dispatcher.ChildWorkflowInvocationHandler.invoke(ChildWorkflowInvocationHandler.java:71)
 at com.sun.proxy.$Proxy5.composeGreeting(Unknown Source:0)
 at com.uber.cadence.samples.hello.HelloException$GreetingWorkflowImpl.getGreeting(HelloException.java:70)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method:0)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at com.uber.cadence.internal.worker.POJOWorkflowImplementationFactory$POJOWorkflowImplementation.execute(POJOWorkflowImplementationFactory.java:160)
 Caused by: com.uber.cadence.workflow.ActivityFailureException: ActivityType="GreetingActivities::composeGreeting" ActivityID="1", EventID=7
 at java.lang.Thread.getStackTrace(Thread.java:1559)
 at com.uber.cadence.internal.dispatcher.ActivityInvocationHandler.invoke(ActivityInvocationHandler.java:75)
 at com.sun.proxy.$Proxy6.composeGreeting(Unknown Source:0)
 at com.uber.cadence.samples.hello.HelloException$GreetingChildImpl.composeGreeting(HelloException.java:85)
 ... 5 more
 Caused by: java.io.IOException: Hello World!
 at com.uber.cadence.samples.hello.HelloException$GreetingActivitiesImpl.composeGreeting(HelloException.java:93)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method:0)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at com.uber.cadence.internal.worker.POJOActivityImplementationFactory$POJOActivityImplementation.execute(POJOActivityImplementationFactory.java:162)

Note that IOException is a checked exception. The standard Java approach of adding throws IOException to the method signatures of the activity, child workflow, and workflow interfaces is not going to help, because at each level the exception is never received directly, but in wrapped form. Propagating it without wrapping would not allow adding additional context information, like the activity, child workflow, and parent workflow types and IDs.
The Cadence library solution is to provide a special wrapper method, Workflow.wrap(Exception), which wraps a checked exception in a special runtime exception. It is special because the framework strips it when chaining exceptions across logical process boundaries. In this example, the IOException is directly attached to the ActivityFailureException besides being wrapped when rethrown.

public class HelloException {

    static final String TASK_LIST = "HelloException";

    public interface GreetingWorkflow {
        @WorkflowMethod
        String getGreeting(String name);
    }

    public interface GreetingChild {
        @WorkflowMethod
        String composeGreeting(String greeting, String name);
    }

    public interface GreetingActivities {
        String composeGreeting(String greeting, String name);
    }

    /** Parent workflow implementation that calls GreetingChild#composeGreeting. */
    public static class GreetingWorkflowImpl implements GreetingWorkflow {

        @Override
        public String getGreeting(String name) {
            GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);
            return child.composeGreeting("Hello", name);
        }
    }

    /** Child workflow implementation. */
    public static class GreetingChildImpl implements GreetingChild {
        private final GreetingActivities activities =
            Workflow.newActivityStub(
                GreetingActivities.class,
                new ActivityOptions.Builder()
                    .setScheduleToCloseTimeout(Duration.ofSeconds(10))
                    .build());

        @Override
        public String composeGreeting(String greeting, String name) {
            return activities.composeGreeting(greeting, name);
        }
    }

    static class GreetingActivitiesImpl implements GreetingActivities {
        @Override
        public String composeGreeting(String greeting, String name) {
            try {
                throw new IOException(greeting + " " + name + "!");
            } catch (IOException e) {
                // Wrapping the exception, as checked exceptions in activity and workflow
                // interface methods are prohibited.
                // It will be unwrapped and attached as a cause to the ActivityFailureException.
                throw Workflow.wrap(e);
            }
        }
    }

    public static void main(String[] args) {
        // Get a new client.
        // NOTE: to set different options, you can do it like this:
        // ClientOptions.newBuilder().setRpcTimeout(5 * 1000).build();
        WorkflowClient workflowClient =
            WorkflowClient.newInstance(
                new WorkflowServiceTChannel(ClientOptions.defaultInstance()),
                WorkflowClientOptions.newBuilder().setDomain(DOMAIN).build());
        // Get a worker to poll the task list.
        WorkerFactory factory = WorkerFactory.newInstance(workflowClient);
        Worker worker = factory.newWorker(TASK_LIST);
        worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class, GreetingChildImpl.class);
        worker.registerActivitiesImplementations(new GreetingActivitiesImpl());
        factory.start();

        WorkflowOptions workflowOptions =
            new WorkflowOptions.Builder()
                .setTaskList(TASK_LIST)
                .setExecutionStartToCloseTimeout(Duration.ofSeconds(30))
                .build();
        GreetingWorkflow workflow =
            workflowClient.newWorkflowStub(GreetingWorkflow.class, workflowOptions);
        try {
            workflow.getGreeting("World");
            throw new IllegalStateException("unreachable");
        } catch (WorkflowException e) {
            Throwable cause = Throwables.getRootCause(e);
            // prints "Hello World!"
            System.out.println(cause.getMessage());
            System.out.println("\nStack Trace:\n" + Throwables.getStackTraceAsString(e));
        }
        System.exit(0);
    }

}

The code is slightly different if you are using a client version prior to 3.0.0:
public static void main(String[] args) {
    Worker.Factory factory = new Worker.Factory(DOMAIN);
    Worker worker = factory.newWorker(TASK_LIST);
    worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class, GreetingChildImpl.class);
    worker.registerActivitiesImplementations(new GreetingActivitiesImpl());
    factory.start();

    WorkflowClient workflowClient = WorkflowClient.newInstance(DOMAIN);
    WorkflowOptions workflowOptions =
        new WorkflowOptions.Builder()
            .setTaskList(TASK_LIST)
            .setExecutionStartToCloseTimeout(Duration.ofSeconds(30))
            .build();
    GreetingWorkflow workflow =
        workflowClient.newWorkflowStub(GreetingWorkflow.class, workflowOptions);
    try {
        workflow.getGreeting("World");
        throw new IllegalStateException("unreachable");
    } catch (WorkflowException e) {
        Throwable cause = Throwables.getRootCause(e);
        // prints "Hello World!"
        System.out.println(cause.getMessage());
        System.out.println("\nStack Trace:\n" + Throwables.getStackTraceAsString(e));
    }
    System.exit(0);
}
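Going the other way, a workflow that wants to handle an activity failure itself can catch the wrapper and inspect its cause chain; a minimal sketch (the fallback value is a placeholder):

// Inside a workflow implementation: handle the demo IOException by
// inspecting the cause of the ActivityFailureException wrapper.
@Override
public String composeGreeting(String greeting, String name) {
    try {
        return activities.composeGreeting(greeting, name);
    } catch (ActivityFailureException e) {
        if (e.getCause() instanceof IOException) {
            return "default greeting for " + name; // placeholder fallback
        }
        throw e;
    }
}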
factory.start();\n\n workflowclient workflowclient = workflowclient.newinstance(domain);\n workflowoptions workflowoptions =\n new workflowoptions.builder()\n .settasklist(task_list)\n .setexecutionstarttoclosetimeout(duration.ofseconds(30))\n .build();\n greetingworkflow workflow =\n workflowclient.newworkflowstub(greetingworkflow.class, workflowoptions);\n try {\n workflow.getgreeting("world");\n throw new illegalstateexception("unreachable");\n } catch (workflowexception e) {\n throwable cause = throwables.getrootcause(e);\n // prints "hello world!"\n system.out.println(cause.getmessage());\n system.out.println("\\nstack trace:\\n" + throwables.getstacktraceasstring(e));\n }\n system.exit(0);\n}\n',charsets:{}},{title:"Side Effect",frontmatter:{layout:"default",title:"Side Effect",permalink:"/docs/java-client/side-effect",readingShow:"top"},regularPath:"/docs/04-java-client/16-side-effect.html",relativePath:"docs/04-java-client/16-side-effect.md",key:"v-53d65f58",path:"/docs/java-client/side-effect/",headers:[{level:2,title:"Mutable Side Effect",slug:"mutable-side-effect",normalizedTitle:"mutable side effect",charIndex:1563}],codeSwitcherOptions:{},headersStr:"Mutable Side Effect",content:"# Side Effect\n\nSide Effect allows a workflow to execute the provided function once and records its result into the workflow history. The recorded result in the history is returned without executing the provided function during replay. This satisfies the deterministic requirement for workflows, as the exact same result is returned in replay. A common use case is to run short non-deterministic code in a workflow, such as getting a random number. The only way to fail SideEffect is to throw an exception, which causes a decision task failure. The decision task is rescheduled and re-executed after a timeout, giving SideEffect another chance to succeed.\n\n!!Caution: do not use the sideEffect function to modify any workflow state. Only use the SideEffect's return value. For example, this code is BROKEN:\n\nBad example:\n\n AtomicInteger result = new AtomicInteger();\n Random random = new Random();\n Workflow.sideEffect(Void.class, () -> {\n result.set(random.nextInt(100));\n return null;\n });\n // result will always be 0 in replay, thus this code is non-deterministic\n if (result.get() < 50) {\n ....\n } else {\n ....\n }\n\n\nOn replay the provided function is not executed, result will always be 0, and the workflow could take a different path, breaking the determinism.\n\nHere is the correct way to use sideEffect:\n\nGood example:\n\n int random = Workflow.sideEffect(Integer.class, () -> new Random().nextInt(100));\n if (random < 50) {\n ....\n } else {\n ....\n }\n\n\nIf the function throws an exception, it is not delivered to the workflow code. It is wrapped in an Error, causing the current decision to fail.\n\n\n# Mutable Side Effect\n\nMutableSideEffect is similar to sideEffect in allowing calls to non-deterministic functions from workflow code. The difference is that every sideEffect call in non-replay mode results in a new marker event recorded into the history, whereas mutableSideEffect only records a new marker if the value has changed. During replay, mutableSideEffect will not execute the function again; it will return the exact same value as it returned during the non-replay run.\n\nOne good use case of mutableSideEffect is to access a dynamically changing config without breaking determinism. 
Even if it is called very frequently, the config value is recorded only when it changes, so frequent calls do not cause any performance degradation due to a large history size.\n\n!!Caution: do not use the mutableSideEffect function to modify any workflow state. Only use the mutableSideEffect's return value.\n\nIf the function throws an exception, it is not delivered to the workflow code. It is wrapped in an Error, causing the current decision to fail.",normalizedContent:"# side effect\n\nside effect allows a workflow to execute the provided function once and records its result into the workflow history. the recorded result in the history is returned without executing the provided function during replay. this satisfies the deterministic requirement for workflows, as the exact same result is returned in replay. a common use case is to run short non-deterministic code in a workflow, such as getting a random number. the only way to fail sideeffect is to throw an exception, which causes a decision task failure. the decision task is rescheduled and re-executed after a timeout, giving sideeffect another chance to succeed.\n\n!!caution: do not use the sideeffect function to modify any workflow state. only use the sideeffect's return value. for example, this code is broken:\n\nbad example:\n\n atomicinteger result = new atomicinteger();\n random random = new random();\n workflow.sideeffect(void.class, () -> {\n result.set(random.nextint(100));\n return null;\n });\n // result will always be 0 in replay, thus this code is non-deterministic\n if (result.get() < 50) {\n ....\n } else {\n ....\n }\n\n\non replay the provided function is not executed, result will always be 0, and the workflow could take a different path, breaking the determinism.\n\nhere is the correct way to use sideeffect:\n\ngood example:\n\n int random = workflow.sideeffect(integer.class, () -> new random().nextint(100));\n if (random < 50) {\n ....\n } else {\n ....\n }\n\n\nif the function throws an exception, it is not delivered to the workflow code. it is wrapped in an error, causing the current decision to fail.\n\n\n# mutable side effect\n\nmutablesideeffect is similar to sideeffect in allowing calls to non-deterministic functions from workflow code. the difference is that every sideeffect call in non-replay mode results in a new marker event recorded into the history, whereas mutablesideeffect only records a new marker if the value has changed. during replay, mutablesideeffect will not execute the function again; it will return the exact same value as it returned during the non-replay run.\n\none good use case of mutablesideeffect is to access a dynamically changing config without breaking determinism. even if it is called very frequently, the config value is recorded only when it changes, so frequent calls do not cause any performance degradation due to a large history size.\n\n!!caution: do not use the mutablesideeffect function to modify any workflow state. only use the mutablesideeffect's return value.\n\nif the function throws an exception, it is not delivered to the workflow code. 
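The mutable side effect section above has no standalone snippet, so here is a minimal sketch of the dynamic-config use case it describes. It assumes the Workflow.mutableSideEffect(id, resultClass, updated, func) overload; readPollIntervalFromConfig is a hypothetical non-deterministic config lookup:

    int pollInterval =
        Workflow.mutableSideEffect(
            "pollIntervalSeconds", // id of the marker recorded in history
            Integer.class,
            (oldValue, newValue) -> !oldValue.equals(newValue), // record only on change
            () -> readPollIntervalFromConfig()); // non-deterministic read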
it is wrapped in an error, causing the current decision to fail.",charsets:{}},{title:"Continue As New",frontmatter:{layout:"default",title:"Continue As New",permalink:"/docs/java-client/continue-as-new",readingShow:"top"},regularPath:"/docs/04-java-client/15-continue-as-new.html",relativePath:"docs/04-java-client/15-continue-as-new.md",key:"v-68ae0de4",path:"/docs/java-client/continue-as-new/",codeSwitcherOptions:{},headersStr:null,content:'# Continue as new\n\nWorkflows that need to rerun periodically could naively be implemented as a big for loop with a sleep, where the entire logic of the workflow is inside the body of the for loop. The problem with this approach is that the history for that workflow will keep growing to a point where it reaches the maximum size enforced by the service.\n\nContinueAsNew is the low level construct that enables implementing such workflows without the risk of failures down the road. The operation atomically completes the current execution and starts a new execution of the workflow with the same workflow ID. The new execution will not carry over any history from the old execution.\n\n@Override\npublic void greet(String name) {\n activities.greet("Hello " + name + "!");\n Workflow.continueAsNew(name);\n}\n\n',normalizedContent:'# continue as new\n\nworkflows that need to rerun periodically could naively be implemented as a big for loop with a sleep, where the entire logic of the workflow is inside the body of the for loop. the problem with this approach is that the history for that workflow will keep growing to a point where it reaches the maximum size enforced by the service.\n\ncontinueasnew is the low level construct that enables implementing such workflows without the risk of failures down the road. the operation atomically completes the current execution and starts a new execution of the workflow with the same workflow id. the new execution will not carry over any history from the old execution.\n\n@override\npublic void greet(string name) {\n activities.greet("hello " + name + "!");\n workflow.continueasnew(name);\n}\n\n',charsets:{}},{title:"Testing",frontmatter:{layout:"default",title:"Testing",permalink:"/docs/java-client/testing",readingShow:"top"},regularPath:"/docs/04-java-client/17-testing.html",relativePath:"docs/04-java-client/17-testing.md",key:"v-56629f80",path:"/docs/java-client/testing/",headers:[{level:2,title:"Workflow Test Environment",slug:"workflow-test-environment",normalizedTitle:"workflow test environment",charIndex:833}],codeSwitcherOptions:{},headersStr:"Workflow Test Environment",content:'# Activity Test Environment\n\nTestActivityEnvironment is the helper class for unit testing activity implementations. It supports calls to Activity methods from the tested activities. An example test:\n\nSee full example here.\n\n\n public interface TestActivity {\n String activity1(String input);\n }\n\n private static class ActivityImpl implements TestActivity {\n @Override\n public String activity1(String input) {\n return Activity.getTask().getActivityType().getName() + "-" + input;\n }\n }\n\n @Test\n public void testSuccess() {\n testEnvironment.registerActivitiesImplementations(new ActivityImpl());\n TestActivity activity = testEnvironment.newActivityStub(TestActivity.class);\n String result = activity.activity1("input1");\n assertEquals("TestActivity::activity1-input1", result);\n }\n\n\n\n\n# Workflow Test Environment\n\nTestWorkflowEnvironment provides workflow unit testing capabilities.\n\nTesting workflow code is hard because it can potentially run for a very long time. 
The included in-memory implementation of the Cadence service supports automatic time skipping. Anytime the workflow under test or the unit test code is waiting on a timer (or sleep), the internal service time is automatically advanced to the nearest time that unblocks one of the waiting threads. This way a workflow that runs in production for months can be unit tested in milliseconds. Here is an example of a test that executes in a few milliseconds instead of the more than two hours needed for the workflow to complete.\n\nSee full example here.\n\npublic class SignaledWorkflowImpl implements SignaledWorkflow {\n private String signalInput;\n\n @Override\n public String workflow1(String input) {\n Workflow.sleep(Duration.ofHours(1));\n Workflow.await(() -> signalInput != null);\n Workflow.sleep(Duration.ofHours(1));\n return signalInput + "-" + input;\n }\n\n @Override\n public void processSignal(String input) {\n signalInput = input;\n }\n}\n\n@Test\npublic void testSignal() throws ExecutionException, InterruptedException {\n // Get a workflow stub using the same task list the worker uses.\n WorkflowOptions workflowOptions =\n new WorkflowOptions.Builder()\n .setTaskList(HelloSignal.TASK_LIST)\n .setExecutionStartToCloseTimeout(Duration.ofDays(30))\n .build();\n GreetingWorkflow workflow =\n workflowClient.newWorkflowStub(GreetingWorkflow.class, workflowOptions);\n\n // Start workflow asynchronously to not use another thread to signal.\n WorkflowClient.start(workflow::getGreetings);\n\n // After start for getGreetings returns, the workflow is guaranteed to be started.\n // So we can send a signal to it using the workflow stub immediately.\n // But just to demonstrate the unit testing of a long running workflow, add a long sleep here.\n testEnv.sleep(Duration.ofDays(1));\n // This workflow keeps receiving signals until exit is called.\n workflow.waitForName("World");\n workflow.waitForName("Universe");\n workflow.exit();\n // Calling synchronous getGreetings after the workflow has started reconnects to the\n // existing workflow and blocks until the result is available. Note that this behavior\n // assumes that WorkflowOptions are not configured with\n // WorkflowIdReusePolicy.AllowDuplicate. In that case the call would fail with\n // WorkflowExecutionAlreadyStartedException.\n List<String> greetings = workflow.getGreetings();\n assertEquals(2, greetings.size());\n assertEquals("Hello World!", greetings.get(0));\n assertEquals("Hello Universe!", greetings.get(1));\n}\n',normalizedContent:'# activity test environment\n\ntestactivityenvironment is the helper class for unit testing activity implementations. supports calls to activity methods from the tested activities. an example test:\n\nsee full example here.\n\n\n public interface testactivity {\n string activity1(string input);\n }\n\n private static class activityimpl implements testactivity {\n @override\n public string activity1(string input) {\n return activity.gettask().getactivitytype().getname() + "-" + input;\n }\n }\n\n @test\n public void testsuccess() {\n testenvironment.registeractivitiesimplementations(new activityimpl());\n testactivity activity = testenvironment.newactivitystub(testactivity.class);\n string result = activity.activity1("input1");\n assertequals("testactivity::activity1-input1", result);\n }\n\n\n\n\n# workflow test environment\n\ntestworkflowenvironment provides workflow unit testing capabilities.\n\ntesting the workflow code is hard as it might be potentially very long running. 
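The testSignal sample above references testEnv and workflowClient without showing how they are created. A minimal setup sketch, assuming JUnit 4 and the HelloSignal sample's worker registration (the surrounding test-class wiring is an assumption; only the TestWorkflowEnvironment calls come from the client library):

    private TestWorkflowEnvironment testEnv;
    private Worker worker;
    private WorkflowClient workflowClient;

    @Before
    public void setUp() {
        // In-memory Cadence service with automatic time skipping.
        testEnv = TestWorkflowEnvironment.newInstance();
        worker = testEnv.newWorker(HelloSignal.TASK_LIST);
        worker.registerWorkflowImplementationTypes(HelloSignal.GreetingWorkflowImpl.class);
        testEnv.start();
        workflowClient = testEnv.newWorkflowClient();
    }

    @After
    public void tearDown() {
        testEnv.close();
    }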
the included in-memory implementation of the cadence service supports an automatic time skipping. anytime a workflow under the test as well as the unit test code are waiting on a timer (or sleep) the internal service time is automatically advanced to the nearest time that unblocks one of the waiting threads. this way a workflow that runs in production for months is unit tested in milliseconds. here is an example of a test that executes in a few milliseconds instead of over two hours that are needed for the workflow to complete.\n\nsee full example here.\n\npublic class signaledworkflowimpl implements signaledworkflow {\n private string signalinput;\n\n @override\n public string workflow1(string input) {\n workflow.sleep(duration.ofhours(1));\n workflow.await(() -> signalinput != null);\n workflow.sleep(duration.ofhours(1));\n return signalinput + "-" + input;\n }\n\n @override\n public void processsignal(string input) {\n signalinput = input;\n }\n}\n\n@test\npublic void testsignal() throws executionexception, interruptedexception {\n // get a workflow stub using the same task list the worker uses.\n workflowoptions workflowoptions =\n new workflowoptions.builder()\n .settasklist(hellosignal.task_list)\n .setexecutionstarttoclosetimeout(duration.ofdays(30))\n .build();\n greetingworkflow workflow =\n workflowclient.newworkflowstub(greetingworkflow.class, workflowoptions);\n\n // start workflow asynchronously to not use another thread to signal.\n workflowclient.start(workflow::getgreetings);\n\n // after start for getgreeting returns, the workflow is guaranteed to be started.\n // so we can send a signal to it using workflow stub immediately.\n // but just to demonstrate the unit testing of a long running workflow adding a long sleep here.\n testenv.sleep(duration.ofdays(1));\n // this workflow keeps receiving signals until exit is called\n workflow.waitforname("world");\n workflow.waitforname("universe");\n workflow.exit();\n // calling synchronous getgreeting after workflow has started reconnects to the existing\n // workflow and\n // blocks until result is available. note that this behavior assumes that workflowoptions are\n // not configured\n // with workflowidreusepolicy.allowduplicate. 
in that case the call would fail with\n // workflowexecutionalreadystartedexception.\n list greetings = workflow.getgreetings();\n assertequals(2, greetings.size());\n assertequals("hello world!", greetings.get(0));\n assertequals("hello universe!", greetings.get(1));\n}\n',charsets:{}},{title:"Workflow Replay and Shadowing",frontmatter:{layout:"default",title:"Workflow Replay and Shadowing",permalink:"/docs/java-client/workflow-replay-shadowing",readingShow:"top"},regularPath:"/docs/04-java-client/18-workflow-replay-shadowing.html",relativePath:"docs/04-java-client/18-workflow-replay-shadowing.md",key:"v-7a33750a",path:"/docs/java-client/workflow-replay-shadowing/",headers:[{level:2,title:"Workflow Replayer",slug:"workflow-replayer",normalizedTitle:"workflow replayer",charIndex:469},{level:3,title:"Write a Replay Test",slug:"write-a-replay-test",normalizedTitle:"write a replay test",charIndex:824},{level:3,title:"Sample Replay Test",slug:"sample-replay-test",normalizedTitle:"sample replay test",charIndex:2164},{level:2,title:"Workflow Shadower",slug:"workflow-shadower",normalizedTitle:"workflow shadower",charIndex:491},{level:3,title:"Shadow Options",slug:"shadow-options",normalizedTitle:"shadow options",charIndex:3279},{level:3,title:"Local Shadowing Test",slug:"local-shadowing-test",normalizedTitle:"local shadowing test",charIndex:4976},{level:3,title:"Shadowing Worker",slug:"shadowing-worker",normalizedTitle:"shadowing worker",charIndex:6137}],codeSwitcherOptions:{},headersStr:"Workflow Replayer Write a Replay Test Sample Replay Test Workflow Shadower Shadow Options Local Shadowing Test Shadowing Worker",content:"# Workflow Replay and Shadowing\n\nIn the Versioning section, we mentioned that incompatible changes to workflow definition code could cause non-deterministic issues when processing workflow tasks if versioning is not done correctly. However, it may be hard for you to tell whether a particular change is incompatible and whether versioning logic is needed. To help you identify incompatible changes and catch them before production traffic is impacted, we implemented Workflow Replayer and Workflow Shadower.\n\n\n# Workflow Replayer\n\nWorkflow Replayer is a testing component for replaying existing workflow histories against a workflow definition. The replaying logic is the same as the one used for processing workflow tasks, so if there are any incompatible changes in the workflow definition, the replay test will fail.\n\n\n# Write a Replay Test\n\n# Step 1: Prepare workflow histories\n\nReplayer can read workflow history from a local JSON file or fetch it directly from the Cadence server. 
If you would like to use the first method, you can use the following CLI command; otherwise, you can skip to the next step.\n\ncadence --do workflow show --wid --rid --of \n\n\nThe dumped workflow history will be stored, in JSON format, in the file at the path you specified.\n\n# Step 2: Call the replay method\n\nOnce you have the workflow history, or have a connection to the Cadence server for fetching history, call one of the replay methods to start the replay test.\n\n// if workflow history has been loaded into memory\nWorkflowReplayer.replayWorkflowExecution(history, MyWorkflowImpl.class);\n\n// if workflow history is stored in a json file\nWorkflowReplayer.replayWorkflowExecutionFromResource(\"workflowHistory.json\", MyWorkflowImpl.class);\n\n// if workflow history is read from a File\nWorkflowReplayer.replayWorkflowExecution(historyFileObject, MyWorkflowImpl.class);\n\n\n# Step 3: Catch returned exception\n\nIf an exception is thrown from the replay method, it means there is an incompatible change in the workflow definition, and the error message will contain more information regarding where the non-deterministic error happens.\n\n\n# Sample Replay Test\n\nThis sample is also available in our samples repo here.\n\npublic class HelloActivityReplayTest {\n @Test\n public void testReplay() throws Exception {\n WorkflowReplayer.replayWorkflowExecutionFromResource(\n \"HelloActivity.json\", HelloActivity.GreetingWorkflowImpl.class);\n }\n}\n\n\n\n# Workflow Shadower\n\nWorkflow Replayer works well when verifying compatibility against a small number of workflow histories. If there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. Directly fetching histories from the Cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.\n\nWorkflow Shadower is built on top of Workflow Replayer to address this problem. The basic idea of shadowing is: scan workflows based on the filters you defined, fetch the history for each workflow in the scan result from the Cadence server, and run the replay test. It can be run either as a test, to serve local development purposes, or as a workflow in your worker to continuously replay production workflows.\n\n\n# Shadow Options\n\nComplete documentation on shadow options, including default values, accepted values, etc., can be found here. The following sections are just a brief description of each option.\n\n# Scan Filters\n\n * WorkflowQuery: If you are familiar with our advanced visibility query syntax, you can specify a query directly. If specified, all other scan filters must be left empty.\n * WorkflowTypes: A list of workflow type names.\n * WorkflowStatuses: A list of workflow statuses.\n * WorkflowStartTimeFilter: Min and max timestamp for workflow start time.\n * WorkflowSamplingRate: Sampling rate for workflows from the scan result before executing the replay test.\n\n# Shadow Exit Condition\n\n * ExpirationInterval: Shadowing will exit when the specified interval has passed.\n * ShadowCount: Shadowing will exit after this number of workflows has been replayed. Note: a replay may be skipped due to errors like failing to fetch history, history too short, etc. 
Skipped workflows won't be taken into account for ShadowCount.\n\n# Shadow Mode\n\n * Normal: Shadowing will complete after all workflows matching WorkflowQuery (after sampling) have been replayed, or when the exit condition is met.\n * Continuous: A new round of shadowing will be started after all workflows matching WorkflowQuery have been replayed. There will be a 5 min wait period between each round, and currently this wait period is not configurable. Shadowing will complete only when ExitCondition is met. ExitCondition must be specified when using this mode.\n\n# Shadow Concurrency\n\n * Concurrency: Workflow replay concurrency. If not specified, it defaults to 1. For local shadowing, an error will be returned if a value higher than 1 is specified.\n\n\n# Local Shadowing Test\n\nA local shadowing test is similar to the replay test. First, create a workflow shadower with optional shadow and replay options, then register the workflow that needs to be shadowed. Finally, call the Run method to start the shadowing. The method returns when shadowing has finished or when a non-deterministic error is found.\n\nHere's a simple example. The example is also available here.\n\npublic void testShadowing() throws Throwable {\n IWorkflowService service = new WorkflowServiceTChannel(ClientOptions.defaultInstance());\n\n ShadowingOptions options = ShadowingOptions\n .newBuilder()\n .setDomain(DOMAIN)\n .setShadowMode(Mode.Normal)\n .setWorkflowTypes(Lists.newArrayList(\"GreetingWorkflow::getGreeting\"))\n .setWorkflowStatuses(Lists.newArrayList(WorkflowStatus.OPEN, WorkflowStatus.CLOSED))\n .setExitCondition(new ExitCondition().setExpirationIntervalInSeconds(60))\n .build();\n WorkflowShadower shadower = new WorkflowShadower(service, options, TASK_LIST);\n shadower.registerWorkflowImplementationTypes(HelloActivity.GreetingWorkflowImpl.class);\n\n shadower.run();\n}\n\n\n\n# Shadowing Worker\n\nNOTE:\n\n * All shadow workflows are running in one Cadence system domain, and right now, every user domain can only have one shadow workflow at a time.\n * The Cadence server used for scanning and getting workflow history will also be the Cadence server for running your shadow workflow. Currently, there's no way to specify different Cadence servers for hosting the shadowing workflow and for scanning/fetching workflows.\n\nYour worker can also be configured to run in shadow mode to run shadow tests as a workflow. This is useful if there is a large number of workflows that need to be replayed. Using a workflow makes sure the shadowing won't accidentally fail in the middle, and the replay load can be distributed by deploying more shadow mode workers. It can also be incorporated into your deployment process to make sure there are no failed replay checks before deploying your change to production workers.\n\nWhen running in shadow mode, the normal decision worker will be disabled so that it won't update any production workflows. A special shadow activity worker will be started to execute activities for scanning and replaying workflows. The actual shadow workflow logic is controlled by the Cadence server, and your worker is only responsible for scanning and replaying workflows.\n\nReplay succeeded, skipped, and failed metrics will be emitted by your worker when executing the shadow workflow, and you can monitor those metrics to see if there are any incompatible changes.\n\nTo enable shadow mode, initialize a shadowing worker and pass in the shadowing options. Here is an example. 
The example is also available here:\n\nWorkflowClient workflowClient =\n WorkflowClient.newInstance(\n new WorkflowServiceTChannel(ClientOptions.defaultInstance()),\n WorkflowClientOptions.newBuilder().setDomain(DOMAIN).build());\n ShadowingOptions options = ShadowingOptions\n .newBuilder()\n .setDomain(DOMAIN)\n .setShadowMode(Mode.Normal)\n .setWorkflowTypes(Lists.newArrayList(\"GreetingWorkflow::getGreeting\"))\n .setWorkflowStatuses(Lists.newArrayList(WorkflowStatus.OPEN, WorkflowStatus.CLOSED))\n .setExitCondition(new ExitCondition().setExpirationIntervalInSeconds(60))\n .build();\n\n ShadowingWorker shadowingWorker = new ShadowingWorker(\n workflowClient,\n \"HelloActivity\",\n WorkerOptions.defaultInstance(),\n options);\n shadowingWorker.registerWorkflowImplementationTypes(HelloActivity.GreetingWorkflowImpl.class);\n\tshadowingWorker.start();\n\n\nRegistered workflows will be forwarded to the underlying WorkflowReplayer. DataConverter, WorkflowInterceptorChainFactories, ContextPropagators, and Tracer specified in the worker.Options will also be used as ReplayOptions. Since all shadow workflows are running in one system domain, to avoid conflict, the actual task list name used will be domain-tasklist.",normalizedContent:"# workflow replay and shadowing\n\nin the versioning section, we mentioned that incompatible changes to workflow definition code could cause non-deterministic issues when processing workflow tasks if versioning is not done correctly. however, it may be hard for you to tell if a particular change is incompatible or not and whether versioning logic is needed. to help you identify incompatible changes and catch them before production traffic is impacted, we implemented workflow replayer and workflow shadower.\n\n\n# workflow replayer\n\nworkflow replayer is a testing component for replaying existing workflow histories against a workflow definition. the replaying logic is the same as the one used for processing workflow tasks, so if there's any incompatible changes in the workflow definition, the replay test will fail.\n\n\n# write a replay test\n\n# step 1: prepare workflow histories\n\nreplayer can read workflow history from a local json file or fetch it directly from the cadence server. 
if you would like to use the first method, you can use the following cli command, otherwise you can skip to the next step.\n\ncadence --do workflow show --wid --rid --of \n\n\nthe dumped workflow history will be stored in the file at the path you specified in json format.\n\n# step 2: call the replay method\n\nonce you have the workflow history or have the connection to cadence server for fetching history, call one of the four replay methods to start the replay test.\n\n// if workflow history has been loaded into memory\nworkflowreplayer.replayworkflowexecution(history, myworkflowimpl.class);\n\n// if workflow history is stored in a json file\nworkflowreplayer.replayworkflowexecutionfromresource(\"workflowhistory.json\", myworkflowimpl.class);\n\n// if workflow history is read from a file\nworkflowreplayer.replayworkflowexecution(historyfileobject, myworkflowimpl.class);\n\n\n# step 3: catch returned exception\n\nif an exception is returned from the replay method, it means there's a incompatible change in the workflow definition and the error message will contain more information regarding where the non-deterministic error happens.\n\n\n# sample replay test\n\nthis sample is also available in our samples repo at here.\n\npublic class helloactivityreplaytest {\n @test\n public void testreplay() throws exception {\n workflowreplayer.replayworkflowexecutionfromresource(\n \"helloactivity.json\", helloactivity.greetingworkflowimpl.class);\n }\n}\n\n\n\n# workflow shadower\n\nworkflow replayer works well when verifying the compatibility against a small number of workflows histories. if there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. directly fetching histories from cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.\n\nworkflow shadower is built on top of workflow replayer to address this problem. the basic idea of shadowing is: scan workflows based on the filters you defined, fetch history for each workflow in the scan result from cadence server and run the replay test. it can be run either as a test to serve local development purpose or as a workflow in your worker to continuously replay production workflows.\n\n\n# shadow options\n\ncomplete documentation on shadow options which includes default values, accepted values, etc. can be found here. the following sections are just a brief description of each option.\n\n# scan filters\n\n * workflowquery: if you are familiar with our advanced visibility query syntax, you can specify a query directly. if specified, all other scan filters must be left empty.\n * workflowtypes: a list of workflow type names.\n * workflowstatuses: a list of workflow status.\n * workflowstarttimefilter: min and max timestamp for workflow start time.\n * workflowsamplingrate: sampling workflows from the scan result before executing the replay test.\n\n# shadow exit condition\n\n * expirationinterval: shadowing will exit when the specified interval has passed.\n * shadowcount: shadowing will exit after this number of workflow has been replayed. note: replay maybe skipped due to errors like can't fetch history, history too short, etc. 
skipped workflows won't be taken into account for shadowcount.\n\n# shadow mode\n\n * normal: shadowing will complete after all workflows matches workflowquery (after sampling) have been replayed or when exit condition is met.\n * continuous: a new round of shadowing will be started after all workflows matches workflowquery have been replayed. there will be a 5 min wait period between each round, and currently this wait period is not configurable. shadowing will complete only when exitcondition is met. exitcondition must be specified when using this mode.\n\n# shadow concurrency\n\n * concurrency: workflow replay concurrency. if not specified, it will default to 1. for local shadowing, an error will be returned if a value higher than 1 is specified.\n\n\n# local shadowing test\n\nlocal shadowing test is similar to the replay test. first create a workflow shadower with optional shadow and replay options, then register the workflow that needs to be shadowed. finally, call the run method to start the shadowing. the method will return if shadowing has finished or any non-deterministic error is found.\n\nhere's a simple example. the example is also available here.\n\npublic void testshadowing() throws throwable {\n iworkflowservice service = new workflowservicetchannel(clientoptions.defaultinstance());\n\n shadowingoptions options = shadowingoptions\n .newbuilder()\n .setdomain(domain)\n .setshadowmode(mode.normal)\n .setworkflowtypes(lists.newarraylist(\"greetingworkflow::getgreeting\"))\n .setworkflowstatuses(lists.newarraylist(workflowstatus.open, workflowstatus.closed))\n .setexitcondition(new exitcondition().setexpirationintervalinseconds(60))\n .build();\n workflowshadower shadower = new workflowshadower(service, options, task_list);\n shadower.registerworkflowimplementationtypes(helloactivity.greetingworkflowimpl.class);\n\n shadower.run();\n}\n\n\n\n# shadowing worker\n\nnote:\n\n * all shadow workflows are running in one cadence system domain, and right now, every user domain can only have one shadow workflow at a time.\n * the cadence server used for scanning and getting workflow history will also be the cadence server for running your shadow workflow. currently, there's no way to specify different cadence servers for hosting the shadowing workflow and scanning/fetching workflow.\n\nyour worker can also be configured to run in shadow mode to run shadow tests as a workflow. this is useful if there's a number of workflows that need to be replayed. using a workflow can make sure the shadowing won't accidentally fail in the middle and the replay load can be distributed by deploying more shadow mode workers. it can also be incorporated into your deployment process to make sure there's no failed replay checks before deploying your change to production workers.\n\nwhen running in shadow mode, the normal decision worker will be disabled so that it won't update any production workflows. a special shadow activity worker will be started to execute activities for scanning and replaying workflows. the actual shadow workflow logic is controlled by cadence server and your worker is only responsible for scanning and replaying workflows.\n\nreplay succeed, skipped and failed metrics will be emitted by your worker when executing the shadow workflow and you can monitor those metrics to see if there's any incompatible changes.\n\nto enable the shadow mode, you can initialize a shadowing worker and pass in the shadowing options.\n\nto enable the shadowing worker, here is a example. 
the example is also available here:\n\nworkflowclient workflowclient =\n workflowclient.newinstance(\n new workflowservicetchannel(clientoptions.defaultinstance()),\n workflowclientoptions.newbuilder().setdomain(domain).build());\n shadowingoptions options = shadowingoptions\n .newbuilder()\n .setdomain(domain)\n .setshadowmode(mode.normal)\n .setworkflowtypes(lists.newarraylist(\"greetingworkflow::getgreeting\"))\n .setworkflowstatuses(lists.newarraylist(workflowstatus.open, workflowstatus.closed))\n .setexitcondition(new exitcondition().setexpirationintervalinseconds(60))\n .build();\n\n shadowingworker shadowingworker = new shadowingworker(\n workflowclient,\n \"helloactivity\",\n workeroptions.defaultinstance(),\n options);\n shadowingworker.registerworkflowimplementationtypes(helloactivity.greetingworkflowimpl.class);\n\tshadowingworker.start();\n\n\nregistered workflows will be forwarded to the underlying workflowreplayer. dataconverter, workflowinterceptorchainfactories, contextpropagators, and tracer specified in the worker.options will also be used as replayoptions. since all shadow workflows are running in one system domain, to avoid conflict, the actual task list name used will be domain-tasklist.",charsets:{}},{title:"Introduction",frontmatter:{layout:"default",title:"Introduction",permalink:"/docs/java-client",readingShow:"top"},regularPath:"/docs/04-java-client/",relativePath:"docs/04-java-client/index.md",key:"v-c1687e0a",path:"/docs/java-client/",codeSwitcherOptions:{},headersStr:null,content:"# Java client\n\nThe following are important links for the Cadence Java client:\n\n * GitHub project: https://github.com/uber/cadence-java-client\n * Samples: https://github.com/uber/cadence-java-samples\n * JavaDoc documentation: https://www.javadoc.io/doc/com.uber.cadence/cadence-client\n\nAdd cadence-client as a dependency to your pom.xml:\n\n<dependency>\n <groupId>com.uber.cadence</groupId>\n <artifactId>cadence-client</artifactId>\n <version>LATEST.RELEASE.VERSION</version>\n</dependency>\n\n\nor to build.gradle:\n\ndependencies {\n implementation group: 'com.uber.cadence', name: 'cadence-client', version: 'LATEST.RELEASE.VERSION'\n}\n\n\nIf you are using Gradle 6.9 or older, you can use compile group:\n\ndependencies {\n compile group: 'com.uber.cadence', name: 'cadence-client', version: 'LATEST.RELEASE.VERSION'\n}\n\n\nRelease versions are available on the release page",normalizedContent:"# java client\n\nthe following are important links for the cadence java client:\n\n * github project: https://github.com/uber/cadence-java-client\n * samples: https://github.com/uber/cadence-java-samples\n * javadoc documentation: https://www.javadoc.io/doc/com.uber.cadence/cadence-client\n\nadd cadence-client as a dependency to your pom.xml:\n\n<dependency>\n <groupid>com.uber.cadence</groupid>\n <artifactid>cadence-client</artifactid>\n <version>latest.release.version</version>\n</dependency>\n\n\nor to build.gradle:\n\ndependencies {\n implementation group: 'com.uber.cadence', name: 'cadence-client', version: 'latest.release.version'\n}\n\n\nif you are using gradle 6.9 or older, you can use compile group:\n\ndependencies {\n compile group: 'com.uber.cadence', name: 'cadence-client', version: 'latest.release.version'\n}\n\n\nrelease versions are available on the release page",charsets:{}},{title:"Creating workflows",frontmatter:{layout:"default",title:"Creating
workflows",permalink:"/docs/go-client/create-workflows",readingShow:"top"},regularPath:"/docs/05-go-client/02-create-workflows.html",relativePath:"docs/05-go-client/02-create-workflows.md",key:"v-861efabc",path:"/docs/go-client/create-workflows/",headers:[{level:2,title:"Overview",slug:"overview",normalizedTitle:"overview",charIndex:968},{level:2,title:"Declaration",slug:"declaration",normalizedTitle:"declaration",charIndex:1991},{level:2,title:"Implementation",slug:"implementation",normalizedTitle:"implementation",charIndex:934},{level:3,title:"Special Cadence client library functions and types",slug:"special-cadence-client-library-functions-and-types",normalizedTitle:"special cadence client library functions and types",charIndex:4738},{level:3,title:"Failing a workflow",slug:"failing-a-workflow",normalizedTitle:"failing a workflow",charIndex:5529},{level:2,title:"Registration",slug:"registration",normalizedTitle:"registration",charIndex:5664}],codeSwitcherOptions:{},headersStr:"Overview Declaration Implementation Special Cadence client library functions and types Failing a workflow Registration",content:'# Creating workflows\n\nThe is the implementation of the coordination logic. The Cadence programming framework (aka client library) allows you to write the coordination logic as simple procedural code that uses standard Go data modeling. The client library takes care of the communication between the service and the Cadence service, and ensures state persistence between even in case of failures. Furthermore, any particular execution is not tied to a particular machine. Different steps of the coordination logic can end up executing on different instances, with the framework ensuring that the necessary state is recreated on the executing the step.\n\nHowever, in order to facilitate this operational model, both the Cadence programming framework and the managed service impose some requirements and restrictions on the implementation of the coordination logic. The details of these requirements and restrictions are described in the Implementation section below.\n\n\n# Overview\n\nThe sample code below shows a simple implementation of a that executes one . The also passes the sole parameter it receives as part of its initialization as a parameter to the .\n\npackage sample\n\nimport (\n "time"\n\n "go.uber.org/cadence/workflow"\n)\n\nfunc init() {\n workflow.Register(SimpleWorkflow)\n}\n\nfunc SimpleWorkflow(ctx workflow.Context, value string) error {\n ao := workflow.ActivityOptions{\n TaskList: "sampleTaskList",\n ScheduleToCloseTimeout: time.Second * 60,\n ScheduleToStartTimeout: time.Second * 60,\n StartToCloseTimeout: time.Second * 60,\n HeartbeatTimeout: time.Second * 10,\n WaitForCancellation: false,\n }\n ctx = workflow.WithActivityOptions(ctx, ao)\n\n future := workflow.ExecuteActivity(ctx, SimpleActivity, value)\n var result string\n if err := future.Get(ctx, &result); err != nil {\n return err\n }\n workflow.GetLogger(ctx).Info("Done", zap.String("result", result))\n return nil\n}\n\n\n\n# Declaration\n\nIn the Cadence programing model, a is implemented with a function. The function declaration specifies the parameters the accepts as well as any values it might return.\n\nfunc SimpleWorkflow(ctx workflow.Context, value string) error\n\n\nLet’s deconstruct the declaration above:\n\n * The first parameter to the function is ctx workflow.Context. This is a required parameter for all functions and is used by the Cadence client library to pass execution context. 
Virtually all the client library functions that are callable from the functions require this ctx parameter. This context parameter is the same concept as the standard context.Context provided by Go. The only difference between workflow.Context and context.Context is that the Done() function in workflow.Context returns workflow.Channel instead the standard go chan.\n * The second parameter, string, is a custom parameter that can be used to pass data into the on start. A can have one or more such parameters. All parameters to a function must be serializable, which essentially means that params can’t be channels, functions, variadic, or unsafe pointers.\n * Since it only declares error as the return value, this means that the does not return a value. The error return value is used to indicate an error was encountered during execution and the should be terminated.\n\n\n# Implementation\n\nIn order to support the synchronous and sequential programming model for the implementation, there are certain restrictions and requirements on how the implementation must behave in order to guarantee correctness. The requirements are that:\n\n * Execution must be deterministic\n * Execution must be idempotent\n\nA straightforward way to think about these requirements is that the code is as follows:\n\n * code can only read and manipulate local state or state received as return values from Cadence client library functions.\n * code should not affect changes in external systems other than through invocation of .\n * code should interact with time only through the functions provided by the Cadence client library (i.e. workflow.Now(), workflow.Sleep()).\n * code should not create and interact with goroutines directly, it should instead use the functions provided by the Cadence client library (i.e., workflow.Go() instead of go, workflow.Channel instead of chan, workflow.Selector instead of select).\n * code should do all logging via the logger provided by the Cadence client library (i.e., workflow.GetLogger()).\n * code should not iterate over maps using range because the order of map iteration is randomized.\n\nNow that we have laid the ground rules, we can take a look at some of the special functions and types used for writing Cadence and how to implement some common patterns.\n\n\n# Special Cadence client library functions and types\n\nThe Cadence client library provides a number of functions and types as alternatives to some native Go functions and types. Usage of these replacement functions/types is necessary in order to ensure that the code execution is deterministic and repeatable within an execution context.\n\nCoroutine related constructs:\n\n * workflow.Go : This is a replacement for the the go statement.\n * workflow.Channel : This is a replacement for the native chan type. Cadence provides support for both buffered and unbuffered channels.\n * workflow.Selector : This is a replacement for the select statement.\n\nTime related functions:\n\n * workflow.Now() : This is a replacement for time.Now().\n * workflow.Sleep() : This is a replacement for time.Sleep().\n\n\n# Failing a workflow\n\nTo mark a as failed, all that needs to happen is for the function to return an error via the err return value.\n\n\n# Registration\n\nFor some client code to be able to invoke a type, the process needs to be aware of all the implementations it has access to. 
A is registered with the following call:\n\nworkflow.Register(SimpleWorkflow)\n\n\nThis call essentially creates an in-memory mapping inside the process between the fully qualified function name and the implementation. It is safe to call this registration method from an init() function. If the receives for a type it does not know, it will fail that . However, the failure of the will not cause the entire to fail.',normalizedContent:'# creating workflows\n\nthe is the implementation of the coordination logic. the cadence programming framework (aka client library) allows you to write the coordination logic as simple procedural code that uses standard go data modeling. the client library takes care of the communication between the service and the cadence service, and ensures state persistence between even in case of failures. furthermore, any particular execution is not tied to a particular machine. different steps of the coordination logic can end up executing on different instances, with the framework ensuring that the necessary state is recreated on the executing the step.\n\nhowever, in order to facilitate this operational model, both the cadence programming framework and the managed service impose some requirements and restrictions on the implementation of the coordination logic. the details of these requirements and restrictions are described in the implementation section below.\n\n\n# overview\n\nthe sample code below shows a simple implementation of a that executes one . the also passes the sole parameter it receives as part of its initialization as a parameter to the .\n\npackage sample\n\nimport (\n "time"\n\n "go.uber.org/cadence/workflow"\n)\n\nfunc init() {\n workflow.register(simpleworkflow)\n}\n\nfunc simpleworkflow(ctx workflow.context, value string) error {\n ao := workflow.activityoptions{\n tasklist: "sampletasklist",\n scheduletoclosetimeout: time.second * 60,\n scheduletostarttimeout: time.second * 60,\n starttoclosetimeout: time.second * 60,\n heartbeattimeout: time.second * 10,\n waitforcancellation: false,\n }\n ctx = workflow.withactivityoptions(ctx, ao)\n\n future := workflow.executeactivity(ctx, simpleactivity, value)\n var result string\n if err := future.get(ctx, &result); err != nil {\n return err\n }\n workflow.getlogger(ctx).info("done", zap.string("result", result))\n return nil\n}\n\n\n\n# declaration\n\nin the cadence programing model, a is implemented with a function. the function declaration specifies the parameters the accepts as well as any values it might return.\n\nfunc simpleworkflow(ctx workflow.context, value string) error\n\n\nlet’s deconstruct the declaration above:\n\n * the first parameter to the function is ctx workflow.context. this is a required parameter for all functions and is used by the cadence client library to pass execution context. virtually all the client library functions that are callable from the functions require this ctx parameter. this context parameter is the same concept as the standard context.context provided by go. the only difference between workflow.context and context.context is that the done() function in workflow.context returns workflow.channel instead the standard go chan.\n * the second parameter, string, is a custom parameter that can be used to pass data into the on start. a can have one or more such parameters. 
all parameters to a function must be serializable, which essentially means that params can’t be channels, functions, variadic, or unsafe pointers.\n * since it only declares error as the return value, this means that the does not return a value. the error return value is used to indicate an error was encountered during execution and the should be terminated.\n\n\n# implementation\n\nin order to support the synchronous and sequential programming model for the implementation, there are certain restrictions and requirements on how the implementation must behave in order to guarantee correctness. the requirements are that:\n\n * execution must be deterministic\n * execution must be idempotent\n\na straightforward way to think about these requirements is that the code is as follows:\n\n * code can only read and manipulate local state or state received as return values from cadence client library functions.\n * code should not affect changes in external systems other than through invocation of .\n * code should interact with time only through the functions provided by the cadence client library (i.e. workflow.now(), workflow.sleep()).\n * code should not create and interact with goroutines directly, it should instead use the functions provided by the cadence client library (i.e., workflow.go() instead of go, workflow.channel instead of chan, workflow.selector instead of select).\n * code should do all logging via the logger provided by the cadence client library (i.e., workflow.getlogger()).\n * code should not iterate over maps using range because the order of map iteration is randomized.\n\nnow that we have laid the ground rules, we can take a look at some of the special functions and types used for writing cadence and how to implement some common patterns.\n\n\n# special cadence client library functions and types\n\nthe cadence client library provides a number of functions and types as alternatives to some native go functions and types. usage of these replacement functions/types is necessary in order to ensure that the code execution is deterministic and repeatable within an execution context.\n\ncoroutine related constructs:\n\n * workflow.go : this is a replacement for the the go statement.\n * workflow.channel : this is a replacement for the native chan type. cadence provides support for both buffered and unbuffered channels.\n * workflow.selector : this is a replacement for the select statement.\n\ntime related functions:\n\n * workflow.now() : this is a replacement for time.now().\n * workflow.sleep() : this is a replacement for time.sleep().\n\n\n# failing a workflow\n\nto mark a as failed, all that needs to happen is for the function to return an error via the err return value.\n\n\n# registration\n\nfor some client code to be able to invoke a type, the process needs to be aware of all the implementations it has access to. a is registered with the following call:\n\nworkflow.register(simpleworkflow)\n\n\nthis call essentially creates an in-memory mapping inside the process between the fully qualified function name and the implementation. it is safe to call this registration method from an init() function. if the receives for a type it does not know, it will fail that . 
however, the failure of the will not cause the entire to fail.',charsets:{}},{title:"Starting workflows",frontmatter:{layout:"default",title:"Starting workflows",permalink:"/docs/go-client/start-workflows",readingShow:"top"},regularPath:"/docs/05-go-client/02.5-starting-workflows.html",relativePath:"docs/05-go-client/02.5-starting-workflows.md",key:"v-76c4aa02",path:"/docs/go-client/start-workflows/",headers:[{level:2,title:"Starting a workflow",slug:"starting-a-workflow",normalizedTitle:"starting a workflow",charIndex:408},{level:2,title:"Jitter Start and Batches of Workflows",slug:"jitter-start-and-batches-of-workflows",normalizedTitle:"jitter start and batches of workflows",charIndex:1321},{level:2,title:"StartWorkflowOptions",slug:"startworkflowoptions",normalizedTitle:"startworkflowoptions",charIndex:791}],codeSwitcherOptions:{},headersStr:"Starting a workflow Jitter Start and Batches of Workflows StartWorkflowOptions",content:'# Starting workflows\n\nStarting workflows can be done from any service that can send requests to the Cadence server. There is no requirement for workflows to be started from the worker services.\n\nGenerally workflows can either be started using a direct reference to the workflow code, or by referring to the registered name of the function. In Workflow Registration we show how to register the workflows.\n\n\n# Starting a workflow\n\nAfter creating a workflow we can start it. This can be done from the CLI, but typically we want to start workflows programmatically, e.g. from an HTTP handler. We can do this using the client.StartWorkflow function:\n\nimport "go.uber.org/cadence/client"\n\nvar cadenceClient client.Client \n# Initialize cadenceClient\n\ncadenceClient.StartWorkflow(\n ctx,\n client.StartWorkflowOptions{\n TaskList: "workflow-task-list",\n ExecutionStartToCloseTimeout: 10 * time.Second,\n },\n WorkflowFunc,\n workflowArg1,\n workflowArg2,\n workflowArg3,\n ...\n)\n\n\nThis will start the workflow defined in the function WorkflowFunc. Note that for named workflows, WorkflowFunc could be replaced by the name, e.g. "WorkflowFuncName".\n\nworkflowArg1, workflowArg2, workflowArg3 are arguments to the workflow, as specified in WorkflowFunc. Note that the arguments need to be serializable.\n\n\n# Jitter Start and Batches of Workflows\n\nBelow we list all the StartWorkflowOptions; a particularly useful option is JitterStart.\n\nStarting many workflows at the same time will have Cadence trying to schedule all the workflows immediately. This can result in overloading Cadence and the database backing Cadence, as well as the workers processing the workflows.\n\nThis is especially bad when workflow starts come in batches, such as an end of month load. These sudden loads can lead to both Cadence and the workers needing to immediately scale up. Scaling up often takes some time, causing queues in Cadence, delaying the execution of all workflows, and potentially causing workflows to time out.\n\nTo solve this we can start our workflows with JitterStart. JitterStart will start the workflow at a random point between now and now + JitterStart, so if we e.g. start 1000 workflows at 12:00 AM with a JitterStart of 6 hours, the workflows will be randomly started between 12:00 AM and 6:00 AM.\n\nThis makes the sudden load of 1000 workflows much more manageable.\n\nFor many batch-like workloads, a random delay is completely acceptable as the batch just needs to be processed, e.g. 
before the end of the day.\n\nAdding a JitterStart of 6 hours in the example above is as simple as adding\n\nJitterStart: 6 * time.Hour,\n\n\nto the options like so,\n\nimport "go.uber.org/cadence/client"\n\nvar cadenceClient client.Client\n# Initialize cadenceClient\n\ncadenceClient.StartWorkflow(\n ctx,\n client.StartWorkflowOptions{\n TaskList: "workflow-task-list",\n ExecutionStartToCloseTimeout: 10 * time.Second,\n JitterStart: 6 * time.Hour, // Added JitterStart\n },\n WorkflowFunc,\n workflowArg1,\n workflowArg2,\n workflowArg3,\n ...\n)\n\n\nnow the workflow will start at a random point between now and six hours from now.\n\n\n# StartWorkflowOptions\n\nThe client.StartWorkflowOptions specifies the behavior of this particular workflow. The invocation above only specifies the two mandatory options; TaskList and ExecutionStartToCloseTimeout, all the options are described in the inline documentation:\n\ntype StartWorkflowOptions struct {\n\t// ID - The business identifier of the workflow execution.\n\t// Optional: defaulted to a uuid.\n\tID string\n\n\t// TaskList - The decisions of the workflow are scheduled on this queue.\n\t// This is also the default task list on which activities are scheduled. The workflow author can choose\n\t// to override this using activity options.\n\t// Mandatory: No default.\n\tTaskList string\n\n\t// ExecutionStartToCloseTimeout - The timeout for duration of workflow execution.\n\t// The resolution is seconds.\n\t// Mandatory: No default.\n\tExecutionStartToCloseTimeout time.Duration\n\n\t// DecisionTaskStartToCloseTimeout - The timeout for processing decision task from the time the worker\n\t// pulled this task. If a decision task is lost, it is retried after this timeout.\n\t// The resolution is seconds.\n\t// Optional: defaulted to 10 secs.\n\tDecisionTaskStartToCloseTimeout time.Duration\n\n\t// WorkflowIDReusePolicy - Whether server allow reuse of workflow ID, can be useful\n\t// for dedup logic if set to WorkflowIdReusePolicyRejectDuplicate.\n\t// Optional: defaulted to WorkflowIDReusePolicyAllowDuplicateFailedOnly.\n\tWorkflowIDReusePolicy WorkflowIDReusePolicy\n\n\t// RetryPolicy - Optional retry policy for workflow. If a retry policy is specified, in case of workflow failure\n\t// server will start new workflow execution if needed based on the retry policy.\n\tRetryPolicy *RetryPolicy\n\n\t// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run\n\t// as a cron based on the schedule. The scheduling will be based on UTC time. Schedule for next run only happen\n\t// after the current run is completed/failed/timeout. If a RetryPolicy is also supplied, and the workflow failed\n\t// or timeout, the workflow will be retried based on the retry policy. While the workflow is retrying, it won\'t\n\t// schedule its next run. If next schedule is due while workflow is running (or retrying), then it will skip that\n\t// schedule. 
Cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).\n\t// The cron spec is as following:\n\t// ┌───────────── minute (0 - 59)\n\t// │ ┌───────────── hour (0 - 23)\n\t// │ │ ┌───────────── day of the month (1 - 31)\n\t// │ │ │ ┌───────────── month (1 - 12)\n\t// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n\t// │ │ │ │ │\n\t// │ │ │ │ │\n\t// * * * * *\n\tCronSchedule string\n\n\t// Memo - Optional non-indexed info that will be shown in list workflow.\n\tMemo map[string]interface{}\n\n\t// SearchAttributes - Optional indexed info that can be used in query of List/Scan/Count workflow APIs (only\n\t// supported when Cadence server is using ElasticSearch). The key and value type must be registered on Cadence server side.\n\t// Use GetSearchAttributes API to get valid key and corresponding value type.\n\tSearchAttributes map[string]interface{}\n\n\t// DelayStartSeconds - Seconds to delay the workflow start\n\t// The resolution is seconds.\n\t// Optional: defaulted to 0 seconds\n\tDelayStart time.Duration\n\n\t// JitterStart - Seconds to jitter the workflow start. For example, if set to 10, the workflow will start some time between 0-10 seconds.\n\t// This works with CronSchedule and with DelayStart.\n\t// Optional: defaulted to 0 seconds\n\tJitterStart time.Duration\n}\n',normalizedContent:'# starting workflows\n\nstarting workflows can be done from any service that can send requests to the cadence server. there is no requirement for workflows to be started from the worker services.\n\ngenerally workflows can either be started using a direct reference to the workflow code, or by referring to the registered name of the function. in workflow registration we show how to register the workflows.\n\n\n# starting a workflow\n\nafter creating a workflow we can start it. this can be done from the cli, but typically we want to start workflow programmatically e.g. from an http handler. we can do this using the client.startworkflow function:\n\nimport "go.uber.org/cadence/client"\n\nvar cadenceclient client.client \n# initialize cadenceclient\n\ncadenceclient.startworkflow(\n ctx,\n client.startworkflowoptions{\n tasklist: "workflow-task-list",\n executionstarttoclosetimeout: 10 * time.second,\n },\n workflowfunc,\n workflowarg1,\n workflowarg2,\n workflowarg3,\n ...\n)\n\n\nthe will start the workflow defined in the function workflowfunc, note that for named workflows workflowfunc could be replaced by the name e.g. "workflowfuncname".\n\nworkflowarg1, workflowarg2, workflowarg3 are arguments to the workflow, as specified in workflowfunc, note that the arguments needs to be serializable.\n\n\n# jitter start and batches of workflows\n\nbelow we list all the startworkflowoptions, however a particularly useful option is jitterstart.\n\nstarting many workflows at the same time will have cadence trying to schedule all the workflows immediately. this can result in overloading cadence and the database backing cadence, as well as the workers processing the workflows.\n\nthis is especially bad when the workflow starts comes in batches, such as an end of month load. these sudden loads can lead to both cadence and the workers needing to immediately scale up. scaling up often takes some time, causing queues in cadence, delaying the execution of all workflows, potentially causing workflows to timeout.\n\nto solve this we can start our workflows with jitterstart. 
jitterstart will start the workflow at a random point between now and now + jitterstart, so if we e.g. start 1000 workflows at 12:00 am with a jitterstart of 6 hours, the workflows will be randomly started between 12:00 am and 6:00 pm.\n\nthis makes the sudden load of 1000 workflows much more manageable.\n\nfor many batch-like workloads a random delay is completely acceptable as the batch just needs to be processed e.g. before the end of the day.\n\nadding a jitterstart of 6 hours in the example above is as simple as adding\n\njitterstart: 6 * time.hour,\n\n\nto the options like so,\n\nimport "go.uber.org/cadence/client"\n\nvar cadenceclient client.client\n# initialize cadenceclient\n\ncadenceclient.startworkflow(\n ctx,\n client.startworkflowoptions{\n tasklist: "workflow-task-list",\n executionstarttoclosetimeout: 10 * time.second,\n jitterstart: 6 * time.hour, // added jitterstart\n },\n workflowfunc,\n workflowarg1,\n workflowarg2,\n workflowarg3,\n ...\n)\n\n\nnow the workflow will start at a random point between now and six hours from now.\n\n\n# startworkflowoptions\n\nthe client.startworkflowoptions specifies the behavior of this particular workflow. the invocation above only specifies the two mandatory options; tasklist and executionstarttoclosetimeout, all the options are described in the inline documentation:\n\ntype startworkflowoptions struct {\n\t// id - the business identifier of the workflow execution.\n\t// optional: defaulted to a uuid.\n\tid string\n\n\t// tasklist - the decisions of the workflow are scheduled on this queue.\n\t// this is also the default task list on which activities are scheduled. the workflow author can choose\n\t// to override this using activity options.\n\t// mandatory: no default.\n\ttasklist string\n\n\t// executionstarttoclosetimeout - the timeout for duration of workflow execution.\n\t// the resolution is seconds.\n\t// mandatory: no default.\n\texecutionstarttoclosetimeout time.duration\n\n\t// decisiontaskstarttoclosetimeout - the timeout for processing decision task from the time the worker\n\t// pulled this task. if a decision task is lost, it is retried after this timeout.\n\t// the resolution is seconds.\n\t// optional: defaulted to 10 secs.\n\tdecisiontaskstarttoclosetimeout time.duration\n\n\t// workflowidreusepolicy - whether server allow reuse of workflow id, can be useful\n\t// for dedup logic if set to workflowidreusepolicyrejectduplicate.\n\t// optional: defaulted to workflowidreusepolicyallowduplicatefailedonly.\n\tworkflowidreusepolicy workflowidreusepolicy\n\n\t// retrypolicy - optional retry policy for workflow. if a retry policy is specified, in case of workflow failure\n\t// server will start new workflow execution if needed based on the retry policy.\n\tretrypolicy *retrypolicy\n\n\t// cronschedule - optional cron schedule for workflow. if a cron schedule is specified, the workflow will run\n\t// as a cron based on the schedule. the scheduling will be based on utc time. schedule for next run only happen\n\t// after the current run is completed/failed/timeout. if a retrypolicy is also supplied, and the workflow failed\n\t// or timeout, the workflow will be retried based on the retry policy. while the workflow is retrying, it won\'t\n\t// schedule its next run. if next schedule is due while workflow is running (or retrying), then it will skip that\n\t// schedule. 
cron workflow will not stop until it is terminated or cancelled (by returning cadence.cancelederror).\n\t// the cron spec is as following:\n\t// ┌───────────── minute (0 - 59)\n\t// │ ┌───────────── hour (0 - 23)\n\t// │ │ ┌───────────── day of the month (1 - 31)\n\t// │ │ │ ┌───────────── month (1 - 12)\n\t// │ │ │ │ ┌───────────── day of the week (0 - 6) (sunday to saturday)\n\t// │ │ │ │ │\n\t// │ │ │ │ │\n\t// * * * * *\n\tcronschedule string\n\n\t// memo - optional non-indexed info that will be shown in list workflow.\n\tmemo map[string]interface{}\n\n\t// searchattributes - optional indexed info that can be used in query of list/scan/count workflow apis (only\n\t// supported when cadence server is using elasticsearch). the key and value type must be registered on cadence server side.\n\t// use getsearchattributes api to get valid key and corresponding value type.\n\tsearchattributes map[string]interface{}\n\n\t// delaystartseconds - seconds to delay the workflow start\n\t// the resolution is seconds.\n\t// optional: defaulted to 0 seconds\n\tdelaystart time.duration\n\n\t// jitterstart - seconds to jitter the workflow start. for example, if set to 10, the workflow will start some time between 0-10 seconds.\n\t// this works with cronschedule and with delaystart.\n\t// optional: defaulted to 0 seconds\n\tjitterstart time.duration\n}\n',charsets:{}},{title:"Worker service",frontmatter:{layout:"default",title:"Worker service",permalink:"/docs/go-client/workers",readingShow:"top"},regularPath:"/docs/05-go-client/01-workers.html",relativePath:"docs/05-go-client/01-workers.md",key:"v-e5936714",path:"/docs/go-client/workers/",codeSwitcherOptions:{},headersStr:null,content:'# Worker service\n\nA worker or worker service is a service that hosts the workflow and activity implementations. The worker polls the Cadence service for tasks, performs those tasks, and communicates task execution results back to the Cadence service. Worker services are developed, deployed, and operated by Cadence customers.\n\nYou can run a Cadence worker in a new or an existing service. 
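The service makes its implementations available by registering them with the client library before the worker starts; a minimal sketch of that registration step (MyWorkflow and MyActivity are hypothetical placeholders for your own functions, and registration is covered in detail in Activity overview):

import (
    "context"

    "go.uber.org/cadence/activity"
    "go.uber.org/cadence/workflow"
)

func MyWorkflow(ctx workflow.Context) error { return nil } // hypothetical workflow implementation
func MyActivity(ctx context.Context) error { return nil }  // hypothetical activity implementation

func init() {
    // Link the implementations into this worker process so the worker
    // can execute them when it picks up the corresponding tasks.
    workflow.Register(MyWorkflow)
    activity.Register(MyActivity)
}
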
Use the framework APIs to start the Cadence worker and link in all activity and workflow implementations that you require the service to execute.\n\nThe following is an example worker service utilising tchannel, one of the two transport protocols supported by Cadence.\n\npackage main\n\nimport (\n\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/worker"\n\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/tchannel"\n)\n\nvar HostPort = "127.0.0.1:7933"\nvar Domain = "SimpleDomain"\nvar TaskListName = "SimpleWorker"\nvar ClientName = "SimpleWorker"\nvar CadenceService = "cadence-frontend"\n\nfunc main() {\n startWorker(buildLogger(), buildCadenceClient())\n}\n\nfunc buildLogger() *zap.Logger {\n config := zap.NewDevelopmentConfig()\n config.Level.SetLevel(zapcore.InfoLevel)\n\n var err error\n logger, err := config.Build()\n if err != nil {\n panic("Failed to setup logger")\n }\n\n return logger\n}\n\nfunc buildCadenceClient() workflowserviceclient.Interface {\n ch, err := tchannel.NewChannelTransport(tchannel.ServiceName(ClientName))\n if err != nil {\n panic("Failed to setup tchannel")\n }\n dispatcher := yarpc.NewDispatcher(yarpc.Config{\n Name: ClientName,\n Outbounds: yarpc.Outbounds{\n CadenceService: {Unary: ch.NewSingleOutbound(HostPort)},\n },\n })\n if err := dispatcher.Start(); err != nil {\n panic("Failed to start dispatcher")\n }\n\n return workflowserviceclient.New(dispatcher.ClientConfig(CadenceService))\n}\n\nfunc startWorker(logger *zap.Logger, service workflowserviceclient.Interface) {\n // TaskListName identifies a set of client workflows, activities, and workers.\n // It could be your group or client or application name.\n workerOptions := worker.Options{\n Logger: logger,\n MetricsScope: tally.NewTestScope(TaskListName, map[string]string{}),\n }\n\n worker := worker.New(\n service,\n Domain,\n TaskListName,\n workerOptions)\n err := worker.Start()\n if err != nil {\n panic("Failed to start worker")\n }\n\n logger.Info("Started Worker.", zap.String("worker", TaskListName))\n}\n\n\nThe other supported transport protocol is gRPC. 
A worker service using gRPC can be set up in a similar fashion, but the buildCadenceClient function will need the following alterations, and some of the imported packages need to change.\n\n\nimport (\n\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n)\n\n.\n.\n.\n\nfunc buildCadenceClient() workflowserviceclient.Interface {\n\n dispatcher := yarpc.NewDispatcher(yarpc.Config{\n Name: ClientName,\n Outbounds: yarpc.Outbounds{\n CadenceService: {Unary: grpc.NewTransport().NewSingleOutbound(HostPort)},\n },\n })\n if err := dispatcher.Start(); err != nil {\n panic("Failed to start dispatcher")\n }\n\n clientConfig := dispatcher.ClientConfig(CadenceService)\n\n return compatibility.NewThrift2ProtoAdapter(\n apiv1.NewDomainAPIYARPCClient(clientConfig),\n apiv1.NewWorkflowAPIYARPCClient(clientConfig),\n apiv1.NewWorkerAPIYARPCClient(clientConfig),\n apiv1.NewVisibilityAPIYARPCClient(clientConfig),\n )\n}\n\n\nNote also that the HostPort variable must be changed to target the gRPC listener port of the Cadence cluster (typically, 7833).\n\nFinally, gRPC can also support TLS connections between Go clients and the Cadence server. This requires the following alterations to the imported packages, and the buildCadenceClient function. Note that this also requires you replace "path/to/cert/file" in the function with a path to a valid certificate file matching the TLS configuration of the Cadence server.\n\n\nimport (\n\n "fmt"\n\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n "go.uber.org/yarpc/peer"\n "go.uber.org/yarpc/peer/hostport"\n\n "crypto/tls"\n "crypto/x509"\n "io/ioutil"\n\n "google.golang.org/grpc/credentials"\n)\n\n.\n.\n.\n\nfunc buildCadenceClient() workflowserviceclient.Interface {\n grpcTransport := grpc.NewTransport()\n var dialOptions []grpc.DialOption\n \n caCert, err := ioutil.ReadFile("/path/to/cert/file")\n if err != nil {\n panic(fmt.Sprintf("Failed to load server CA certificate: %v", err))\n }\n \n caCertPool := x509.NewCertPool()\n if !caCertPool.AppendCertsFromPEM(caCert) {\n panic("Failed to add server CA\'s certificate")\n }\n \n tlsConfig := tls.Config{\n RootCAs: caCertPool,\n }\n \n creds := credentials.NewTLS(&tlsConfig)\n dialOptions = append(dialOptions, grpc.DialerCredentials(creds))\n \n dialer := grpcTransport.NewDialer(dialOptions...)\n outbound := grpcTransport.NewOutbound(\n peer.NewSingle(hostport.PeerIdentifier(HostPort), dialer),\n )\n \n dispatcher := yarpc.NewDispatcher(yarpc.Config{\n Name: ClientName,\n Outbounds: yarpc.Outbounds{\n CadenceService: {Unary: outbound},\n },\n })\n if err := dispatcher.Start(); err != nil {\n panic("Failed to start dispatcher")\n }\n \n clientConfig := dispatcher.ClientConfig(CadenceService)\n \n return compatibility.NewThrift2ProtoAdapter(\n apiv1.NewDomainAPIYARPCClient(clientConfig),\n apiv1.NewWorkflowAPIYARPCClient(clientConfig),\n apiv1.NewWorkerAPIYARPCClient(clientConfig),\n 
apiv1.NewVisibilityAPIYARPCClient(clientConfig),\n )\n}\n',normalizedContent:'# worker service\n\na or service is a service that hosts the and implementations. the polls the cadence service for , performs those , and communicates execution results back to the cadence service. services are developed, deployed, and operated by cadence customers.\n\nyou can run a cadence in a new or an existing service. use the framework apis to start the cadence and link in all and implementations that you require the service to execute.\n\nthe following is an example worker service utilising tchannel, one of the two transport protocols supported by cadence.\n\npackage main\n\nimport (\n\n "go.uber.org/cadence/.gen/go/cadence"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/worker"\n\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/api/transport"\n "go.uber.org/yarpc/transport/tchannel"\n)\n\nvar hostport = "127.0.0.1:7933"\nvar domain = "simpledomain"\nvar tasklistname = "simpleworker"\nvar clientname = "simpleworker"\nvar cadenceservice = "cadence-frontend"\n\nfunc main() {\n startworker(buildlogger(), buildcadenceclient())\n}\n\nfunc buildlogger() *zap.logger {\n config := zap.newdevelopmentconfig()\n config.level.setlevel(zapcore.infolevel)\n\n var err error\n logger, err := config.build()\n if err != nil {\n panic("failed to setup logger")\n }\n\n return logger\n}\n\nfunc buildcadenceclient() workflowserviceclient.interface {\n ch, err := tchannel.newchanneltransport(tchannel.servicename(clientname))\n if err != nil {\n panic("failed to setup tchannel")\n }\n dispatcher := yarpc.newdispatcher(yarpc.config{\n name: clientname,\n outbounds: yarpc.outbounds{\n cadenceservice: {unary: ch.newsingleoutbound(hostport)},\n },\n })\n if err := dispatcher.start(); err != nil {\n panic("failed to start dispatcher")\n }\n\n return workflowserviceclient.new(dispatcher.clientconfig(cadenceservice))\n}\n\nfunc startworker(logger *zap.logger, service workflowserviceclient.interface) {\n // tasklistname identifies set of client workflows, activities, and workers.\n // it could be your group or client or application name.\n workeroptions := worker.options{\n logger: logger,\n metricsscope: tally.newtestscope(tasklistname, map[string]string{}),\n }\n\n worker := worker.new(\n service,\n domain,\n tasklistname,\n workeroptions)\n err := worker.start()\n if err != nil {\n panic("failed to start worker")\n }\n\n logger.info("started worker.", zap.string("worker", tasklistname))\n}\n\n\nthe other supported transport protocol is grpc. 
a worker service using grpc can be set up in similar fashion, but the buildcadenceclient function will need the following alterations, and some of the imported packages need to change.\n\n\nimport (\n\n "go.uber.org/cadence/.gen/go/cadence"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n)\n\n.\n.\n.\n\nfunc buildcadenceclient() workflowserviceclient.interface {\n\n dispatcher := yarpc.newdispatcher(yarpc.config{\n name: clientname,\n outbounds: yarpc.outbounds{\n cadenceservice: {unary: grpc.newtransport().newsingleoutbound(hostport)},\n },\n })\n if err := dispatcher.start(); err != nil {\n panic("failed to start dispatcher")\n }\n\n clientconfig := dispatcher.clientconfig(cadenceservice)\n\n return compatibility.newthrift2protoadapter(\n apiv1.newdomainapiyarpcclient(clientconfig),\n apiv1.newworkflowapiyarpcclient(clientconfig),\n apiv1.newworkerapiyarpcclient(clientconfig),\n apiv1.newvisibilityapiyarpcclient(clientconfig),\n )\n}\n\n\nnote also that the hostport variable must be changed to target the grpc listener port of the cadence cluster (typically, 7833).\n\nfinally, grpc can also support tls connections between go clients and the cadence server. this requires the following alterations to the imported packages, and the buildcadenceclient function. note that this also requires you replace "path/to/cert/file" in the function with a path to a valid certificate file matching the tls configuration of the cadence server.\n\n\nimport (\n\n "fmt"\n\n "go.uber.org/cadence/.gen/go/cadence"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n "go.uber.org/yarpc/peer"\n "go.uber.org/yarpc/peer/hostport"\n\n "crypto/tls"\n "crypto/x509"\n "io/ioutil"\n\n "google.golang.org/grpc/credentials"\n)\n\n.\n.\n.\n\nfunc buildcadenceclient() workflowserviceclient.interface {\n grpctransport := grpc.newtransport()\n var dialoptions []grpc.dialoption\n \n cacert, err := ioutil.readfile("/path/to/cert/file")\n if err != nil {\n fmt.printf("failed to load server ca certificate: %v", zap.error(err))\n }\n \n cacertpool := x509.newcertpool()\n if !cacertpool.appendcertsfrompem(cacert) {\n fmt.errorf("failed to add server ca\'s certificate")\n }\n \n tlsconfig := tls.config{\n rootcas: cacertpool,\n }\n \n creds := credentials.newtls(&tlsconfig)\n dialoptions = append(dialoptions, grpc.dialercredentials(creds))\n \n dialer := grpctransport.newdialer(dialoptions...)\n outbound := grpctransport.newoutbound(\n peer.newsingle(hostport.peeridentifier(hostport), dialer)\n )\n \n dispatcher := yarpc.newdispatcher(yarpc.config{\n name: clientname,\n outbounds: yarpc.outbounds{\n cadenceservice: {unary: outbound},\n },\n })\n if err := dispatcher.start(); err != nil {\n panic("failed to start dispatcher")\n }\n \n clientconfig := dispatcher.clientconfig(cadenceservice)\n \n return compatibility.newthrift2protoadapter(\n apiv1.newdomainapiyarpcclient(clientconfig),\n apiv1.newworkflowapiyarpcclient(clientconfig),\n apiv1.newworkerapiyarpcclient(clientconfig),\n 
apiv1.newvisibilityapiyarpcclient(clientconfig),\n )\n}\n',charsets:{}},{title:"Executing activities",frontmatter:{layout:"default",title:"Executing activities",permalink:"/docs/go-client/execute-activity",readingShow:"top"},regularPath:"/docs/05-go-client/04-execute-activity.html",relativePath:"docs/05-go-client/04-execute-activity.md",key:"v-caeda73c",path:"/docs/go-client/execute-activity/",headers:[{level:2,title:"Activity options",slug:"activity-options",normalizedTitle:"activity options",charIndex:796},{level:2,title:"Activity timeouts",slug:"activity-timeouts",normalizedTitle:"activity timeouts",charIndex:1282},{level:2,title:"ExecuteActivity call",slug:"executeactivity-call",normalizedTitle:"executeactivity call",charIndex:2346}],codeSwitcherOptions:{},headersStr:"Activity options Activity timeouts ExecuteActivity call",content:'# Executing activities\n\nThe primary responsibility of a workflow implementation is to schedule activities for execution. The most straightforward way to do this is via the library method workflow.ExecuteActivity. The following sample code demonstrates making this call:\n\nao := cadence.ActivityOptions{\n TaskList: "sampleTaskList",\n ScheduleToCloseTimeout: time.Second * 60,\n ScheduleToStartTimeout: time.Second * 60,\n StartToCloseTimeout: time.Second * 60,\n HeartbeatTimeout: time.Second * 10,\n WaitForCancellation: false,\n}\nctx = cadence.WithActivityOptions(ctx, ao)\n\nfuture := workflow.ExecuteActivity(ctx, SimpleActivity, value)\nvar result string\nif err := future.Get(ctx, &result); err != nil {\n return err\n}\n\n\nLet\'s take a look at each component of this call.\n\n\n# Activity options\n\nBefore calling workflow.ExecuteActivity(), you must configure ActivityOptions for the invocation. These options customize various execution timeouts, and are passed in by creating a child context from the initial context and overwriting the desired values. The child context is then passed into the workflow.ExecuteActivity() call. If multiple activities share the same option values, then the same context instance can be used when calling workflow.ExecuteActivity().\n\n\n# Activity timeouts\n\nThere can be various kinds of timeouts associated with an activity. Cadence guarantees that activities are executed at most once, so an activity either succeeds or fails with one of the following timeouts:\n\nTIMEOUT DESCRIPTION\nStartToCloseTimeout Maximum time that a worker can take to process a task after\n it has received the task.\nScheduleToStartTimeout Time a task can wait to be picked up by an activity worker\n after a workflow schedules it. If there are no workers available to process this task\n for the specified duration, the task will time out.\nScheduleToCloseTimeout Time a task can take to complete after it is scheduled by a\n workflow. This is usually greater than the sum of StartToClose and\n ScheduleToStart timeouts.\nHeartbeatTimeout If a task doesn\'t heartbeat to the Cadence service for this\n duration, it will be considered to have failed. This is\n useful for long-running tasks.\n\n\n# ExecuteActivity call\n\nThe first parameter in the call is the required cadence.Context object. This type is a copy of context.Context with the Done() method returning cadence.Channel instead of the native Go chan.\n\nThe second parameter is the function that we registered as an activity function. This parameter can also be a string representing the fully qualified name of the function. The benefit of passing in the actual function object is that the framework can validate parameters.\n\nThe remaining parameters are passed to the activity as part of the call. 
In our example, we have a single parameter: value. This list of parameters must match the list of parameters declared by the function. The Cadence client library will validate this.\n\nThe method call returns immediately and returns a cadence.Future. This allows you to execute more code without having to wait for the scheduled to complete.\n\nWhen you are ready to process the results of the , call the Get() method on the future object returned. The parameters to this method are the ctx object we passed to the workflow.ExecuteActivity() call and an output parameter that will receive the output of the . The type of the output parameter must match the type of the return value declared by the function. The Get() method will block until the completes and results are available.\n\nYou can retrieve the result value returned by workflow.ExecuteActivity() from the future and use it like any normal result from a synchronous function call. The following sample code demonstrates how you can use the result if it is a string value:\n\nvar result string\nif err := future.Get(ctx1, &result); err != nil {\n return err\n}\n\nswitch result {\ncase "apple":\n // Do something.\ncase "banana":\n // Do something.\ndefault:\n return err\n}\n\n\nIn this example, we called the Get() method on the returned future immediately after workflow.ExecuteActivity(). However, this is not necessary. If you want to execute multiple in parallel, you can repeatedly call workflow.ExecuteActivity(), store the returned futures, and then wait for all to complete by calling the Get() methods of the future at a later time.\n\nTo implement more complex wait conditions on returned future objects, use the cadence.Selector class.',normalizedContent:'# executing activities\n\nthe primary responsibility of a implementation is to schedule for execution. the most straightforward way to do this is via the library method workflow.executeactivity. the following sample code demonstrates making this call:\n\nao := cadence.activityoptions{\n tasklist: "sampletasklist",\n scheduletoclosetimeout: time.second * 60,\n scheduletostarttimeout: time.second * 60,\n starttoclosetimeout: time.second * 60,\n heartbeattimeout: time.second * 10,\n waitforcancellation: false,\n}\nctx = cadence.withactivityoptions(ctx, ao)\n\nfuture := workflow.executeactivity(ctx, simpleactivity, value)\nvar result string\nif err := future.get(ctx, &result); err != nil {\n return err\n}\n\n\nlet\'s take a look at each component of this call.\n\n\n# activity options\n\nbefore calling workflow.executeactivity(), you must configure activityoptions for the invocation. these options customize various execution timeouts, and are passed in by creating a child context from the initial context and overwriting the desired values. the child context is then passed into the workflow.executeactivity() call. if multiple are sharing the same option values, then the same context instance can be used when calling workflow.executeactivity().\n\n\n# activity timeouts\n\nthere can be various kinds of timeouts associated with an . cadence guarantees that are executed at most once, so an either succeeds or fails with one of the following timeouts:\n\ntimeout description\nstarttoclosetimeout maximum time that a worker can take to process a task after\n it has received the task.\nscheduletostarttimeout time a task can wait to be picked up by an after a schedules\n it. 
if there are no workers available to process this task\n for the specified duration, the task will time out.\nscheduletoclosetimeout time a task can take to complete after it is scheduled by a\n . this is usually greater than the sum of starttoclose and\n scheduletostart timeouts.\nheartbeattimeout if a task doesn\'t heartbeat to the cadence service for this\n duration, it will be considered to have failed. this is\n useful for long-running tasks.\n\n\n# executeactivity call\n\nthe first parameter in the call is the required cadence.context object. this type is a copy of context.context with the done() method returning cadence.channel instead of the native go chan.\n\nthe second parameter is the function that we registered as an function. this parameter can also be a string representing the fully qualified name of the function. the benefit of passing in the actual function object is that the framework can validate parameters.\n\nthe remaining parameters are passed to the as part of the call. in our example, we have a single parameter: value. this list of parameters must match the list of parameters declared by the function. the cadence client library will validate this.\n\nthe method call returns immediately and returns a cadence.future. this allows you to execute more code without having to wait for the scheduled to complete.\n\nwhen you are ready to process the results of the , call the get() method on the future object returned. the parameters to this method are the ctx object we passed to the workflow.executeactivity() call and an output parameter that will receive the output of the . the type of the output parameter must match the type of the return value declared by the function. the get() method will block until the completes and results are available.\n\nyou can retrieve the result value returned by workflow.executeactivity() from the future and use it like any normal result from a synchronous function call. the following sample code demonstrates how you can use the result if it is a string value:\n\nvar result string\nif err := future.get(ctx1, &result); err != nil {\n return err\n}\n\nswitch result {\ncase "apple":\n // do something.\ncase "banana":\n // do something.\ndefault:\n return err\n}\n\n\nin this example, we called the get() method on the returned future immediately after workflow.executeactivity(). however, this is not necessary. 
if you want to execute multiple in parallel, you can repeatedly call workflow.executeactivity(), store the returned futures, and then wait for all to complete by calling the get() methods of the future at a later time.\n\nto implement more complex wait conditions on returned future objects, use the cadence.selector class.',charsets:{}},{title:"Activity overview",frontmatter:{layout:"default",title:"Activity overview",permalink:"/docs/go-client/activities",readingShow:"top"},regularPath:"/docs/05-go-client/03-activities.html",relativePath:"docs/05-go-client/03-activities.md",key:"v-43760982",path:"/docs/go-client/activities/",headers:[{level:2,title:"Overview",slug:"overview",normalizedTitle:"overview",charIndex:1160},{level:3,title:"Declaration",slug:"declaration",normalizedTitle:"declaration",charIndex:1849},{level:3,title:"Implementation",slug:"implementation",normalizedTitle:"implementation",charIndex:2975},{level:3,title:"Registration",slug:"registration",normalizedTitle:"registration",charIndex:5198},{level:2,title:"Failing an Activity",slug:"failing-an-activity",normalizedTitle:"failing an activity",charIndex:5603}],codeSwitcherOptions:{},headersStr:"Overview Declaration Implementation Registration Failing an Activity",content:'# Activity overview\n\nAn is the implementation of a particular in the business logic.\n\nare implemented as functions. Data can be passed directly to an via function parameters. The parameters can be either basic types or structs, with the only requirement being that the parameters must be serializable. Though it is not required, we recommend that the first parameter of an function is of type context.Context, in order to allow the to interact with other framework methods. The function must return an error value, and can optionally return a result value. The result value can be either a basic type or a struct with the only requirement being that it is serializable.\n\nThe values passed to through invocation parameters or returned through the result value are recorded in the execution history. The entire execution history is transferred from the Cadence service to with every that the logic needs to process. A large execution history can thus adversely impact the performance of your . Therefore, be mindful of the amount of data you transfer via invocation parameters or return values. Otherwise, no additional limitations exist on implementations.\n\n\n# Overview\n\nThe following example demonstrates a simple that accepts a string parameter, appends a word to it, and then returns a result.\n\npackage simple\n\nimport (\n "context"\n\n "go.uber.org/cadence/activity"\n "go.uber.org/zap"\n)\n\nfunc init() {\n activity.Register(SimpleActivity)\n}\n\n// SimpleActivity is a sample Cadence activity function that takes one parameter and\n// returns a string containing the parameter value.\nfunc SimpleActivity(ctx context.Context, value string) (string, error) {\n activity.GetLogger(ctx).Info("SimpleActivity called.", zap.String("Value", value))\n return "Processed: " + value, nil\n}\n\n\nLet\'s take a look at each component of this activity.\n\n\n# Declaration\n\nIn the Cadence programing model, an is implemented with a function. The function declaration specifies the parameters the accepts as well as any values it might return. An function can take zero or many specific parameters and can return one or two values. It must always at least return an error value. 
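To make these rules concrete, here are a few signature shapes that satisfy them (a sketch using hypothetical names and types, not an exhaustive list):

// Context plus one argument, returning a result and an error.
func FormatName(ctx context.Context, name string) (string, error)

// Context only, returning just an error.
func PurgeCache(ctx context.Context) error

// Context omitted entirely; serializable struct in, serializable struct out.
func Convert(input ConversionRequest) (ConversionResult, error)
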
The function can accept as parameters and return as results any serializable type.\n\nfunc SimpleActivity(ctx context.Context, value string) (string, error)\n\nThe first parameter to the function is context.Context. This is an optional parameter and can be omitted. This parameter is the standard Go context. The second string parameter is a custom specific parameter that can be used to pass data into the on start. An can have one or more such parameters. All parameters to an function must be serializable, which essentially means that params can’t be channels, functions, variadic, or unsafe pointers. The declares two return values: string and error. The string return value is used to return the result of the . The error return value is used to indicate that an error was encountered during execution.\n\n\n# Implementation\n\nYou can write implementation code in the same way that you would any other Go service code. Additionally, you can use the usual loggers and metrics controllers, and the standard Go concurrency constructs.\n\n# Heart Beating\n\nFor long-running , Cadence provides an API for the code to report both liveness and progress back to the Cadence managed service.\n\nprogress := 0\nfor hasWork {\n // Send heartbeat message to the server.\n cadence.RecordActivityHeartbeat(ctx, progress)\n // Do some work.\n ...\n progress++\n}\n\n\nWhen an times out due to a missed heartbeat, the last value of the details (progress in the above sample) is returned from the cadence.ExecuteActivity function as the details field of TimeoutError with TimeoutType_HEARTBEAT.\n\nNew auto heartbeat option in Cadence Go Client 0.17.0 release: In case you don\'t need to report progress, but still want to report liveness of your worker through heartbeating for your long running activities, there\'s a new auto-heartbeat option that you can enable when you register your activity. When this option is enabled Cadence library will do the heartbeat for you in the background.\n\n\tRegisterActivityOptions struct {\n\t\t...\n\t\t// Automatically send heartbeats for this activity at an interval that is less than the HeartbeatTimeout.\n\t\t// This option has no effect if the activity is executed with a HeartbeatTimeout of 0.\n\t\t// Default: false\n\t\tEnableAutoHeartbeat bool\n\t}\n\n\nYou can also heartbeat an from an external source:\n\n// Instantiate a Cadence service client.\ncadence.Client client = cadence.NewClient(...)\n\n// Record heartbeat.\nerr := client.RecordActivityHeartbeat(taskToken, details)\n\n\nThe parameters of the RecordActivityHeartbeat function are:\n\n * taskToken: The value of the binary TaskToken field of the ActivityInfo struct retrieved inside the .\n * details: The serializable payload containing progress information.\n\n# Cancellation\n\nWhen an is cancelled, or its has completed or failed, the context passed into its function is cancelled, which sets its channel’s closed state to Done. An can use that to perform any necessary cleanup and abort its execution. Cancellation is only delivered to that call RecordActivityHeartbeat.\n\n\n# Registration\n\nTo make the visible to the process hosting it, the must be registered via a call to activity.Register.\n\nfunc init() {\n activity.Register(SimpleActivity)\n}\n\n\nThis call creates an in-memory mapping inside the process between the fully qualified function name and the implementation. 
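If you would rather not depend on the fully qualified name, the library also allows registering under an explicit alias; a hedged sketch using RegisterWithOptions (the options shape here reflects the client versions we are aware of):

func init() {
    activity.RegisterWithOptions(
        SimpleActivity,
        activity.RegisterOptions{Name: "SimpleActivity"}, // explicit activity type name
    )
}
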
If a receives a request to start an execution for an type it does not know, it will fail that request.\n\n\n# Failing an Activity\n\nTo mark an as failed, the function must return an error via the error return value.',normalizedContent:'# activity overview\n\nan is the implementation of a particular in the business logic.\n\nare implemented as functions. data can be passed directly to an via function parameters. the parameters can be either basic types or structs, with the only requirement being that the parameters must be serializable. though it is not required, we recommend that the first parameter of an function is of type context.context, in order to allow the to interact with other framework methods. the function must return an error value, and can optionally return a result value. the result value can be either a basic type or a struct with the only requirement being that it is serializable.\n\nthe values passed to through invocation parameters or returned through the result value are recorded in the execution history. the entire execution history is transferred from the cadence service to with every that the logic needs to process. a large execution history can thus adversely impact the performance of your . therefore, be mindful of the amount of data you transfer via invocation parameters or return values. otherwise, no additional limitations exist on implementations.\n\n\n# overview\n\nthe following example demonstrates a simple that accepts a string parameter, appends a word to it, and then returns a result.\n\npackage simple\n\nimport (\n "context"\n\n "go.uber.org/cadence/activity"\n "go.uber.org/zap"\n)\n\nfunc init() {\n activity.register(simpleactivity)\n}\n\n// simpleactivity is a sample cadence activity function that takes one parameter and\n// returns a string containing the parameter value.\nfunc simpleactivity(ctx context.context, value string) (string, error) {\n activity.getlogger(ctx).info("simpleactivity called.", zap.string("value", value))\n return "processed: " + value, nil\n}\n\n\nlet\'s take a look at each component of this activity.\n\n\n# declaration\n\nin the cadence programing model, an is implemented with a function. the function declaration specifies the parameters the accepts as well as any values it might return. an function can take zero or many specific parameters and can return one or two values. it must always at least return an error value. the function can accept as parameters and return as results any serializable type.\n\nfunc simpleactivity(ctx context.context, value string) (string, error)\n\nthe first parameter to the function is context.context. this is an optional parameter and can be omitted. this parameter is the standard go context. the second string parameter is a custom specific parameter that can be used to pass data into the on start. an can have one or more such parameters. all parameters to an function must be serializable, which essentially means that params can’t be channels, functions, variadic, or unsafe pointers. the declares two return values: string and error. the string return value is used to return the result of the . the error return value is used to indicate that an error was encountered during execution.\n\n\n# implementation\n\nyou can write implementation code in the same way that you would any other go service code. 
additionally, you can use the usual loggers and metrics controllers, and the standard go concurrency constructs.\n\n# heart beating\n\nfor long-running , cadence provides an api for the code to report both liveness and progress back to the cadence managed service.\n\nprogress := 0\nfor haswork {\n // send heartbeat message to the server.\n cadence.recordactivityheartbeat(ctx, progress)\n // do some work.\n ...\n progress++\n}\n\n\nwhen an times out due to a missed heartbeat, the last value of the details (progress in the above sample) is returned from the cadence.executeactivity function as the details field of timeouterror with timeouttype_heartbeat.\n\nnew auto heartbeat option in cadence go client 0.17.0 release: in case you don\'t need to report progress, but still want to report liveness of your worker through heartbeating for your long running activities, there\'s a new auto-heartbeat option that you can enable when you register your activity. when this option is enabled cadence library will do the heartbeat for you in the background.\n\n\tregisteractivityoptions struct {\n\t\t...\n\t\t// automatically send heartbeats for this activity at an interval that is less than the heartbeattimeout.\n\t\t// this option has no effect if the activity is executed with a heartbeattimeout of 0.\n\t\t// default: false\n\t\tenableautoheartbeat bool\n\t}\n\n\nyou can also heartbeat an from an external source:\n\n// instantiate a cadence service client.\ncadence.client client = cadence.newclient(...)\n\n// record heartbeat.\nerr := client.recordactivityheartbeat(tasktoken, details)\n\n\nthe parameters of the recordactivityheartbeat function are:\n\n * tasktoken: the value of the binary tasktoken field of the activityinfo struct retrieved inside the .\n * details: the serializable payload containing progress information.\n\n# cancellation\n\nwhen an is cancelled, or its has completed or failed, the context passed into its function is cancelled, which sets its channel’s closed state to done. an can use that to perform any necessary cleanup and abort its execution. cancellation is only delivered to that call recordactivityheartbeat.\n\n\n# registration\n\nto make the visible to the process hosting it, the must be registered via a call to activity.register.\n\nfunc init() {\n activity.register(simpleactivity)\n}\n\n\nthis call creates an in-memory mapping inside the process between the fully qualified function name and the implementation. if a receives a request to start an execution for an type it does not know, it will fail that request.\n\n\n# failing an activity\n\nto mark an as failed, the function must return an error via the error return value.',charsets:{}},{title:"Child workflows",frontmatter:{layout:"default",title:"Child workflows",permalink:"/docs/go-client/child-workflows",readingShow:"top"},regularPath:"/docs/05-go-client/05-child-workflows.html",relativePath:"docs/05-go-client/05-child-workflows.md",key:"v-0327ca12",path:"/docs/go-client/child-workflows/",codeSwitcherOptions:{},headersStr:null,content:'# Child workflows\n\nworkflow.ExecuteChildWorkflow enables the scheduling of other from within a \'s implementation. 
The parent has the ability to monitor and impact the lifecycle of the child , similar to the way it does for an that it invoked.\n\ncwo := workflow.ChildWorkflowOptions{\n // Do not specify WorkflowID if you want Cadence to generate a unique ID for the child execution.\n WorkflowID: "BID-SIMPLE-CHILD-WORKFLOW",\n ExecutionStartToCloseTimeout: time.Minute * 30,\n}\nctx = workflow.WithChildWorkflowOptions(ctx, cwo)\n\nvar result string\nfuture := workflow.ExecuteChildWorkflow(ctx, SimpleChildWorkflow, value)\nif err := future.Get(ctx, &result); err != nil {\n workflow.GetLogger(ctx).Error("SimpleChildWorkflow failed.", zap.Error(err))\n return err\n}\n\n\nLet\'s take a look at each component of this call.\n\nBefore calling workflow.ExecuteChildworkflow(), you must configure ChildWorkflowOptions for the invocation. These options customize various execution timeouts, and are passed in by creating a child context from the initial context and overwriting the desired values. The child context is then passed into the workflow.ExecuteChildWorkflow() call. If multiple child are sharing the same option values, then the same context instance can be used when calling workflow.ExecuteChildworkflow().\n\nThe first parameter in the call is the required cadence.Context object. This type is a copy of context.Context with the Done() method returning cadence.Channel instead of the native Go chan.\n\nThe second parameter is the function that we registered as a function. This parameter can also be a string representing the fully qualified name of the function. The benefit of this is that when you pass in the actual function object, the framework can validate parameters.\n\nThe remaining parameters are passed to the as part of the call. In our example, we have a single parameter: value. This list of parameters must match the list of parameters declared by the function.\n\nThe method call returns immediately and returns a cadence.Future. This allows you to execute more code without having to wait for the scheduled to complete.\n\nWhen you are ready to process the results of the , call the Get() method on the returned future object. The parameters to this method is the ctx object we passed to the workflow.ExecuteChildWorkflow() call and an output parameter that will receive the output of the . The type of the output parameter must match the type of the return value declared by the function. The Get() method will block until the completes and results are available.\n\nThe workflow.ExecuteChildWorkflow() function is similar to workflow.ExecuteActivity(). All of the patterns described for using workflow.ExecuteActivity() apply to the workflow.ExecuteChildWorkflow() function as well.\n\nWhen a parent is cancelled by the user, the child can be cancelled or abandoned based on a configurable child policy.',normalizedContent:'# child workflows\n\nworkflow.executechildworkflow enables the scheduling of other from within a \'s implementation. 
the parent has the ability to monitor and impact the lifecycle of the child , similar to the way it does for an that it invoked.\n\ncwo := workflow.childworkflowoptions{\n // do not specify workflowid if you want cadence to generate a unique id for the child execution.\n workflowid: "bid-simple-child-workflow",\n executionstarttoclosetimeout: time.minute * 30,\n}\nctx = workflow.withchildworkflowoptions(ctx, cwo)\n\nvar result string\nfuture := workflow.executechildworkflow(ctx, simplechildworkflow, value)\nif err := future.get(ctx, &result); err != nil {\n workflow.getlogger(ctx).error("simplechildworkflow failed.", zap.error(err))\n return err\n}\n\n\nlet\'s take a look at each component of this call.\n\nbefore calling workflow.executechildworkflow(), you must configure childworkflowoptions for the invocation. these options customize various execution timeouts, and are passed in by creating a child context from the initial context and overwriting the desired values. the child context is then passed into the workflow.executechildworkflow() call. if multiple child are sharing the same option values, then the same context instance can be used when calling workflow.executechildworkflow().\n\nthe first parameter in the call is the required cadence.context object. this type is a copy of context.context with the done() method returning cadence.channel instead of the native go chan.\n\nthe second parameter is the function that we registered as a function. this parameter can also be a string representing the fully qualified name of the function. the benefit of this is that when you pass in the actual function object, the framework can validate parameters.\n\nthe remaining parameters are passed to the as part of the call. in our example, we have a single parameter: value. this list of parameters must match the list of parameters declared by the function.\n\nthe method call returns immediately and returns a cadence.future. this allows you to execute more code without having to wait for the scheduled to complete.\n\nwhen you are ready to process the results of the , call the get() method on the returned future object. the parameters to this method is the ctx object we passed to the workflow.executechildworkflow() call and an output parameter that will receive the output of the . the type of the output parameter must match the type of the return value declared by the function. the get() method will block until the completes and results are available.\n\nthe workflow.executechildworkflow() function is similar to workflow.executeactivity(). all of the patterns described for using workflow.executeactivity() apply to the workflow.executechildworkflow() function as well.\n\nwhen a parent is cancelled by the user, the child can be cancelled or abandoned based on a configurable child policy.',charsets:{}},{title:"Activity and workflow retries",frontmatter:{layout:"default",title:"Activity and workflow retries",permalink:"/docs/go-client/retries",readingShow:"top"},regularPath:"/docs/05-go-client/06-retries.html",relativePath:"docs/05-go-client/06-retries.md",key:"v-5fac5e6c",path:"/docs/go-client/retries/",codeSwitcherOptions:{},headersStr:null,content:"# Activity and workflow retries\n\nand can fail due to various intermediate conditions. In those cases, we want to retry the failed or child or even the parent . This can be achieved by supplying an optional retry policy. 
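The same mechanism also applies to the workflow itself: when starting it, a policy can be supplied through the RetryPolicy field of StartWorkflowOptions. A hedged sketch (the task list name and WorkflowFunc are hypothetical, and a cadenceClient is assumed to exist as in Starting workflows):

import (
    "go.uber.org/cadence"
    "go.uber.org/cadence/client"
)

retryPolicy := &cadence.RetryPolicy{
    InitialInterval:    time.Second,
    BackoffCoefficient: 2,
    MaximumInterval:    time.Minute,
    ExpirationInterval: time.Hour,
}

cadenceClient.StartWorkflow(
    ctx,
    client.StartWorkflowOptions{
        TaskList:                     "sample-task-list", // hypothetical
        ExecutionStartToCloseTimeout: time.Hour,
        RetryPolicy:                  retryPolicy, // retries the whole workflow on failure
    },
    WorkflowFunc, // hypothetical
)
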
A retry policy looks like the following:\n\n// RetryPolicy defines the retry policy.\nRetryPolicy struct {\n // Backoff interval for the first retry. If coefficient is 1.0 then it is used for all retries.\n // Required, no default value.\n InitialInterval time.Duration\n\n // Coefficient used to calculate the next retry backoff interval.\n // The next retry interval is previous interval multiplied by this coefficient.\n // Must be 1 or larger. Default is 2.0.\n BackoffCoefficient float64\n\n // Maximum backoff interval between retries. Exponential backoff leads to interval increase.\n // This value is the cap of the interval. Default is 100x of initial interval.\n MaximumInterval time.Duration\n\n // Maximum time to retry. Either ExpirationInterval or MaximumAttempts is required.\n // When exceeded the retries stop even if maximum retries is not reached yet.\n // First (non-retry) attempt is unaffected by this field and is guaranteed to run \n // for the entirety of the workflow timeout duration (ExecutionStartToCloseTimeoutSeconds).\n ExpirationInterval time.Duration\n\n // Maximum number of attempts. When exceeded the retries stop even if not expired yet.\n // If not set or set to 0, it means unlimited, and relies on ExpirationInterval to stop.\n // Either MaximumAttempts or ExpirationInterval is required.\n MaximumAttempts int32\n\n // Non-Retriable errors. This is optional. Cadence server will stop retry if error reason matches this list.\n // Error reason for custom error is specified when your activity/workflow returns cadence.NewCustomError(reason).\n // Error reason for panic error is \"cadenceInternal:Panic\".\n // Error reason for any other error is \"cadenceInternal:Generic\".\n // Error reason for timeouts is: \"cadenceInternal:Timeout TIMEOUT_TYPE\". TIMEOUT_TYPE could be START_TO_CLOSE or HEARTBEAT.\n // Note that cancellation is not a failure, so it won't be retried.\n NonRetriableErrorReasons []string\n}\n\n\nTo enable retry, supply a custom retry policy to ActivityOptions or ChildWorkflowOptions when you execute them.\n\nexpiration := time.Minute * 10\nretryPolicy := &cadence.RetryPolicy{\n InitialInterval: time.Second,\n BackoffCoefficient: 2,\n MaximumInterval: expiration,\n ExpirationInterval: time.Minute * 10,\n MaximumAttempts: 5,\n}\nao := workflow.ActivityOptions{\n ScheduleToStartTimeout: expiration,\n StartToCloseTimeout: expiration,\n HeartbeatTimeout: time.Second * 30,\n RetryPolicy: retryPolicy, // Enable retry.\n}\nctx = workflow.WithActivityOptions(ctx, ao)\nactivityFuture := workflow.ExecuteActivity(ctx, SampleActivity, params)\n\n\nIf heartbeat its progress before it failed, the retry attempt will contain the progress so implementation could resume from failed progress like:\n\nfunc SampleActivity(ctx context.Context, inputArg InputParams) error {\n startIdx := inputArg.StartIndex\n if activity.HasHeartbeatDetails(ctx) {\n // Recover from finished progress.\n var finishedIndex int\n if err := activity.GetHeartbeatDetails(ctx, &finishedIndex); err == nil {\n startIdx = finishedIndex + 1 // Start from next one.\n }\n }\n\n // Normal activity logic...\n for i:=startIdx; i 0 && signalVal != "SOME_VALUE" {\n return errors.New("signalVal")\n}\n\n\nIn the example above, the code uses workflow.GetSignalChannel to open a workflow.Channel for the named . We then use a workflow.Selector to wait on this channel and process the payload received with the .\n\n\n# SignalWithStart\n\nYou may not know if a is running and can accept a . 
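A hedged sketch of handling exactly this case (the workflow ID, signal name, task list, and MyWorkflow are hypothetical; a cadenceClient is assumed as in Starting workflows, and the exact signature is the one shipped with your client version):

// Deliver the signal, first starting a new workflow run if none is currently running.
execution, err := cadenceClient.SignalWithStartWorkflow(
    ctx,
    "my-workflow-id", // hypothetical workflow ID
    "signal-name",    // hypothetical signal name
    signalArg,        // payload delivered to the signal channel
    client.StartWorkflowOptions{
        TaskList:                     "sample-task-list",
        ExecutionStartToCloseTimeout: time.Hour,
    },
    MyWorkflow,  // hypothetical workflow function, used only if a new run must be started
    workflowArg, // start argument for that new run
)
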
The client.SignalWithStartWorkflow API allows you to send a to the current instance if one exists or to create a new run and then send the . SignalWithStartWorkflow therefore doesn\'t take a as a parameter.',normalizedContent:'# signals\n\nprovide a mechanism to send data directly to a running . previously, you had two options for passing data to the implementation:\n\n * via start parameters\n * as return values from\n\nwith start parameters, we could only pass in values before began.\n\nreturn values from allowed us to pass information to a running , but this approach comes with its own complications. one major drawback is reliance on polling. this means that the data needs to be stored in a third-party location until it\'s ready to be picked up by the . further, the lifecycle of this requires management, and the requires manual restart if it fails before acquiring the data.\n\n, on the other hand, provide a fully asynchronous and durable mechanism for providing data to a running . when a is received for a running , cadence persists the and the payload in the history. the can then process the at any time afterwards without the risk of losing the information. the also has the option to stop execution by blocking on a channel.\n\nvar signalval string\nsignalchan := workflow.getsignalchannel(ctx, signalname)\n\ns := workflow.newselector(ctx)\ns.addreceive(signalchan, func(c workflow.channel, more bool) {\n c.receive(ctx, &signalval)\n workflow.getlogger(ctx).info("received signal!", zap.string("signal", signalname), zap.string("value", signalval))\n})\ns.select(ctx)\n\nif len(signalval) > 0 && signalval != "some_value" {\n return errors.new("signalval")\n}\n\n\nin the example above, the code uses workflow.getsignalchannel to open a workflow.channel for the named . we then use a workflow.selector to wait on this channel and process the payload received with the .\n\n\n# signalwithstart\n\nyou may not know if a is running and can accept a . the client.signalwithstartworkflow api allows you to send a to the current instance if one exists or to create a new run and then send the . signalwithstartworkflow therefore doesn\'t take a as a parameter.',charsets:{}},{title:"Side effect",frontmatter:{layout:"default",title:"Side effect",permalink:"/docs/go-client/side-effect",readingShow:"top"},regularPath:"/docs/05-go-client/10-side-effect.html",relativePath:"docs/05-go-client/10-side-effect.md",key:"v-d0383dd4",path:"/docs/go-client/side-effect/",codeSwitcherOptions:{},headersStr:null,content:'# Side effect\n\nworkflow.SideEffect is useful for short, nondeterministic code snippets, such as getting a random value or generating a UUID. It executes the provided function once and records its result into the history. workflow.SideEffect does not re-execute upon replay, but instead returns the recorded result. It can be seen as an "inline" . Something to note about workflow.SideEffect is that, unlike the Cadence guarantee of at-most-once execution for , there is no such guarantee with workflow.SideEffect. Under certain failure conditions, workflow.SideEffect can end up executing a function more than once.\n\nThe only way to fail SideEffect is to panic, which causes a failure. After the timeout, Cadence reschedules and then re-executes the , giving SideEffect another chance to succeed. 
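One replay pitfall is worth a short sketch (hedged, written against the workflow package form of the API):

// BAD: relying on the closure assignment; on replay the function is not
// re-executed, so random would keep its zero value.
var random int
workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
    random = rand.Intn(100)
    return random
})

// GOOD: always decode the recorded return value.
encodedRandom := workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
    return rand.Intn(100)
})
encodedRandom.Get(&random)
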
Do not return any data from SideEffect other than through its recorded return value.\n\nThe following sample demonstrates how to use SideEffect:\n\nencodedRandom := SideEffect(func(ctx cadence.Context) interface{} {\n return rand.Intn(100)\n})\n\nvar random int\nencodedRandom.Get(&random)\nif random < 50 {\n ...\n} else {\n ...\n}\n',normalizedContent:'# side effect\n\nworkflow.sideeffect is useful for short, nondeterministic code snippets, such as getting a random value or generating a uuid. it executes the provided function once and records its result into the history. workflow.sideeffect does not re-execute upon replay, but instead returns the recorded result. it can be seen as an "inline" . something to note about workflow.sideeffect is that, unlike the cadence guarantee of at-most-once execution for , there is no such guarantee with workflow.sideeffect. under certain failure conditions, workflow.sideeffect can end up executing a function more than once.\n\nthe only way to fail sideeffect is to panic, which causes a failure. after the timeout, cadence reschedules and then re-executes the , giving sideeffect another chance to succeed. do not return any data from sideeffect other than through its recorded return value.\n\nthe following sample demonstrates how to use sideeffect:\n\nencodedrandom := sideeffect(func(ctx cadence.context) interface{} {\n return rand.intn(100)\n})\n\nvar random int\nencodedrandom.get(&random)\nif random < 50 {\n ...\n} else {\n ...\n}\n',charsets:{}},{title:"Queries",frontmatter:{layout:"default",title:"Queries",permalink:"/docs/go-client/queries",readingShow:"top"},regularPath:"/docs/05-go-client/11-queries.html",relativePath:"docs/05-go-client/11-queries.md",key:"v-a1460e54",path:"/docs/go-client/queries/",headers:[{level:2,title:"Consistent Query",slug:"consistent-query",normalizedTitle:"consistent query",charIndex:2009}],codeSwitcherOptions:{},headersStr:"Consistent Query",content:'# Queries\n\nIf a has been stuck at a state for longer than an expected period of time, you might want to the current call stack. You can use the Cadence to perform this . For example:\n\ncadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace\n\nThis command uses __stack_trace, which is a built-in type supported by the Cadence client library. You can add custom types to handle such as the current state of a , or how many the has completed. To do this, you need to set up a handler using workflow.SetQueryHandler.\n\nThe handler must be a function that returns two values:\n\n 1. A serializable result\n 2. An error\n\nThe handler function can receive any number of input parameters, but all input parameters must be serializable. 
# Queries

If a workflow execution has been stuck at a state for longer than an expected period of time, you might want to query the current call stack. You can use the Cadence CLI to perform this query. For example:

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace

This command uses __stack_trace, which is a built-in query type supported by the Cadence client library. You can add custom query types to handle queries such as the current state of a workflow, or how many activities the workflow has completed. To do this, you need to set up a query handler using workflow.SetQueryHandler.

The handler must be a function that returns two values:

 1. A serializable query result
 2. An error

The handler function can receive any number of input parameters, but all input parameters must be serializable. The following sample code sets up a query handler that handles the query type of current_state:

func MyWorkflow(ctx workflow.Context, input string) error {
    currentState := "started" // This could be any serializable struct.
    err := workflow.SetQueryHandler(ctx, "current_state", func() (string, error) {
        return currentState, nil
    })
    if err != nil {
        currentState = "failed to register query handler"
        return err
    }
    // Your normal workflow code begins here, and you update the currentState as the code makes progress.
    currentState = "waiting timer"
    err = workflow.NewTimer(ctx, time.Hour).Get(ctx, nil)
    if err != nil {
        currentState = "timer failed"
        return err
    }

    currentState = "waiting activity"
    ctx = workflow.WithActivityOptions(ctx, myActivityOptions)
    err = workflow.ExecuteActivity(ctx, MyActivity, "my_input").Get(ctx, nil)
    if err != nil {
        currentState = "activity failed"
        return err
    }
    currentState = "done"
    return nil
}

You can now query current_state by using the CLI:

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state

You can also issue a query from code using the QueryWorkflow() API on a Cadence client object (a sketch appears at the end of this page).

# Consistent Query

Query has two consistency levels, eventual and strong. Consider the case where you signal a workflow and then immediately query the workflow:

cadence-cli --domain samples-domain workflow signal -w my_workflow_id -r my_run_id -n signal_name -if ./input.json

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state

In this example, if the signal were to change the workflow state, the query may or may not see that state update reflected in the query result. This is what it means for a query to be eventually consistent.

Query has another consistency level called strong consistency. A strongly consistent query is guaranteed to be based on workflow state which includes all events that came before the query was issued. An event is considered to have come before a query if the call creating the external event returned success before the query was issued. External events which are created while the query is outstanding may or may not be reflected in the workflow state the query result is based on.

In order to run a consistent query through the CLI, do the following:

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state --qcl strong

In order to run a strongly consistent query using the Go client, do the following:

resp, err := cadenceClient.QueryWorkflowWithOptions(ctx, &client.QueryWorkflowWithOptionsRequest{
    WorkflowID:            workflowID,
    RunID:                 runID,
    QueryType:             queryType,
    QueryConsistencyLevel: shared.QueryConsistencyLevelStrong.Ptr(),
})

When using strongly consistent queries, you should expect higher latency than with eventually consistent queries.
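Returning to the QueryWorkflow() API mentioned above, here is a minimal sketch of the eventually consistent form of the same query issued from Go code; the cadenceClient variable is assumed to be an existing client.Client:

resp, err := cadenceClient.QueryWorkflow(ctx, workflowID, runID, "current_state")
if err != nil {
    // handle the error
}
var state string
if err := resp.Get(&state); err != nil {
    // handle the decode error
}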
# Continue as new

Workflows that need to rerun periodically could naively be implemented as a big for loop with a sleep, where the entire logic of the workflow is inside the body of the for loop. The problem with this approach is that the history for that workflow will keep growing to a point where it reaches the maximum size enforced by the service.

ContinueAsNew is the low-level construct that enables implementing such workflows without the risk of failures down the road.
The operation atomically completes the current execution and starts a new execution of the workflow with the same workflow ID. The new execution will not carry over any history from the old execution. To trigger this behavior, the workflow function should terminate by returning the special ContinueAsNewError error:

func SimpleWorkflow(ctx workflow.Context, value string) error {
    ...
    return workflow.NewContinueAsNewError(ctx, SimpleWorkflow, value)
}
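Putting the pieces together, here is a hedged sketch of the periodic pattern described above; the activity name and the iteration cap are illustrative:

func PeriodicWorkflow(ctx workflow.Context) error {
    ao := workflow.ActivityOptions{
        ScheduleToStartTimeout: time.Minute,
        StartToCloseTimeout:    time.Minute,
    }
    ctx = workflow.WithActivityOptions(ctx, ao)

    // Run a bounded number of iterations per execution to keep the history small.
    for i := 0; i < 10; i++ {
        if err := workflow.ExecuteActivity(ctx, DoWorkActivity).Get(ctx, nil); err != nil {
            return err
        }
        if err := workflow.Sleep(ctx, time.Hour); err != nil {
            return err
        }
    }
    // Complete this run and start a fresh one with an empty history.
    return workflow.NewContinueAsNewError(ctx, PeriodicWorkflow)
}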
# Asynchronous activity completion

There are certain scenarios when completing an activity upon completion of its function is not possible or desirable. For example, you might have an application that requires user input in order to complete the activity. You could implement the activity with a polling mechanism, but a simpler and less resource-intensive implementation is to asynchronously complete a Cadence activity.

There are two parts to implementing an asynchronously completed activity:

 1. The activity provides the information necessary for completion from an external system and notifies the Cadence service that it is waiting for that outside callback.
 2. The external service calls the Cadence service to complete the activity.

The following example demonstrates the first part:

// Retrieve the activity information needed to asynchronously complete the activity.
activityInfo := activity.GetInfo(ctx)
taskToken := activityInfo.TaskToken

// Send the taskToken to the external service that will complete the activity.
...

// Return from the activity the special error indicating that Cadence should wait
// for an async completion message.
return "", activity.ErrResultPending

The following code demonstrates how to complete the activity successfully:

// Instantiate a Cadence client.
// The same client can be used to complete or fail any number of activities.
cadenceClient := client.NewClient(...)

// Complete the activity.
cadenceClient.CompleteActivity(ctx, taskToken, result, nil)

To fail the activity, you would do the following:

// Fail the activity.
cadenceClient.CompleteActivity(ctx, taskToken, nil, err)

Besides the context, these are the parameters of the CompleteActivity function:

 * taskToken: The value of the binary TaskToken field of the ActivityInfo struct retrieved inside the activity.
 * result: The return value to record for the activity. The type of this value must match the type of the return value declared by the activity function.
 * err: The error to return if the activity terminates with an error.

If err is not nil, the value of the result field is ignored.
# Testing

The Cadence Go client library provides a test framework to facilitate testing workflow implementations. The framework is suited for implementing unit tests as well as functional tests of the workflow logic.

The following code implements unit tests for the SimpleWorkflow sample:

package sample

import (
    "context"
    "errors"
    "testing"

    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/suite"

    "go.uber.org/cadence"
    "go.uber.org/cadence/testsuite"
)

type UnitTestSuite struct {
    suite.Suite
    testsuite.WorkflowTestSuite

    env *testsuite.TestWorkflowEnvironment
}

func (s *UnitTestSuite) SetupTest() {
    s.env = s.NewTestWorkflowEnvironment()
}

func (s *UnitTestSuite) AfterTest(suiteName, testName string) {
    s.env.AssertExpectations(s.T())
}

func (s *UnitTestSuite) Test_SimpleWorkflow_Success() {
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityParamCorrect() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        func(ctx context.Context, value string) (string, error) {
            s.Equal("test_success", value)
            return value, nil
        },
    )
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityFails() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        "", errors.New("SimpleActivityFailure"))
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_failure")

    s.True(s.env.IsWorkflowCompleted())

    s.NotNil(s.env.GetWorkflowError())
    s.True(cadence.IsGenericError(s.env.GetWorkflowError()))
    s.Equal("SimpleActivityFailure", s.env.GetWorkflowError().Error())
}

func TestUnitTestSuite(t *testing.T) {
    suite.Run(t, new(UnitTestSuite))
}

# Setup

To run unit tests, we first define a "test suite" struct that absorbs both the basic suite functionality from testify via suite.Suite and the suite functionality from the Cadence test framework via testsuite.WorkflowTestSuite. Because every test in this test suite will test our workflow, we add a property to our struct to hold an instance of the test environment. This allows us to initialize the test environment in a setup method.
For testing workflows, we use a testsuite.TestWorkflowEnvironment.

Next, we implement a SetupTest method to set up a new test environment before each test. Doing so ensures that each test runs in its own isolated sandbox. We also implement an AfterTest function where we assert that all mocks we set up were indeed called, by invoking s.env.AssertExpectations(s.T()).

Finally, we create a regular test function recognized by "go test" and pass the struct to suite.Run.

# A Simple Test

The simplest test case we can write is to have the test environment execute the workflow and then evaluate the results.

func (s *UnitTestSuite) Test_SimpleWorkflow_Success() {
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

Calling s.env.ExecuteWorkflow(...) executes the workflow logic and any activities invoked inside the test process. The first parameter of s.env.ExecuteWorkflow(...) contains the workflow function, and any subsequent parameters contain values for custom input parameters declared by the workflow function.

> Note that unless the activity invocations are mocked or the activity implementation replaced (see Activity mocking and overriding), the test environment will execute the actual activity code, including any calls to outside services.

After executing the workflow in the above example, we assert that the workflow ran through completion via the call to s.env.IsWorkflowCompleted(). We also assert that no errors were returned by asserting on the return value of s.env.GetWorkflowError(). If our workflow returned a value, we could have retrieved that value via a call to s.env.GetWorkflowResult(&value) and had additional asserts on that value.

# Activity mocking and overriding

When running unit tests on workflows, we want to test the workflow logic in isolation. Additionally, we want to inject activity errors during our test runs. The test framework provides two mechanisms that support these scenarios: activity mocking and activity overriding. Both of these mechanisms allow you to change the behavior of activities invoked by your workflow without the need to modify the actual activity code.

Let's take a look at a test that simulates an activity failing via the "activity mocking" mechanism.

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityFails() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        "", errors.New("SimpleActivityFailure"))
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_failure")

    s.True(s.env.IsWorkflowCompleted())

    s.NotNil(s.env.GetWorkflowError())
    _, ok := s.env.GetWorkflowError().(*cadence.GenericError)
    s.True(ok)
    s.Equal("SimpleActivityFailure", s.env.GetWorkflowError().Error())
}

This test simulates the SimpleActivity that is invoked by our SimpleWorkflow returning an error. We accomplish this by setting up a mock on the test environment for the SimpleActivity that returns an error.

s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
    "", errors.New("SimpleActivityFailure"))

With the mock set up, we can now execute the workflow via the s.env.ExecuteWorkflow(...) method and assert that the workflow ran to completion and returned the expected error.

Simply mocking the execution to return a desired value or error is a pretty powerful mechanism to isolate workflow logic. However, sometimes we want to replace the activity with an alternate implementation to support a more complex test scenario.
Let's assume we want to validate that the activity gets called with the correct parameters.

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityParamCorrect() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        func(ctx context.Context, value string) (string, error) {
            s.Equal("test_success", value)
            return value, nil
        },
    )
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

In this example, we provide a function implementation as the parameter to Return. This allows us to provide an alternate implementation for the SimpleActivity. The framework will execute this function whenever the activity is invoked and pass on the return value from the function as the result of the activity invocation. Additionally, the framework will validate that the signature of the "mock" function matches the signature of the original activity function.

Since this can be an entire function, there is no limitation as to what we can do here. In this example, we assert that the "value" param has the same content as the value param we passed to the workflow.

# Testing signals

To test signals we can use the functions s.env.SignalWorkflow and s.env.SignalWorkflowByID. These functions need to be called inside s.env.RegisterDelayedCallback, as the signal should be sent while the workflow is running. It is important to register the signal before calling s.env.ExecuteWorkflow, otherwise the signal will not be sent.

If our workflow is waiting for a signal with name signalName, we can register to send this signal before the workflow is executed like this:

func (s *UnitTestSuite) Test_SimpleWorkflow_Signal() {
    // Send the signal.
    s.env.RegisterDelayedCallback(func() {
        s.env.SignalWorkflow(signalName, signalData)
    }, time.Minute*10)

    // Execute the workflow.
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

Note that the s.env.RegisterDelayedCallback function does not actually wait 10 minutes in the unit test. Instead, the Cadence test framework uses an internal clock which knows which event is next, and executes it immediately.
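The docs name s.env.SignalWorkflowByID above but only show SignalWorkflow; here is a hedged sketch of the by-ID variant, useful when the workflow under test starts child workflows. Its exact signature is assumed from the function name, and the workflow ID is illustrative:

s.env.RegisterDelayedCallback(func() {
    // Target a specific workflow (for example, a child workflow) by its ID.
    err := s.env.SignalWorkflowByID("child_workflow_id", signalName, signalData)
    s.NoError(err)
}, time.Minute)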
# Versioning

The definition code of a Cadence workflow must be deterministic because Cadence uses event sourcing to reconstruct the workflow state by replaying the saved history data on the workflow definition code. This means that any incompatible update to the workflow definition code could cause a non-deterministic issue if not handled correctly.

# workflow.GetVersion()

Consider the following workflow definition:

func MyWorkflow(ctx workflow.Context, data string) (string, error) {
    ao := workflow.ActivityOptions{
        ScheduleToStartTimeout: time.Minute,
        StartToCloseTimeout:    time.Minute,
    }
    ctx = workflow.WithActivityOptions(ctx, ao)
    var result1 string
    err := workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
    if err != nil {
        return "", err
    }
    var result2 string
    err = workflow.ExecuteActivity(ctx, ActivityB, result1).Get(ctx, &result2)
    return result2, err
}

Now let's say we have replaced ActivityA with ActivityC and deployed the updated code. If there is an existing workflow execution that was started by the original version of the code, where ActivityA had already completed and the result was recorded to history, the new version of the code will pick up that workflow execution and try to resume from there. However, the workflow will fail because the new code expects a result for ActivityC from the history data, but instead it gets the result for ActivityA.
This causes the workflow to fail on a non-deterministic error.

Thus we use workflow.GetVersion().

var err error
v := workflow.GetVersion(ctx, "Step1", workflow.DefaultVersion, 1)
if v == workflow.DefaultVersion {
    err = workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
} else {
    err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
}
if err != nil {
    return "", err
}

var result2 string
err = workflow.ExecuteActivity(ctx, ActivityB, result1).Get(ctx, &result2)
return result2, err

When workflow.GetVersion() is run for the new workflow execution, it records a marker in the workflow history so that all future calls to GetVersion for this change ID ("Step1" in the example) on this workflow execution will always return the given version number, which is 1 in the example.

If you make an additional change, such as replacing ActivityC with ActivityD, you need to add some additional code:

v := workflow.GetVersion(ctx, "Step1", workflow.DefaultVersion, 2)
if v == workflow.DefaultVersion {
    err = workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
} else if v == 1 {
    err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
} else {
    err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
}

Note that we have changed maxSupported from 1 to 2. A workflow execution that had already passed this GetVersion() call before it was introduced will return DefaultVersion. A workflow execution that was run with maxSupported set to 1 will return 1. New workflow executions will return 2.

After you are sure that all of the workflow executions prior to version 1 have completed, you can remove the code for that version. It should now look like the following:

v := workflow.GetVersion(ctx, "Step1", 1, 2)
if v == 1 {
    err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
} else {
    err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
}

You'll note that minSupported has changed from DefaultVersion to 1. If an older version of the workflow history is replayed on this code, it will fail because the minimum expected version is 1. After you are sure that all of the workflow executions for version 1 have completed, you can remove version 1 as well, so that your code looks like the following:

_ = workflow.GetVersion(ctx, "Step1", 2, 2)
err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)

Note that we have preserved the call to GetVersion(). There are two reasons to preserve this call:

 1. This ensures that if there is a workflow execution still running for an older version, it will fail here and not proceed.
 2. If you need to make additional changes for Step1, such as changing ActivityD to ActivityE, you only need to update maxVersion from 2 to 3 and branch from there.

You only need to preserve the first call to GetVersion() for each changeID. All subsequent calls to GetVersion() with the same change ID are safe to remove. If necessary, you can remove the first GetVersion() call, but you need to ensure the following:

 * All executions with an older version are completed.
 * You can no longer use Step1 for the changeID. If you need to make changes to that same part in the future, such as changing ActivityD to ActivityE, you would need to use a different changeID like Step1-fix2, and start minVersion from DefaultVersion again.
The code would look like the following:

v := workflow.GetVersion(ctx, "Step1-fix2", workflow.DefaultVersion, 1)
if v == workflow.DefaultVersion {
    err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
} else {
    err = workflow.ExecuteActivity(ctx, ActivityE, data).Get(ctx, &result1)
}

Upgrading a workflow is straightforward if you don't need to preserve your currently running workflow executions. You can simply terminate all of the currently running workflow executions and suspend new ones from being created while you deploy the new version of your workflow code, which does not use GetVersion(), and then resume workflow creation. However, that is often not the case, and you need to take care of the currently running workflow executions, so using GetVersion() to update your code is the method to use.

However, if you want your currently running workflow executions to proceed based on the current workflow logic, but you want to ensure that new workflow executions are running on new logic, you can define your workflow as a new WorkflowType and change your start path (calls to StartWorkflow()) to start the new workflow type.

# Sanity checking

The Cadence client SDK performs a sanity check to help prevent obvious incompatible changes. The sanity check verifies whether a decision made in replay matches the event recorded in history, in the same order. The decision is generated by calling any of the following methods:

 * workflow.ExecuteActivity()
 * workflow.ExecuteChildWorkflow()
 * workflow.NewTimer()
 * workflow.Sleep()
 * workflow.SideEffect()
 * workflow.RequestCancelWorkflow()
 * workflow.SignalExternalWorkflow()
 * workflow.UpsertSearchAttributes()

Adding, removing, or reordering any of the above methods triggers the sanity check and results in a non-deterministic error.

The sanity check does not perform a thorough check. For example, it does not check the activity's input arguments or the timer duration. If the check were enforced on every property, it would become too restrictive and make the workflow code harder to maintain. For example, if you move your activity code from one package to another package, that changes the ActivityType, which technically becomes a different activity. But we don't want to fail on that change, so we only check the function name part of the ActivityType.
# Distributed CRON

It is relatively straightforward to turn any Cadence workflow into a cron workflow. All you need is to supply a cron schedule when starting the workflow, using the CronSchedule parameter of StartWorkflowOptions, as shown in the sketch below.

You can also start a workflow using the Cadence CLI with an optional cron schedule via the --cron argument.
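A minimal sketch of the StartWorkflowOptions form referenced above; the client variable and workflow function are illustrative:

wo := client.StartWorkflowOptions{
    ID:                           "cron_workflow_id",
    TaskList:                     "my-tasklist",
    ExecutionStartToCloseTimeout: time.Hour,
    CronSchedule:                 "15 8 * * *", // daily at 8:15am UTC
}
we, err := cadenceClient.StartWorkflow(ctx, wo, CronWorkflow)
if err != nil {
    // handle the error
}
_ = we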
For workflows with CronSchedule:

 * The cron schedule is based on UTC time. For example, the cron schedule "15 8 * * *" will run daily at 8:15am UTC. Another example, "*/2 * * * 5-6", will schedule a workflow every two minutes on Fridays and Saturdays.
 * If a workflow failed and a RetryPolicy is supplied to the StartWorkflowOptions as well, the workflow will retry based on the RetryPolicy. While the workflow is retrying, the server will not schedule the next cron run.
 * The Cadence server only schedules the next cron run after the current run is completed. If the next schedule is due while a workflow is running (or retrying), then it will skip that schedule.
 * Cron workflows will not stop until they are terminated or cancelled.

Cadence supports the standard cron spec:

// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run
// as a cron based on the schedule. The scheduling will be based on UTC time. The schedule for the next run only happens
// after the current run is completed/failed/timed out. If a RetryPolicy is also supplied, and the workflow failed
// or timed out, the workflow will be retried based on the retry policy. While the workflow is retrying, it won't
// schedule its next run. If the next schedule is due while the workflow is running (or retrying), then it will skip that
// schedule. A cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).
// The cron spec is as follows:
// ┌───────────── minute (0 - 59)
// │ ┌───────────── hour (0 - 23)
// │ │ ┌───────────── day of the month (1 - 31)
// │ │ │ ┌───────────── month (1 - 12)
// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)
// │ │ │ │ │
// │ │ │ │ │
// * * * * *
CronSchedule string

Cadence also supports more advanced cron expressions.

The crontab guru site is useful for testing your cron expressions.

# Convert existing cron workflow

Before CronSchedule was available, the previous approach to implementing a cron workflow was to use a delay timer as the last step and then return ContinueAsNew. One problem with that implementation is that if the workflow fails or times out, the cron would stop.

To convert those workflows to make use of Cadence CronSchedule, all you need is to remove the delay timer and return without using ContinueAsNew. Then start the workflow with the desired CronSchedule.

# Retrieve last successful result

Sometimes it is useful to obtain the progress of previous successful runs. This is supported by two new APIs in the client library: HasLastCompletionResult and GetLastCompletionResult. Below is an example of how to use this in Go:

func CronWorkflow(ctx workflow.Context) (CronResult, error) {
    startTimestamp := time.Time{} // By default start from 0 time.
    if workflow.HasLastCompletionResult(ctx) {
        var progress CronResult
        if err := workflow.GetLastCompletionResult(ctx, &progress); err == nil {
            startTimestamp = progress.LastSyncTimestamp
        }
    }
    endTimestamp := workflow.Now(ctx)

    // Process work between startTimestamp (exclusive) and endTimestamp (inclusive).
    // Business logic implementation goes here.

    result := CronResult{LastSyncTimestamp: endTimestamp}
    return result, nil
}

Note that this works even if one of the cron schedule runs failed. The next schedule will still get the last successful result if the workflow ever successfully completed at least once. For example, for a daily cron workflow, if the first day's run succeeds and the second day's run fails, then the third day's run will still get the result from the first day's run using these APIs.
# Sessions

The session framework provides a straightforward interface for scheduling multiple activities on a single worker without requiring you to manually specify the task list name. It also includes features like concurrent session limitation and worker failure detection.

# Use Cases

 * File Processing: You may want to implement a workflow that can download a file, process it, and then upload the modified version. If these three steps are implemented as three different activities, all of them should be executed by the same worker.

 * Machine Learning Model Training: Training a machine learning model typically involves three stages: download the data set, optimize the model, and upload the trained parameters. Since the models may consume a large amount of resources (GPU memory, for example), the number of models processed on a host needs to be limited.

# Basic Usage

Before using the session framework to write your workflow code, you need to configure your worker to process sessions. To do that, set the EnableSessionWorker field of worker.Options to true when starting your worker (a minimal sketch follows the sample code below).

The most important APIs provided by the session framework are workflow.CreateSession() and workflow.CompleteSession().
The basic idea is that all the activities executed within a session will be processed by the same worker, and these two APIs allow you to create new sessions and close them after all activities finish executing.

Here's a more detailed description of these two APIs:

type SessionOptions struct {
    // ExecutionTimeout: required, no default.
    // Specifies the maximum amount of time the session can run.
    ExecutionTimeout time.Duration

    // CreationTimeout: required, no default.
    // Specifies how long session creation can take before returning an error.
    CreationTimeout time.Duration
}

func CreateSession(ctx Context, sessionOptions *SessionOptions) (Context, error)

CreateSession() takes in a workflow.Context and sessionOptions, and returns a new context which contains metadata information about the created session (referred to as the session context below). When it's called, it will check the task list name specified in the ActivityOptions (or in the StartWorkflowOptions if the task list name is not specified in ActivityOptions), and create the session on one of the workers which is polling that task list.

The returned session context should be used to execute all activities belonging to the session. The context will be cancelled if the worker executing this session dies or CompleteSession() is called. When using the returned session context to execute activities, a workflow.ErrSessionFailed error may be returned if the session framework detects that the worker executing this session has died. The failure of your activities won't affect the state of the session, so you still need to handle the errors returned from your activities and call CompleteSession() if necessary.

CreateSession() will return an error if the context passed in already contains an open session. If all the workers are currently busy and unable to handle new sessions, the framework will keep retrying until the CreationTimeout you specified in SessionOptions has passed before returning an error (check the Concurrent Session Limitation section for more details).

func CompleteSession(ctx Context)

CompleteSession() releases the resources reserved on the worker, so it's important to call it as soon as you no longer need the session. It will cancel the session context and therefore all of the activities using that session context. Note that it's safe to call CompleteSession() on a failed session, meaning that you can call it from a defer function after the session is successfully created.

# Sample Code

func FileProcessingWorkflow(ctx workflow.Context, fileID string) (err error) {
    ao := workflow.ActivityOptions{
        ScheduleToStartTimeout: time.Second * 5,
        StartToCloseTimeout:    time.Minute,
    }
    ctx = workflow.WithActivityOptions(ctx, ao)

    so := &workflow.SessionOptions{
        CreationTimeout:  time.Minute,
        ExecutionTimeout: time.Minute,
    }
    sessionCtx, err := workflow.CreateSession(ctx, so)
    if err != nil {
        return err
    }
    defer workflow.CompleteSession(sessionCtx)

    var fInfo *fileInfo
    err = workflow.ExecuteActivity(sessionCtx, downloadFileActivityName, fileID).Get(sessionCtx, &fInfo)
    if err != nil {
        return err
    }

    var fInfoProcessed *fileInfo
    err = workflow.ExecuteActivity(sessionCtx, processFileActivityName, *fInfo).Get(sessionCtx, &fInfoProcessed)
    if err != nil {
        return err
    }

    return workflow.ExecuteActivity(sessionCtx, uploadFileActivityName, *fInfoProcessed).Get(sessionCtx, nil)
}
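As a reminder of the worker-side setup mentioned under Basic Usage, here is a minimal sketch of enabling the session worker; the service, domain, and taskList variables are assumed to be the usual worker bootstrap values:

w := worker.New(service, domain, taskList, worker.Options{
    EnableSessionWorker: true,
    // MaxConcurrentSessionExecutionSize can also be set here; see the
    // Concurrent Session Limitation section below.
})
if err := w.Start(); err != nil {
    panic(err)
}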
\n\n# Q & A\n\n\n# Is there a complete example?\n\nYes, the file processing example in the cadence-sample repo has been updated to use the session framework.\n\n\n# What happens to my activity if the worker dies?\n\nIf your activity has already been scheduled, it will be cancelled. If not, you will get a workflow.ErrSessionFailed error when you call workflow.ExecuteActivity().\n\n\n# Is the concurrent session limitation per process or per host?\n\nIt's per process, so make sure there's only one worker process running on the host if you plan to use that feature.\n\n\n# Future Work\n\n * Support automatic session re-establishing. Right now a session is considered failed if the worker process dies. However, for some use cases, you may only care whether the host is alive or not. For these use cases, the session should be automatically re-established if the process is restarted.\n\n * Support fine-grained concurrent session limitation. The current implementation assumes that all sessions consume the same type of resource and there's only one global limitation. Our plan is to allow you to specify what type of resource your session will consume and enforce different limitations on different types of resources.",normalizedContent:"# sessions\n\nthe session framework provides a straightforward interface for scheduling multiple activities on a single worker without requiring you to manually specify the task list name. it also includes features like concurrent session limitation and worker failure detection.\n\n\n# use cases\n\n * file processing: you may want to implement a workflow that can download a file, process it, and then upload the modified version. if these three steps are implemented as three different activities, all of them should be executed by the same worker.\n\n * machine learning model training: training a machine learning model typically involves three stages: download the data set, optimize the model, and upload the trained parameters. 
since the models may consume a large amount of resources (gpu memory for example), the number of models processed on a host needs to be limited.\n\n\n# basic usage\n\nbefore using the session framework to write your code, you need to configure your to process sessions. to do that, set the enablesessionworker field of worker.options to true when starting your .\n\nthe most important apis provided by the session framework are workflow.createsession() and workflow.completesession(). the basic idea is that all the executed within a session will be processed by the same and these two apis allow you to create new sessions and close them after all finish executing.\n\nhere's a more detailed description of these two apis:\n\ntype sessionoptions struct {\n // executiontimeout: required, no default.\n // specifies the maximum amount of time the session can run.\n executiontimeout time.duration\n\n // creationtimeout: required, no default.\n // specifies how long session creation can take before returning an error.\n creationtimeout time.duration\n}\n\nfunc createsession(ctx context, sessionoptions *sessionoptions) (context, error)\n\n\ncreatesession() takes in workflow.context, sessionoptions and returns a new context which contains metadata information of the created session (referred to as the session context below). when it's called, it will check the name specified in the activityoptions (or in the startworkflowoptions if the name is not specified in activityoptions), and create the session on one of the which is polling that .\n\nthe returned session context should be used to execute all belonging to the session. the context will be cancelled if the executing this session dies or completesession() is called. when using the returned session context to execute , a workflow.errsessionfailed error may be returned if the session framework detects that the executing this session has died. the failure of your won't affect the state of the session, so you still need to handle the errors returned from your and call completesession() if necessary.\n\ncreatesession() will return an error if the context passed in already contains an open session. if all the are currently busy and unable to handle new sessions, the framework will keep retrying until the creationtimeout you specified in sessionoptions has passed before returning an error (check the concurrent session limitation section for more details).\n\nfunc completesession(ctx context)\n\n\ncompletesession() releases the resources reserved on the , so it's important to call it as soon as you no longer need the session. it will cancel the session context and therefore all the using that session context. 
note that it's safe to call completesession() on a failed session, meaning that you can call it from a defer function after the session is successfully created.\n\n\n# sample code\n\nfunc fileprocessingworkflow(ctx workflow.context, fileid string) (err error) {\n ao := workflow.activityoptions{\n scheduletostarttimeout: time.second * 5,\n starttoclosetimeout: time.minute,\n }\n ctx = workflow.withactivityoptions(ctx, ao)\n\n so := &workflow.sessionoptions{\n creationtimeout: time.minute,\n executiontimeout: time.minute,\n }\n sessionctx, err := workflow.createsession(ctx, so)\n if err != nil {\n return err\n }\n defer workflow.completesession(sessionctx)\n\n var finfo *fileinfo\n err = workflow.executeactivity(sessionctx, downloadfileactivityname, fileid).get(sessionctx, &finfo)\n if err != nil {\n return err\n }\n\n var finfoprocessed *fileinfo\n err = workflow.executeactivity(sessionctx, processfileactivityname, *finfo).get(sessionctx, &finfoprocessed)\n if err != nil {\n return err\n }\n\n return workflow.executeactivity(sessionctx, uploadfileactivityname, *finfoprocessed).get(sessionctx, nil)\n}\n\n\n\n# session metadata\n\ntype sessioninfo struct {\n // a unique id for the session\n sessionid string\n\n // the hostname of the worker that is executing the session\n hostname string\n\n // ... other unexported fields\n}\n\nfunc getsessioninfo(ctx context) *sessioninfo\n\n\nthe session context also stores some session metadata, which can be retrieved by the getsessioninfo() api. if the context passed in doesn't contain any session metadata, this api will return a nil pointer.\n\n\n# concurrent session limitation\n\nto limit the number of concurrent sessions running on a , set the maxconcurrentsessionexecutionsize field of worker.options to the desired value. by default this field is set to a very large value, so there's no need to manually set it if no limitation is needed.\n\nif a hits this limitation, it won't accept any new createsession() requests until one of the existing sessions is completed. createsession() will return an error if the session can't be created within creationtimeout.\n\n\n# recreate session\n\nfor long-running sessions, you may want to use the continueasnew feature to split the into multiple runs when all need to be executed by the same . the recreatesession() api is designed for such a use case.\n\nfunc recreatesession(ctx context, recreatetoken []byte, sessionoptions *sessionoptions) (context, error)\n\n\nits usage is the same as createsession() except that it also takes in a recreatetoken, which is needed to create a new session on the same as the previous one. you can get the token by calling the getrecreatetoken() method of the sessioninfo object.\n\ntoken := workflow.getsessioninfo(sessionctx).getrecreatetoken()\n\n\n\n# q & a\n\n\n# is there a complete example?\n\nyes, the file processing example in the cadence-sample repo has been updated to use the session framework.\n\n\n# what happens to my activity if the worker dies?\n\nif your has already been scheduled, it will be cancelled. if not, you will get a workflow.errsessionfailed error when you call workflow.executeactivity().\n\n\n# is the concurrent session limitation per process or per host?\n\nit's per process, so make sure there's only one process running on the host if you plan to use that feature.\n\n\n# future work\n\n * support automatic session re-establishing right now a session is considered failed if the process dies. however, for some use cases, you may only care whether host is alive or not. 
for these use cases, the session should be automatically re-established if the process is restarted.\n\n * support fine-grained concurrent session limitation. the current implementation assumes that all sessions consume the same type of resource and there's only one global limitation. our plan is to allow you to specify what type of resource your session will consume and enforce different limitations on different types of resources.",charsets:{}},{title:"Tracing and context propagation",frontmatter:{layout:"default",title:"Tracing and context propagation",permalink:"/docs/go-client/tracing",readingShow:"top"},regularPath:"/docs/05-go-client/17-tracing.html",relativePath:"docs/05-go-client/17-tracing.md",key:"v-9d2716dc",path:"/docs/go-client/tracing/",headers:[{level:2,title:"Tracing",slug:"tracing",normalizedTitle:"tracing",charIndex:2},{level:2,title:"Context Propagation",slug:"context-propagation",normalizedTitle:"context propagation",charIndex:651},{level:3,title:"Server-Side Headers Support",slug:"server-side-headers-support",normalizedTitle:"server-side headers support",charIndex:1158},{level:3,title:"Context Propagators",slug:"context-propagators",normalizedTitle:"context propagators",charIndex:2070},{level:2,title:"Q & A",slug:"q-a",normalizedTitle:"q & a",charIndex:null},{level:3,title:"Is there a complete example?",slug:"is-there-a-complete-example",normalizedTitle:"is there a complete example?",charIndex:3015},{level:3,title:"Can I configure multiple context propagators?",slug:"can-i-configure-multiple-context-propagators",normalizedTitle:"can i configure multiple context propagators?",charIndex:3182}],codeSwitcherOptions:{},headersStr:"Tracing Context Propagation Server-Side Headers Support Context Propagators Q & A Is there a complete example? Can I configure multiple context propagators?",content:"# Tracing and context propagation\n\n\n# Tracing\n\nThe Go client provides distributed tracing support through OpenTracing. Tracing can be configured by providing an opentracing.Tracer implementation in ClientOptions and WorkerOptions during client and worker instantiation, respectively. Tracing allows you to view the call graph of a workflow along with its activities, child workflows, etc. For more details on how to configure and leverage tracing, see the OpenTracing documentation. The OpenTracing support has been validated using Jaeger, but other implementations mentioned here should also work. Tracing support utilizes the generic context propagation support provided by the client.\n\n\n# Context Propagation\n\nWe provide a standard way to propagate custom context across a workflow. ClientOptions and WorkerOptions allow configuring a context propagator. The context propagator extracts and passes on information present in the context.Context and workflow.Context objects across the workflow. Once a context propagator is configured, you should be able to access the required values in the context objects as you would normally do in Go. For a sample, the Go client implements a tracing context propagator.\n\n\n# Server-Side Headers Support\n\nOn the server side, Cadence provides a mechanism to propagate what it calls headers across different workflow transitions.\n\nstruct Header {\n 10: optional map<string, binary> fields\n}\n\n\nThe client leverages this to pass around selected context information. HeaderReader and HeaderWriter are interfaces that allow reading and writing to the Cadence server headers. The client already provides implementations for these. HeaderWriter sets a field in the header. 
Headers is a map, so setting a value for the same key multiple times will overwrite the previous values. HeaderReader iterates through the headers map and runs the provided handler function on each key/value pair, allowing you to deal with the fields you are interested in.\n\ntype HeaderWriter interface {\n Set(string, []byte)\n}\n\ntype HeaderReader interface {\n ForEachKey(handler func(string, []byte) error) error\n}\n\n\n\n# Context Propagators\n\nContext propagators require implementing the following four methods to propagate selected context across a workflow:\n\n * Inject is meant to pick out the context keys of interest from a Go context.Context object and write them into the headers using the HeaderWriter interface\n * InjectFromWorkflow is the same as above, but operates on a workflow.Context object\n * Extract reads the headers and places the information of interest back into the context.Context object\n * ExtractToWorkflow is the same as above, but operates on a workflow.Context object\n\nThe tracing context propagator shows a sample implementation of context propagation.\n\ntype ContextPropagator interface {\n Inject(context.Context, HeaderWriter) error\n\n Extract(context.Context, HeaderReader) (context.Context, error)\n\n InjectFromWorkflow(Context, HeaderWriter) error\n\n ExtractToWorkflow(Context, HeaderReader) (Context, error)\n}\n\n
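To make the interface concrete, here is a hedged sketch of a propagator that carries a single string value; the tenantIDPropagator type and the \"tenant-id\" key are made up for illustration and are not part of the client API.\n\n// Sketch: a ContextPropagator that copies one string value between\n// contexts and Cadence headers. A plain string context key is used for\n// brevity; production code would normally use an unexported key type.\ntype tenantIDPropagator struct{}\n\nconst tenantKey = \"tenant-id\"\n\nfunc (p *tenantIDPropagator) Inject(ctx context.Context, hw workflow.HeaderWriter) error {\n if v, ok := ctx.Value(tenantKey).(string); ok {\n hw.Set(tenantKey, []byte(v))\n }\n return nil\n}\n\nfunc (p *tenantIDPropagator) InjectFromWorkflow(ctx workflow.Context, hw workflow.HeaderWriter) error {\n if v, ok := ctx.Value(tenantKey).(string); ok {\n hw.Set(tenantKey, []byte(v))\n }\n return nil\n}\n\nfunc (p *tenantIDPropagator) Extract(ctx context.Context, hr workflow.HeaderReader) (context.Context, error) {\n err := hr.ForEachKey(func(key string, value []byte) error {\n if key == tenantKey {\n ctx = context.WithValue(ctx, tenantKey, string(value))\n }\n return nil\n })\n return ctx, err\n}\n\nfunc (p *tenantIDPropagator) ExtractToWorkflow(ctx workflow.Context, hr workflow.HeaderReader) (workflow.Context, error) {\n err := hr.ForEachKey(func(key string, value []byte) error {\n if key == tenantKey {\n ctx = workflow.WithValue(ctx, tenantKey, string(value))\n }\n return nil\n })\n return ctx, err\n}\n\nA propagator like this is then passed in via ClientOptions or WorkerOptions, as described in the Context Propagation section above.\n\n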
\n\n# Q & A\n\n\n# Is there a complete example?\n\nThe context propagation sample configures a custom context propagator and shows context propagation of custom keys across a workflow and an activity.\n\n\n# Can I configure multiple context propagators?\n\nYes, we recommend that you configure multiple context propagators, with each propagator meant to propagate a particular type of context.",normalizedContent:"# tracing and context propagation\n\n\n# tracing\n\nthe go client provides distributed tracing support through opentracing. tracing can be configured by providing an opentracing.tracer implementation in clientoptions and workeroptions during client and worker instantiation, respectively. tracing allows you to view the call graph of a workflow along with its activities, child workflows, etc. for more details on how to configure and leverage tracing, see the opentracing documentation. the opentracing support has been validated using jaeger, but other implementations mentioned here should also work. tracing support utilizes the generic context propagation support provided by the client.\n\n\n# context propagation\n\nwe provide a standard way to propagate custom context across a workflow. clientoptions and workeroptions allow configuring a context propagator. the context propagator extracts and passes on information present in the context.context and workflow.context objects across the workflow. once a context propagator is configured, you should be able to access the required values in the context objects as you would normally do in go. for a sample, the go client implements a tracing context propagator.\n\n\n# server-side headers support\n\non the server side, cadence provides a mechanism to propagate what it calls headers across different workflow transitions.\n\nstruct header {\n 10: optional map<string, binary> fields\n}\n\n\nthe client leverages this to pass around selected context information. headerreader and headerwriter are interfaces that allow reading and writing to the cadence server headers. the client already provides implementations for these. headerwriter sets a field in the header. headers is a map, so setting a value for the same key multiple times will overwrite the previous values. headerreader iterates through the headers map and runs the provided handler function on each key/value pair, allowing you to deal with the fields you are interested in.\n\ntype headerwriter interface {\n set(string, []byte)\n}\n\ntype headerreader interface {\n foreachkey(handler func(string, []byte) error) error\n}\n\n\n\n# context propagators\n\ncontext propagators require implementing the following four methods to propagate selected context across a workflow:\n\n * inject is meant to pick out the context keys of interest from a go context.context object and write them into the headers using the headerwriter interface\n * injectfromworkflow is the same as above, but operates on a workflow.context object\n * extract reads the headers and places the information of interest back into the context.context object\n * extracttoworkflow is the same as above, but operates on a workflow.context object\n\nthe tracing context propagator shows a sample implementation of context propagation.\n\ntype contextpropagator interface {\n inject(context.context, headerwriter) error\n\n extract(context.context, headerreader) (context.context, error)\n\n injectfromworkflow(context, headerwriter) error\n\n extracttoworkflow(context, headerreader) (context, error)\n}\n\n\n\n# q & a\n\n\n# is there a complete example?\n\nthe context propagation sample configures a custom context propagator and shows context propagation of custom keys across a workflow and an activity.\n\n\n# can i configure multiple context propagators?\n\nyes, we recommend that you configure multiple context propagators, with each propagator meant to propagate a particular type of context.",charsets:{}},{title:"Workflow Replay and Shadowing",frontmatter:{layout:"default",title:"Workflow Replay and Shadowing",permalink:"/docs/go-client/workflow-replay-shadowing",readingShow:"top"},regularPath:"/docs/05-go-client/18-workflow-replay-shadowing.html",relativePath:"docs/05-go-client/18-workflow-replay-shadowing.md",key:"v-d043b980",path:"/docs/go-client/workflow-replay-shadowing/",headers:[{level:2,title:"Workflow Replayer",slug:"workflow-replayer",normalizedTitle:"workflow replayer",charIndex:469},{level:3,title:"Write a Replay Test",slug:"write-a-replay-test",normalizedTitle:"write a replay test",charIndex:824},{level:3,title:"Sample Replay Test",slug:"sample-replay-test",normalizedTitle:"sample replay test",charIndex:3778},{level:2,title:"Workflow Shadower",slug:"workflow-shadower",normalizedTitle:"workflow shadower",charIndex:491},{level:3,title:"Shadow Options",slug:"shadow-options",normalizedTitle:"shadow options",charIndex:4923},{level:3,title:"Local Shadowing Test",slug:"local-shadowing-test",normalizedTitle:"local shadowing test",charIndex:6606},{level:3,title:"Shadowing Worker",slug:"shadowing-worker",normalizedTitle:"shadowing worker",charIndex:7673}],codeSwitcherOptions:{},headersStr:"Workflow Replayer Write a Replay Test Sample Replay Test Workflow Shadower Shadow Options Local Shadowing Test Shadowing Worker",content:"# Workflow Replay and Shadowing\n\nIn the Versioning section, we mentioned that incompatible changes to workflow definition code could cause non-deterministic issues when processing workflow tasks if versioning is not done correctly. However, it may be hard for you to tell whether a particular change is incompatible and whether versioning logic is needed. 
To help you identify incompatible changes and catch them before production traffic is impacted, we implemented Workflow Replayer and Workflow Shadower.\n\n\n# Workflow Replayer\n\nWorkflow Replayer is a testing component for replaying existing workflow histories against a workflow definition. The replaying logic is the same as the one used for processing workflow tasks, so if there are any incompatible changes in the workflow definition, the replay test will fail.\n\n\n# Write a Replay Test\n\n# Step 1: Create workflow replayer\n\nCreate a workflow Replayer by:\n\nreplayer := worker.NewWorkflowReplayer()\n\n\nor, if a custom data converter, context propagator, interceptor, etc. is used in your workflow:\n\noptions := worker.ReplayOptions{\n DataConverter: myDataConverter,\n ContextPropagators: []workflow.ContextPropagator{\n myContextPropagator,\n },\n WorkflowInterceptorChainFactories: []interceptors.WorkflowInterceptorFactory{\n myInterceptorFactory,\n },\n Tracer: myTracer,\n}\nreplayer := worker.NewWorkflowReplayerWithOptions(options)\n\n\n# Step 2: Register workflow definition\n\nNext, register your workflow definitions as you normally do. Make sure workflows are registered the same way as they were when running and generating histories; otherwise the replay will not be able to find the corresponding definition.\n\nreplayer.RegisterWorkflow(myWorkflowFunc1)\nreplayer.RegisterWorkflow(myWorkflowFunc2, workflow.RegisterOptions{\n\tName: workflowName,\n})\n\n\n# Step 3: Prepare workflow histories\n\nReplayer can read workflow history from a local JSON file or fetch it directly from the Cadence server. If you would like to use the first method, you can use the following CLI command; otherwise you can skip to the next step.\n\ncadence --do <domain> workflow show --wid <workflow_id> --rid <run_id> --of <output_file_name>\n\n\nThe dumped workflow history will be stored in the file at the path you specified in JSON format.\n\n# Step 4: Call the replay method\n\nOnce you have the workflow history or have the connection to the Cadence server for fetching history, call one of the four replay methods to start the replay test.\n\n// if workflow history has been loaded into memory\nerr := replayer.ReplayWorkflowHistory(logger, history)\n\n// if workflow history is stored in a json file\nerr = replayer.ReplayWorkflowHistoryFromJSONFile(logger, jsonFileName)\n\n// if workflow history is stored in a json file and you only want to replay part of it\n// NOTE: lastEventID can't be set arbitrarily. It must be the end of a history events batch;\n// when in doubt, set it to the eventID of a DecisionTaskStarted event.\nerr = replayer.ReplayPartialWorkflowHistoryFromJSONFile(logger, jsonFileName, lastEventID)\n\n// if you want to fetch workflow history directly from cadence server\n// please check the Worker Service page for how to create a cadence service client\nerr = replayer.ReplayWorkflowExecution(ctx, cadenceServiceClient, logger, domain, execution)\n\n\n# Step 5: Check returned error\n\nIf an error is returned from the replay method, it means there's an incompatible change in the workflow definition, and the error message will contain more information about where the non-deterministic error happens.\n\nNote: currently an error will be returned if there are fewer than 3 events in the history. 
It is because the first 3 events in the history have nothing to do with the workflow code, so Replayer can't tell whether there's an incompatible change or not.\n\n\n# Sample Replay Test\n\nThis sample is also available in our samples repo here.\n\nfunc TestReplayWorkflowHistoryFromFile(t *testing.T) {\n\treplayer := worker.NewWorkflowReplayer()\n\treplayer.RegisterWorkflow(helloWorldWorkflow)\n\terr := replayer.ReplayWorkflowHistoryFromJSONFile(zaptest.NewLogger(t), \"helloworld.json\")\n\trequire.NoError(t, err)\n}\n\n\n\n# Workflow Shadower\n\nWorkflow Replayer works well for verifying compatibility against a small number of workflow histories. If there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. Directly fetching histories from the Cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.\n\nWorkflow Shadower is built on top of Workflow Replayer to address this problem. The basic idea of shadowing is: scan workflows based on the filters you defined, fetch the history for each workflow in the scan result from the Cadence server, and run the replay test. It can be run either as a test, to serve local development purposes, or as a workflow in your worker, to continuously replay production workflows.\n\n\n# Shadow Options\n\nComplete documentation on shadow options, including default values, accepted values, etc., can be found here. The following sections are just a brief description of each option.\n\n# Scan Filters\n\n * WorkflowQuery: If you are familiar with our advanced visibility query syntax, you can specify a query directly. If specified, all other scan filters must be left empty.\n * WorkflowTypes: A list of workflow type names.\n * WorkflowStatus: A list of workflow statuses.\n * WorkflowStartTimeFilter: Min and max timestamps for workflow start time.\n * SamplingRate: Samples workflows from the scan result before executing the replay test.\n\n# Shadow Exit Condition\n\n * ExpirationInterval: Shadowing will exit when the specified interval has passed.\n * ShadowCount: Shadowing will exit after this number of workflows has been replayed. Note: a replay may be skipped due to errors like can't fetch history, history too short, etc. Skipped workflows won't be counted toward ShadowCount.\n\n# Shadow Mode\n\n * Normal: Shadowing will complete after all workflows matching WorkflowQuery (after sampling) have been replayed, or when the exit condition is met.\n * Continuous: A new round of shadowing will be started after all workflows matching WorkflowQuery have been replayed. There will be a 5 min wait period between each round, and currently this wait period is not configurable. Shadowing will complete only when ExitCondition is met. ExitCondition must be specified when using this mode.\n\n# Shadow Concurrency\n\n * Concurrency: Workflow replay concurrency. If not specified, it defaults to 1. For local shadowing, an error will be returned if a value higher than 1 is specified.\n\n\n# Local Shadowing Test\n\nA local shadowing test is similar to the replay test. First create a workflow shadower with optional shadow and replay options, then register the workflows that need to be shadowed. Finally, call the Run method to start the shadowing. The method will return when shadowing has finished or when any non-deterministic error is found.\n\nHere's a simple example. 
The example is also available here.\n\nfunc TestShadowWorkflow(t *testing.T) {\n\toptions := worker.ShadowOptions{\n\t\tWorkflowStartTimeFilter: worker.TimeFilter{\n\t\t\tMinTimestamp: time.Now().Add(-time.Hour),\n\t\t},\n\t\tExitCondition: worker.ShadowExitCondition{\n\t\t\tShadowCount: 10,\n\t\t},\n\t}\n\n // please check the Worker Service page for how to create a cadence service client\n\tservice := buildCadenceClient()\n\tshadower, err := worker.NewWorkflowShadower(service, \"samples-domain\", options, worker.ReplayOptions{}, zaptest.NewLogger(t))\n\tassert.NoError(t, err)\n\n\tshadower.RegisterWorkflowWithOptions(helloWorldWorkflow, workflow.RegisterOptions{Name: \"helloWorld\"})\n\tassert.NoError(t, shadower.Run())\n}\n\n\n\n# Shadowing Worker\n\nNOTE:\n\n * All shadow workflows are running in one Cadence system domain, and right now, every user domain can only have one shadow workflow at a time.\n * The Cadence server used for scanning and getting workflow history will also be the Cadence server for running your shadow workflow. Currently, there's no way to specify different Cadence servers for hosting the shadowing workflow and for scanning/fetching workflows.\n\nYour worker can also be configured to run in shadow mode to run shadow tests as a workflow. This is useful if there are a number of workflows that need to be replayed. Using a workflow can make sure the shadowing won't accidentally fail in the middle, and the replay load can be distributed by deploying more shadow mode workers. It can also be incorporated into your deployment process to make sure there are no failed replay checks before deploying your change to production workers.\n\nWhen running in shadow mode, the normal decision, activity, and session workers will be disabled so that the worker won't update any production workflows. A special shadow activity worker will be started to execute activities for scanning and replaying workflows. The actual shadow workflow logic is controlled by the Cadence server, and your worker is only responsible for scanning and replaying workflows.\n\nReplay succeeded, skipped, and failed metrics will be emitted by your worker when executing the shadow workflow, and you can monitor those metrics to see if there are any incompatible changes.\n\nTo enable the shadow mode, the only change needed is setting the EnableShadowWorker field in worker.Options to true and then specifying the ShadowOptions.\n\nRegistered workflows will be forwarded to the underlying WorkflowReplayer. The DataConverter, WorkflowInterceptorChainFactories, ContextPropagators, and Tracer specified in the worker.Options will also be used as ReplayOptions. Since all shadow workflows are running in one system domain, to avoid conflicts, the actual task list name used will be domain-tasklist.\n\nA sample setup can be found here.
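\n\nFor reference, a minimal sketch of such a setup might look like the following; as in the earlier samples, buildCadenceClient and the domain, task list, and workflow names are placeholders.\n\n// Sketch: a worker running in shadow mode. Only EnableShadowWorker and\n// ShadowOptions differ from a normal worker setup.\nw := worker.New(\n buildCadenceClient(), // placeholder; see the Worker Service page\n \"samples-domain\",\n \"shadow-tasklist\",\n worker.Options{\n EnableShadowWorker: true, // disables the normal decision/activity/session workers\n ShadowOptions: worker.ShadowOptions{\n WorkflowTypes: []string{\"helloWorld\"},\n ExitCondition: worker.ShadowExitCondition{ShadowCount: 100},\n },\n },\n)\nw.RegisterWorkflowWithOptions(helloWorldWorkflow, workflow.RegisterOptions{Name: \"helloWorld\"})\nif err := w.Start(); err != nil {\n panic(err)\n}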
",normalizedContent:"# workflow replay and shadowing\n\nin the versioning section, we mentioned that incompatible changes to workflow definition code could cause non-deterministic issues when processing workflow tasks if versioning is not done correctly. however, it may be hard for you to tell whether a particular change is incompatible and whether versioning logic is needed. to help you identify incompatible changes and catch them before production traffic is impacted, we implemented workflow replayer and workflow shadower.\n\n\n# workflow replayer\n\nworkflow replayer is a testing component for replaying existing workflow histories against a workflow definition. the replaying logic is the same as the one used for processing workflow tasks, so if there are any incompatible changes in the workflow definition, the replay test will fail.\n\n\n# write a replay test\n\n# step 1: create workflow replayer\n\ncreate a workflow replayer by:\n\nreplayer := worker.newworkflowreplayer()\n\n\nor, if a custom data converter, context propagator, interceptor, etc. is used in your workflow:\n\noptions := worker.replayoptions{\n dataconverter: mydataconverter,\n contextpropagators: []workflow.contextpropagator{\n mycontextpropagator,\n },\n workflowinterceptorchainfactories: []interceptors.workflowinterceptorfactory{\n myinterceptorfactory,\n },\n tracer: mytracer,\n}\nreplayer := worker.newworkflowreplayerwithoptions(options)\n\n\n# step 2: register workflow definition\n\nnext, register your workflow definitions as you normally do. make sure workflows are registered the same way as they were when running and generating histories; otherwise the replay will not be able to find the corresponding definition.\n\nreplayer.registerworkflow(myworkflowfunc1)\nreplayer.registerworkflow(myworkflowfunc2, workflow.registeroptions{\n\tname: workflowname,\n})\n\n\n# step 3: prepare workflow histories\n\nreplayer can read workflow history from a local json file or fetch it directly from the cadence server. if you would like to use the first method, you can use the following cli command; otherwise you can skip to the next step.\n\ncadence --do <domain> workflow show --wid <workflow_id> --rid <run_id> --of <output_file_name>\n\n\nthe dumped workflow history will be stored in the file at the path you specified in json format.\n\n# step 4: call the replay method\n\nonce you have the workflow history or have the connection to the cadence server for fetching history, call one of the four replay methods to start the replay test.\n\n// if workflow history has been loaded into memory\nerr := replayer.replayworkflowhistory(logger, history)\n\n// if workflow history is stored in a json file\nerr = replayer.replayworkflowhistoryfromjsonfile(logger, jsonfilename)\n\n// if workflow history is stored in a json file and you only want to replay part of it\n// note: lasteventid can't be set arbitrarily. it must be the end of a history events batch;\n// when in doubt, set it to the eventid of a decisiontaskstarted event.\nerr = replayer.replaypartialworkflowhistoryfromjsonfile(logger, jsonfilename, lasteventid)\n\n// if you want to fetch workflow history directly from cadence server\n// please check the worker service page for how to create a cadence service client\nerr = replayer.replayworkflowexecution(ctx, cadenceserviceclient, logger, domain, execution)\n\n\n# step 5: check returned error\n\nif an error is returned from the replay method, it means there's an incompatible change in the workflow definition, and the error message will contain more information about where the non-deterministic error happens.\n\nnote: currently an error will be returned if there are fewer than 3 events in the history. 
it is because the first 3 events in the history has nothing to do with the workflow code, so replayer can't tell if there's a incompatible change or not.\n\n\n# sample replay test\n\nthis sample is also available in our samples repo at here.\n\nfunc testreplayworkflowhistoryfromfile(t *testing.t) {\n\treplayer := worker.newworkflowreplayer()\n\treplayer.registerworkflow(helloworldworkflow)\n\terr := replayer.replayworkflowhistoryfromjsonfile(zaptest.newlogger(t), \"helloworld.json\")\n\trequire.noerror(t, err)\n}\n\n\n\n# workflow shadower\n\nworkflow replayer works well when verifying the compatibility against a small number of workflow histories. if there are lots of workflows in production need to be verified, dumping all histories manually clearly won't work. directly fetching histories from cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.\n\nworkflow shadower is built on top of workflow replayer to address this problem. the basic idea of shadowing is: scan workflows based on the filters you defined, fetch history for each of workflow in the scan result from cadence server and run the replay test. it can be run either as a test to serve local development purpose or as a workflow in your worker to continuously replay production workflows.\n\n\n# shadow options\n\ncomplete documentation on shadow options which includes default values, accepted values, etc. can be found here. the following sections are just a brief description of each option.\n\n# scan filters\n\n * workflowquery: if you are familiar with our advanced visibility query syntax, you can specify a query directly. if specified, all other scan filters must be left empty.\n * workflowtypes: a list of workflow type names.\n * workflowstatus: a list of workflow status.\n * workflowstarttimefilter: min and max timestamp for workflow start time.\n * samplingrate: sampling workflows from the scan result before executing the replay test.\n\n# shadow exit condition\n\n * expirationinterval: shadowing will exit when the specified interval has passed.\n * shadowcount: shadowing will exit after this number of workflow has been replayed. note: replay maybe skipped due to errors like can't fetch history, history too short, etc. skipped workflows won't be taken account into shadowcount.\n\n# shadow mode\n\n * normal: shadowing will complete after all workflows matches workflowquery (after sampling) have been replayed or when exit condition is met.\n * continuous: a new round of shadowing will be started after all workflows matches workflowquery have been replayed. there will be a 5 min wait period between each round, and currently this wait period is not configurable. shadowing will complete only when exitcondition is met. exitcondition must be specified when using this mode.\n\n# shadow concurrency\n\n * concurrency: workflow replay concurrency. if not specified, will be default to 1. for local shadowing, an error will be returned if a value higher than 1 is specified.\n\n\n# local shadowing test\n\nlocal shadowing test is similar to the replay test. first create a workflow shadower with optional shadow and replay options, then register the workflow that need to be shadowed. finally, call the run method to start the shadowing. the method will return if shadowing has finished or any non-deterministic error is found.\n\nhere's a simple example. 
the example is also available here.\n\nfunc testshadowworkflow(t *testing.t) {\n\toptions := worker.shadowoptions{\n\t\tworkflowstarttimefilter: worker.timefilter{\n\t\t\tmintimestamp: time.now().add(-time.hour),\n\t\t},\n\t\texitcondition: worker.shadowexitcondition{\n\t\t\tshadowcount: 10,\n\t\t},\n\t}\n\n // please check the worker service page for how to create a cadence service client\n\tservice := buildcadenceclient()\n\tshadower, err := worker.newworkflowshadower(service, \"samples-domain\", options, worker.replayoptions{}, zaptest.newlogger(t))\n\tassert.noerror(t, err)\n\n\tshadower.registerworkflowwithoptions(helloworldworkflow, workflow.registeroptions{name: \"helloworld\"})\n\tassert.noerror(t, shadower.run())\n}\n\n\n\n# shadowing worker\n\nnote:\n\n * all shadow workflows are running in one cadence system domain, and right now, every user domain can only have one shadow workflow at a time.\n * the cadence server used for scanning and getting workflow history will also be the cadence server for running your shadow workflow. currently, there's no way to specify different cadence servers for hosting the shadowing workflow and scanning/fetching workflow.\n\nyour worker can also be configured to run in shadow mode to run shadow tests as a workflow. this is useful if there's a number of workflows need to be replayed. using a workflow can make sure the shadowing won't accidentally fail in the middle and the replay load can be distributed by deploying more shadow mode workers. it can also be incorporated into your deployment process to make sure there's no failed replay checks before deploying your change to production workers.\n\nwhen running in shadow mode, the normal decision, activity and session worker will be disabled so that it won't update any production workflows. a special shadow activity worker will be started to execute activities for scanning and replaying workflows. the actual shadow workflow logic is controlled by cadence server and your worker is only responsible for scanning and replaying workflows.\n\nreplay succeed, skipped and failed metrics will be emitted by your worker when executing the shadow workflow and you can monitor those metrics to see if there's any incompatible changes.\n\nto enable the shadow mode, the only change needed is setting the enableshadowworker field in worker.options to true, and then specify the shadowoptions.\n\nregistered workflows will be forwarded to the underlying workflowreplayer. dataconverter, workflowinterceptorchainfactories, contextpropagators, and tracer specified in the worker.options will also be used as replayoptions. 
since all shadow workflows are running in one system domain, to avoid conflict, the actual task list name used will be domain-tasklist.\n\na sample setup can be found here.",charsets:{}},{title:"Workflow Non-deterministic errors",frontmatter:{layout:"default",title:"Workflow Non-deterministic errors",permalink:"/docs/go-client/workflow-non-deterministic-errors",readingShow:"top"},regularPath:"/docs/05-go-client/19-workflow-non-deterministic-error.html",relativePath:"docs/05-go-client/19-workflow-non-deterministic-error.md",key:"v-5df8103c",path:"/docs/go-client/workflow-non-deterministic-errors/",headers:[{level:2,title:"Root cause of non-deterministic errors",slug:"root-cause-of-non-deterministic-errors",normalizedTitle:"root cause of non-deterministic errors",charIndex:40},{level:2,title:"Decision tasks of workflow",slug:"decision-tasks-of-workflow",normalizedTitle:"decision tasks of workflow",charIndex:1533},{level:2,title:"Categories of non-deterministic errors",slug:"categories-of-non-deterministic-errors",normalizedTitle:"categories of non-deterministic errors",charIndex:5698},{level:3,title:"1. Missing decisions",slug:"_1-missing-decisions",normalizedTitle:"1. missing decisions",charIndex:6002},{level:3,title:"2. Extra decisions",slug:"_2-extra-decisions",normalizedTitle:"2. extra decisions",charIndex:6618},{level:3,title:"3. Mismatched decisions",slug:"_3-mismatched-decisions",normalizedTitle:"3. mismatched decisions",charIndex:7562},{level:3,title:"4. Decision state machine panic",slug:"_4-decision-state-machine-panic",normalizedTitle:"4. decision state machine panic",charIndex:8294},{level:2,title:"Common Q&A",slug:"common-q-a",normalizedTitle:"common q&a",charIndex:null},{level:3,title:"I want to change my workflow implementation. What code changes may produce non-deterministic errors?",slug:"i-want-to-change-my-workflow-implementation-what-code-changes-may-produce-non-deterministic-errors",normalizedTitle:"i want to change my workflow implementation. what code changes may produce non-deterministic errors?",charIndex:8843},{level:3,title:"What are some changes that will NOT trigger non-deterministic errors?",slug:"what-are-some-changes-that-will-not-trigger-non-deterministic-errors",normalizedTitle:"what are some changes that will not trigger non-deterministic errors?",charIndex:9548},{level:3,title:"I want to check if my code change will produce non-deterministic errors, how can I debug?",slug:"i-want-to-check-if-my-code-change-will-produce-non-deterministic-errors-how-can-i-debug",normalizedTitle:"i want to check if my code change will produce non-deterministic errors, how can i debug?",charIndex:10476}],codeSwitcherOptions:{},headersStr:"Root cause of non-deterministic errors Decision tasks of workflow Categories of non-deterministic errors 1. Missing decisions 2. Extra decisions 3. Mismatched decisions 4. Decision state machine panic Common Q&A I want to change my workflow implementation. What code changes may produce non-deterministic errors? What are some changes that will NOT trigger non-deterministic errors? 
I want to check if my code change will produce non-deterministic errors, how can I debug?",content:'# Workflow Non-deterministic errors\n\n\n# Root cause of non-deterministic errors\n\nCadence workflows are designed as long-running operations, and therefore the workflow code you write must be deterministic, so that no matter how many times it is executed it always produces the same results.\n\nIn a production environment, your workflow code will run on a distributed system orchestrated by clusters of machines. However, machine failures are inevitable and can happen at any time to your workflow host. If you have a workflow running for a long period of time, maybe months or even years, and it fails due to the loss of a host, it will be resumed on another machine and continue the rest of its execution.\n\nConsider the following diagram where Workflow A is running on Host A but suddenly it crashes.\n\n\n\nWorkflow A will then be picked up by Host B, which continues its execution. This process is called change of workflow ownership. However, after Host B gains ownership of Workflow A, it does not have any information about its historical executions. For example, Workflow A may have executed many activities before it failed. Host B needs to redo all of its history up to the moment of failure. The process of reconstructing the history of a workflow is called history replay.\n\nIn general, any error that occurs during the replay process is called a non-deterministic error. We will explore different types of non-deterministic errors in the sections below, but first let\'s try to understand how Cadence is able to perform the replay of a workflow in case of failure.\n\n\n# Decision tasks of workflow\n\nIn the previous section, we learned that Cadence is able to replay workflow histories in case of failure. We will now learn exactly how Cadence keeps track of histories and how they get replayed when necessary.\n\nWorkflow histories are built based on event-sourcing, and each history event is persisted in Cadence storage. In Cadence, we call these history events decision tasks; they are the foundation of history replay. Most decision tasks have three statuses: Scheduled, Started, and Completed. We will go over the decision tasks produced by each Cadence operation in the section below.\n\nWhen workflow ownership changes hosts and the workflow is replayed, the decision tasks are downloaded from the database and persisted in memory. Then, during the workflow replaying process, if Cadence finds that a decision task already exists for a particular step, it will immediately return the value of that decision task instead of rerunning the whole workflow logic. Let\'s take a look at the following simple workflow implementation and explicitly list all decision tasks produced by this workflow.\n\nfunc SimpleWorkflow(ctx workflow.Context) error {\n\tao := workflow.ActivityOptions{\n\t\t...\n\t}\n\tctx = workflow.WithActivityOptions(ctx, ao)\n\n\tvar a int\n\terr := workflow.ExecuteActivity(ctx, ActivityA).Get(ctx, &a)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tworkflow.Sleep(ctx, time.Minute)\n\n\terr = workflow.ExecuteActivity(ctx, ActivityB, a).Get(ctx, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tworkflow.Sleep(ctx, time.Hour)\n\treturn nil\n}\n\n\nIn this workflow, when it starts, it first executes ActivityA and then assigns the result to an integer. It sleeps for one minute and then uses the integer as an input argument to execute ActivityB. Finally, it sleeps for one hour and completes.\n\nThe following table lists the decision task stack produced by this workflow. 
It may look overwhelming at first, but if you associate each decision task with its corresponding Cadence operation, it becomes self-explanatory.\n\nID DECISION TASK TYPE EXPLANATION\n1 WorkflowStarted the recorded StartWorkflow call\'s data, which usually\n schedules a new decision task immediately\n2 DecisionTaskScheduled workflow worker polling for work\n3 DecisionTaskStarted worker gets the type SimpleWorkflow, looks up registered funcs,\n deserializes input, calls it\n4 DecisionTaskCompleted worker finishes\n5 ActivityTaskScheduled activity available for a worker\n6 ActivityTaskStarted activity worker polls and gets type ActivityA and does the job\n7 ActivityTaskCompleted activity work completed with result of var a\n8 DecisionTaskScheduled triggered by ActivityCompleted. server schedules next task\n9 DecisionTaskStarted \n10 DecisionTaskCompleted \n11 TimerStarted decision scheduled a timer for 1 minute\n12 TimerFired fired after 1 minute\n13 DecisionTaskScheduled triggered by TimerFired\n14 DecisionTaskStarted \n15 DecisionTaskCompleted \n16 ActivityTaskScheduled ActivityB scheduled by decision with param a\n17 ActivityTaskStarted started by worker\n18 ActivityTaskCompleted completed with nil\n19 DecisionTaskScheduled triggered by ActivityCompleted\n20 DecisionTaskStarted \n21 DecisionTaskCompleted \n22 TimerStarted decision scheduled a timer for 1 hour\n23 TimerFired fired after 1 hour\n24 DecisionTaskScheduled triggered by TimerFired\n25 DecisionTaskStarted \n26 DecisionTaskCompleted \n27 WorkflowCompleted completed by decision (the function call returned)\n\nAs you may observe, this stack has a strict order. The whole point of the table above is that if the code you write involves some orchestration by Cadence, either by your worker or by the Cadence server, it produces decision tasks. When your workflow gets replayed, it will strive to reconstruct this stack. Therefore, code changes to your workflow need to make sure that they do not disturb these decision tasks; changes that do will trigger non-deterministic errors. Now let\'s explore the different types of non-deterministic errors and their root causes.\n\n\n# Categories of non-deterministic errors\n\nProgrammatically, Cadence surfaces 4 categories of non-deterministic errors. With an understanding of decision tasks from the previous section, combined with the error messages, you should be able to pinpoint what code changes may yield non-deterministic errors.\n\n\n# 1. Missing decisions\n\nfmt.Errorf("nondeterministic workflow: missing replay decision for %s", util.HistoryEventToString(e))\n\n\nFor source code click here\n\nThis means that after the code change, replay schedules fewer decisions than there are history events. Using the previous history as an example, when the workflow is waiting at the one hour timer (event ID 22), if we delete the line:\n\nworkflow.Sleep(ctx, time.Hour)\n\n\nand restart the worker, then it will run into this error. Because in the history, the workflow has a timer event that is supposed to fire in one hour; during replay, however, there is no logic to schedule that timer.\n\n\n# 2. Extra decisions\n\nfmt.Errorf("nondeterministic workflow: extra replay decision for %s", util.DecisionToString(d))\n\n\nFor source code click here\n\nThis is basically the opposite of the previous case, which means that during replay, Cadence generates more decisions than those in the history events. 
Using the previous history as an example, when the workflow is waiting at the one hour timer (event ID 22), if we change the line:\n\nerr = workflow.ExecuteActivity(ctx, activityB, a).Get(ctx, nil)\n\n\nto\n\nfb := workflow.ExecuteActivity(ctx, activityB, a)\nfc := workflow.ExecuteActivity(ctx, activityC, a)\nerr = fb.Get(ctx, nil)\nif err != nil {\n\treturn err\n}\nerr = fc.Get(ctx, nil)\nif err != nil {\n\treturn err\n}\n\n\nand restart the worker, then it will run into this error. Because in the history, the workflow has scheduled only activityB after the one minute timer; during replay, however, there are two activities scheduled in a decision (in parallel).\n\n\n# 3. Mismatched decisions\n\nfmt.Errorf("nondeterministic workflow: history event is %s, replay decision is %s",util.HistoryEventToString(e), util.DecisionToString(d))\n\n\nFor source code click here\n\nThis means that after the code change, the decision scheduled during replay is different from the one in the history. Using the previous history as an example, when the workflow is waiting at the one hour timer (event ID 22), if we change the line:\n\nerr = workflow.ExecuteActivity(ctx, ActivityB, a).Get(ctx, nil)\n\n\nto\n\nerr = workflow.ExecuteActivity(ctx, ActivityC, a).Get(ctx, nil)\n\n\nand restart the worker, then it will run into this error. Because in the history, the workflow has scheduled ActivityB with input a, but during replay, it schedules ActivityC.\n\n\n# 4. Decision state machine panic\n\nfmt.Sprintf("unknown decision %v, possible causes are nondeterministic workflow definition code"+" or incompatible change in the workflow definition", id)\n\n\nFor source code click here\n\nThis usually means the workflow history is corrupted due to some bug. For example, the same activity can be scheduled multiple times and differentiated by ActivityID, so ActivityIDs for different activities are supposed to be unique in the workflow history. If, however, we have an ActivityID collision, replay will run into this error.\n\n\n# Common Q&A\n\n\n# I want to change my workflow implementation. What code changes may produce non-deterministic errors?\n\nAs we discussed in the previous sections, if your changes alter decision tasks, then they will probably lead to non-deterministic errors. Here are some common changes, categorized by the 4 types mentioned above:\n\n 1. Changing the order of executing Cadence-defined operations, such as activities, timers, child workflows, signals, and cancel requests.\n 2. Changing the duration of a timer.\n 3. Using Go\'s built-in goroutines instead of workflow.Go.\n 4. Using Go\'s built-in channels instead of workflow.Channel.\n 5. Using the built-in time.Sleep instead of workflow.Sleep.\n\n\n# What are some changes that will NOT trigger non-deterministic errors?\n\nCode changes that are free of non-deterministic errors normally do not involve decision tasks in Cadence.\n\n 1. Activity input and output changes do not directly cause non-deterministic errors because the contents are not checked. However, changes may produce serialization errors based on your data converter implementation (type or number-of-arg changes are particularly prone to problems, so we recommend you always use a single struct). Cadence uses json.Marshal and json.Unmarshal (with Decoder.UseNumber()) by default.\n 2. Code changes that do not modify history events are safe to be checked in. For example, logging or metrics implementations.\n 3. Changes to retry policies, as these are not compared. Adding or removing retry policies is also safe. Changes will only take effect on new calls, however, not on ones that have already been scheduled.
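\n\nAs an aside: when one of the incompatible changes listed earlier is unavoidable, the usual remedy is versioning with workflow.GetVersion(), as discussed in the Versioning section. Below is a minimal sketch of gating the ActivityB-to-ActivityC change from category 3 behind a version marker; the change ID "replace-activityB" is illustrative only.\n\n// Sketch: branching on workflow.GetVersion so that histories recorded\n// before the change still replay the old code path.\nv := workflow.GetVersion(ctx, "replace-activityB", workflow.DefaultVersion, 1)\nif v == workflow.DefaultVersion {\n\t// Old histories replay the original code path and still schedule ActivityB.\n\terr = workflow.ExecuteActivity(ctx, ActivityB, a).Get(ctx, nil)\n} else {\n\t// New executions record version 1 and schedule ActivityC instead.\n\terr = workflow.ExecuteActivity(ctx, ActivityC, a).Get(ctx, nil)\n}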
\n\n\n# I want to check if my code change will produce non-deterministic errors, how can I debug?\n\nCadence provides the replayer test, which functions as a unit test on your local machine to replay your workflow history against your potential code change. If you introduce a non-deterministic change and your history triggers it, the test should fail. Check out this page for more details.',normalizedContent:'# workflow non-deterministic errors\n\n\n# root cause of non-deterministic errors\n\ncadence workflows are designed as long-running operations, and therefore the workflow code you write must be deterministic so that no matter how many time it is executed it always produce the same results.\n\nin production environment, your workflow code will run on a distributed system orchestrated by clusters of machines. however, machine failures are inevitable and can happen anytime to your workflow host. if you have a workflow running for long period of time, maybe months even years, and it fails due to loss of a host, it will be resumed on another machine and continue the rest of its execution.\n\nconsider the following diagram where workflow a is running on host a but suddenly it crashes.\n\n\n\nworkflow a then will be picked up by host b and continues its execution. this process is called change of workflow ownership. however, after host b gains ownership of the workflow a, it does not have any information about its historical executions. for example, workflow a may have executed many activities and it fails. host b needs to redo all its history until the moment of failure. the process of reconstructing history of a workflow is called history replay.\n\nin general, any errors occurs during the replay process are called non-deterministic errors. we will explore different types of non-deterministic errors in sections below but first let\'s try to understand how cadence is able to perform the replay of workflow in case of failure.\n\n\n# decision tasks of workflow\n\nin the previous section, we learned that cadence is able to replay workflow histories in case of failure. we will learn exactly how cadence keeps track of histories and how they get replayed when necessary.\n\nworkflow histories are built based on event-sourcing, and each history event are persisted in cadence storage. in cadence, we call these history events decision tasks, the foundation of history replay. most decision tasks have three status - scheduled, started, completed and we will go over decision tasks produced by each cadence operation in section below.\n\nwhen changing a workflow ownership of host and replaying a workflow, the decision tasks are downloaded from database and persisted in memory. then during the workflow replaying process, if cadence finds a decision task already exists for a particular step, it will immediately return the value of a decision task instead of rerunning the whole workflow logic. 
let\'s take a look at the following simple workflow implementation and explicitly list all decision tasks produced by this workflow.\n\nfunc simpleworkflow(ctx workflow.context) error {\n\tao := workflow.activityoptions{\n\t\t...\n\t}\n\tctx = workflow.withactivityoptions(ctx, ao)\n\n\tvar a int\n\terr := workflow.executeactivity(ctx, activitya).get(ctx, &a)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tworkflow.sleep(time.minute)\n\n\terr = workflow.executeactivity(ctx, activityb, a).get(ctx, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tworkflow.sleep(time.hour)\n\treturn nil\n}\n\n\nin this workflow, when it starts, it first execute activitya and then assign the result to an integer. it sleeps for one minute and then use the integer as an input argument to execute activityb. finally it sleeps for one hour and completes.\n\nthe following table lists the decision tasks stack produced by this workflow. it may look overwhelming first but if you associate each decision task with its corresponding cadence operation, it becomes self-explanatory.\n\nid decision task type explanation\n1 workflowstarted the recorded startworkflow call\'s data, which usually\n schedules a new decision task immediately\n2 decisiontaskscheduled workflow worker polling for work\n3 decisiontaskstarted worker gets the type simpleworkflow, lookup registred funcs,\n deserialize input, call it\n4 decisiontaskcompleted worker finishes\n5 activitytaskscheduled activity available for a worker\n6 activitytaskstarted activity worker polls and gets type activitya and do the job\n7 activitytaskcompleted activity work completed with result of var a\n8 decisiontaskscheduled triggered by activitycompleted. server schedule next task\n9 decisiontaskstarted \n10 decisiontaskcompleted \n11 timerstarted decision scheduled a timer for 1 minute\n12 timerfired fired after 1 minute\n13 decisiontaskscheduled triggered by timerfired\n14 decisiontaskstarted \n15 decisiontaskcompleted \n16 activitytaskscheduled activityb scheduled by decision with param a\n17 activitytaskstarted started by worker\n18 activitytaskcompleted completed with nil\n19 decisiontaskscheduled triggered by activitycompleted\n20 decisiontaskstarted \n21 decisiontaskcompleted \n22 timerstarted decision scheduled a timer for 1 hour\n23 timerfired fired after 1 hour\n24 decisiontaskscheduled triggered by timerfired\n25 decisiontaskstarted \n26 decisiontaskcompleted \n27 workflowcompleted completed by decision (the function call returned)\n\nas you may observe that this stack has strict orders. the whole point of the table above is that if the code you write involves some orchestration by cadence, either your worker or cadence server, they produce decision tasks. when your workflow gets replayed, it will strive to reconstruct this stack. therefore, code changes to your workflow needs to make sure that they do not mess up with these decision tasks, which trigger non-deterministic errors. then let\'s explore different types of non-deterministic errors and their root causes.\n\n\n# categories of non-deterministic errors\n\nprogrammatically, cadence surfaces 4 categories of non-deterministic errors. with understanding of decision tasks in the previous section and combining the error messages, you should be able to pinpoint what code changes may yield to non-deterministic errors.\n\n\n# 1. 
missing decisions\n\nfmt.errorf("nondeterministic workflow: missing replay decision for %s", util.historyeventtostring(e))\n\n\nfor source code click here\n\nthis means that during replay, the code schedules fewer decisions than there are history events. using the previous history as an example, when the workflow is waiting at the one-hour timer (event id 22), if we delete the line:\n\nworkflow.sleep(ctx, time.hour)\n\n\nand restart the worker, then it will run into this error, because the history contains a timer event that is supposed to fire in one hour, yet during replay there is no logic to schedule that timer.\n\n\n# 2. extra decisions\n\nfmt.errorf("nondeterministic workflow: extra replay decision for %s", util.decisiontostring(d))\n\n\nfor source code click here\n\nthis is basically the opposite of the previous case: during replay, cadence generates more decisions than there are in the history events. using the previous history as an example, when the workflow is waiting at the one-hour timer (event id 22), if we change the line:\n\nerr = workflow.executeactivity(ctx, activityb, a).get(ctx, nil)\n\n\nto\n\nfb := workflow.executeactivity(ctx, activityb, a)\nfc := workflow.executeactivity(ctx, activityc, a)\nerr = fb.get(ctx, nil)\nif err != nil {\n\treturn err\n}\nerr = fc.get(ctx, nil)\nif err != nil {\n\treturn err\n}\n\n\nand restart the worker, then it will run into this error, because in the history the workflow scheduled only activityb after the one-minute timer, whereas during replay two activities are scheduled in one decision (in parallel).\n\n\n# 3. mismatched decisions\n\nfmt.errorf("nondeterministic workflow: history event is %s, replay decision is %s", util.historyeventtostring(e), util.decisiontostring(d))\n\n\nfor source code click here\n\nthis means that during replay, the decision scheduled is different from the one in the history. using the previous history as an example, when the workflow is waiting at the one-hour timer (event id 22), if we change the line:\n\nerr = workflow.executeactivity(ctx, activityb, a).get(ctx, nil)\n\n\nto\n\nerr = workflow.executeactivity(ctx, activityc, a).get(ctx, nil)\n\n\nand restart the worker, then it will run into this error, because in the history the workflow scheduled activityb with input a, but during replay it schedules activityc.\n\n\n# 4. decision state machine panic\n\nfmt.sprintf("unknown decision %v, possible causes are nondeterministic workflow definition code"+" or incompatible change in the workflow definition", id)\n\n\nfor source code click here\n\nthis usually means the workflow history is corrupted due to some bug. for example, the same activity can be scheduled multiple times and differentiated by activityid, so activityids for different activities are supposed to be unique in the workflow history. if, however, there is an activityid collision, replay will run into this error.\n\n\n# common q&a\n\n\n# i want to change my workflow implementation. what code changes may produce non-deterministic errors?\n\nas we discussed in the previous sections, if your changes alter decision tasks, then they will probably lead to non-deterministic errors. here are some common changes, categorized by the 4 types mentioned above (a short sketch contrasting the last three items follows this list):\n\n 1. changing the order of executing cadence-defined operations, such as activities, timers, child workflows, signals, and cancelrequest.\n 2. changing the duration of a timer\n 3. using the built-in goroutines of golang instead of workflow.go\n 4. using the built-in channels of golang instead of workflow.channel\n 5. using the built-in sleep function instead of workflow.sleep
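to make items 3-5 concrete, here is a minimal, hypothetical sketch of the replay-safe equivalents (the workflow name and the value sent on the channel are made up; the point is the workflow.Go, workflow.NewChannel, and workflow.Sleep calls):

package sample

import (
	"time"

	"go.uber.org/cadence/workflow"
)

// DeterministicWorkflow is a hypothetical example showing the Cadence
// equivalents of goroutines, channels, and sleep. These are recorded as
// decision tasks and can therefore be replayed deterministically, while
// `go func(){...}()`, `make(chan int)`, and time.Sleep cannot.
func DeterministicWorkflow(ctx workflow.Context) error {
	ch := workflow.NewChannel(ctx) // instead of make(chan int)

	workflow.Go(ctx, func(ctx workflow.Context) { // instead of go func(){...}()
		ch.Send(ctx, 42)
	})

	var v int
	ch.Receive(ctx, &v)

	// instead of time.Sleep; this produces TimerStarted/TimerFired events
	return workflow.Sleep(ctx, time.Minute)
}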
# what are some changes that will not trigger non-deterministic errors?\n\ncode changes that are free of non-deterministic errors normally do not involve decision tasks in cadence.\n\n 1. activity input and output changes do not directly cause non-deterministic errors because the contents are not checked. however, changes may produce serialization errors depending on your data converter implementation (type or number-of-arg changes are particularly prone to problems, so we recommend you always use a single struct). cadence uses json.marshal and json.unmarshal (with decoder.usenumber()) by default.\n 2. code changes that do not modify history events are safe to check in. for example, logging or metrics implementations.\n 3. changes of retry policies, as these are not compared. adding or removing retry policies is also safe. however, changes will only take effect on new calls, not on ones that have already been scheduled.\n\n\n# i want to check if my code change will produce non-deterministic errors, how can i debug?\n\ncadence provides a replayer test, which functions as a unit test on your local machine, replaying your workflow history against your potential code change. if you introduce a non-deterministic change and your history triggers it, the test should fail. check out this page for more details.
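as a rough illustration of such a replayer test, here is a minimal, hypothetical go sketch for the simple workflow above. it assumes you have exported a real execution's history to a local history.json file (for example with the CLI's workflow show command); the file name and test wiring are yours to adapt:

package sample

import (
	"testing"

	"go.uber.org/cadence/worker"
	"go.uber.org/zap/zaptest"
)

// TestReplay fails if the current SimpleWorkflow code can no longer
// produce the same decisions that are recorded in history.json.
func TestReplay(t *testing.T) {
	replayer := worker.NewWorkflowReplayer()
	replayer.RegisterWorkflow(SimpleWorkflow) // hypothetical: the same workflow code that produced the history

	err := replayer.ReplayWorkflowHistoryFromJSONFile(zaptest.NewLogger(t), "history.json")
	if err != nil {
		t.Fatal(err)
	}
}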
',charsets:{}},{title:"Introduction",frontmatter:{layout:"default",title:"Introduction",permalink:"/docs/cli",readingShow:"top"},regularPath:"/docs/06-cli/",relativePath:"docs/06-cli/index.md",key:"v-6fa6d57b",path:"/docs/cli/",headers:[{level:2,title:"Using the CLI",slug:"using-the-cli",normalizedTitle:"using the cli",charIndex:237},{level:3,title:"Homebrew",slug:"homebrew",normalizedTitle:"homebrew",charIndex:255},{level:3,title:"Docker",slug:"docker",normalizedTitle:"docker",charIndex:492},{level:3,title:"Build it yourself",slug:"build-it-yourself",normalizedTitle:"build it yourself",charIndex:2034},{level:2,title:"Documentation",slug:"documentation",normalizedTitle:"documentation",charIndex:2418},{level:2,title:"Environment variables",slug:"environment-variables",normalizedTitle:"environment variables",charIndex:6296},{level:2,title:"Quick Start",slug:"quick-start",normalizedTitle:"quick start",charIndex:6577},{level:3,title:"Domain operation examples",slug:"domain-operation-examples",normalizedTitle:"domain operation examples",charIndex:6936},{level:3,title:"Workflow operation examples",slug:"workflow-operation-examples",normalizedTitle:"workflow operation examples",charIndex:7463}],codeSwitcherOptions:{},headersStr:"Using the CLI Homebrew Docker Build it yourself Documentation Environment variables Quick Start Domain operation examples Workflow operation examples",content:'# Command Line Interface\n\nThe Cadence CLI is a command-line tool you can use to perform various operations on a Cadence server. It can perform domain operations such as register, update, and describe, as well as workflow operations like start, show history, and signal.\n\n\n# Using the CLI\n\n\n# Homebrew\n\nbrew install cadence-workflow\n\n\nAfter the installation is done, you can use the CLI:\n\ncadence --help\n\n\nThis will always install the latest version. Follow these instructions if you need to install older versions of the Cadence CLI.\n\n\n# Docker\n\nThe Cadence CLI can be used directly from the Docker Hub image ubercadence/cli or by building the tool locally.\n\nExample of using the docker image to describe a domain:\n\ndocker run -it --rm ubercadence/cli:master --address --domain samples-domain domain describe\n\n\nmaster will be the latest CLI binary from the project. But you can specify a version to best match your server version:\n\ndocker run -it --rm ubercadence/cli: --address --domain samples-domain domain describe\n\n\nFor example docker run --rm ubercadence/cli:0.21.3 --domain samples-domain domain describe will be the CLI that is released as part of the v0.21.3 release. See the docker hub page for all the CLI image tags. Note that CLI version 0.20.0 works for all server versions from 0.12 to 0.19 as well. That\'s because the CLI version didn\'t change in those versions.\n\nNOTE: On Docker versions 18.03 and later, you may get a "connection refused" error when connecting to a local server. You can work around this by setting the host to "host.docker.internal" (see here for more info).\n\ndocker run -it --rm ubercadence/cli:master --address host.docker.internal:7933 --domain samples-domain domain describe\n\n\nNOTE: Be sure to update your image when you want to try new features: docker pull ubercadence/cli:master\n\nNOTE: If you are running the docker-compose Cadence server, you can also log on to the container to execute the CLI:\n\ndocker exec -it docker_cadence_1 /bin/bash\n\n# cadence --address $(hostname -i):7933 --do samples domain register\n\n\n\n# Build it yourself\n\nTo build the tool locally, clone the Cadence server repo, check out the version tag (e.g. git checkout v0.21.3) and run make tools. This produces an executable called cadence. With a local build, the same command to describe a domain would look like this:\n\ncadence --domain samples-domain domain describe\n\n\nAlternatively, you can build the CLI image, see instructions\n\n\n# Documentation\n\nCLI commands are documented by --help or -h at all levels:\n\n$cadence --help\nNAME:\n cadence - A command-line tool for cadence users\n\nUSAGE:\n cadence [global options] command [command options] [arguments...]\n\nVERSION:\n 0.18.4\n\nCOMMANDS:\n domain, d Operate cadence domain\n workflow, wf Operate cadence workflow\n tasklist, tl Operate cadence tasklist\n admin, adm Run admin operation\n cluster, cl Operate cadence cluster\n help, h Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n --address value, --ad value host:port for cadence frontend service [$CADENCE_CLI_ADDRESS]\n --domain value, --do value cadence workflow domain [$CADENCE_CLI_DOMAIN]\n --context_timeout value, --ct value optional timeout for context of RPC call in seconds (default: 5) [$CADENCE_CONTEXT_TIMEOUT]\n --help, -h show help\n --version, -v print the version\n\n\nAnd\n\n$cadence workflow -h\nNAME:\n cadence workflow - Operate cadence workflow\n\nUSAGE:\n cadence workflow command [command options] [arguments...]\n\nCOMMANDS:\n activity, act operate activities of workflow\n show show workflow history\n showid show workflow history with given workflow_id and run_id (a shortcut of `show -w -r `). 
run_id is only required for archived history\n start start a new workflow execution\n run start a new workflow execution and get workflow progress\n cancel, c cancel a workflow execution\n signal, s signal a workflow execution\n signalwithstart signal the current open workflow if exists, or attempt to start a new run based on IDReusePolicy and signals it\n terminate, term terminate a workflow execution\n list, l list open or closed workflow executions\n listall, la list all open or closed workflow executions\n listarchived list archived workflow executions\n scan, sc, scanall scan workflow executions (need to enable Cadence server on ElasticSearch). It will be faster than listall, but results are not sorted.\n count, cnt count number of workflow executions (need to enable Cadence server on ElasticSearch)\n query query workflow execution\n stack query workflow execution with __stack_trace as query type\n describe, desc show information of workflow execution\n describeid, descid show information of workflow execution with given workflow_id and optional run_id (a shortcut of `describe -w -r `)\n observe, ob show the progress of workflow history\n observeid, obid show the progress of workflow history with given workflow_id and optional run_id (a shortcut of `observe -w -r `)\n reset, rs reset the workflow, by either eventID or resetType.\n reset-batch reset workflow in batch by resetType: LastDecisionCompleted,LastContinuedAsNew,BadBinary,DecisionCompletedTime,FirstDecisionScheduled,LastDecisionScheduled,FirstDecisionCompleted. To get base workflowIDs/runIDs to reset, the source is an input file or a visibility query.\n batch batch operation on a list of workflows from query.\n\nOPTIONS:\n --help, -h show help\n\n\n$cadence wf signal -h\nNAME:\n cadence workflow signal - signal a workflow execution\n\nUSAGE:\n cadence workflow signal [command options] [arguments...]\n\nOPTIONS:\n --workflow_id value, --wid value, -w value WorkflowID\n --run_id value, --rid value, -r value RunID\n --name value, -n value SignalName\n --input value, -i value Input for the signal, in JSON format.\n --input_file value, --if value Input for the signal from JSON file.\n\n\n\nAnd so on.\n\nThe example commands below will use cadence for brevity.\n\n\n# Environment variables\n\nSetting environment variables for repeated parameters can shorten the commands.\n\n * CADENCE_CLI_ADDRESS - host:port for the Cadence frontend service; the default is for the local server\n * CADENCE_CLI_DOMAIN - the default domain, so you don\'t need to specify --domain\n\n\n# Quick Start\n\nRun cadence for help on top level commands and global options. Run cadence domain for help on domain operations. Run cadence workflow for help on workflow operations. Run cadence tasklist for help on tasklist operations. (cadence help, cadence help [domain|workflow] will also print help messages.)\n\nNote: make sure you have a Cadence server running before using the CLI.\n\n\n# Domain operation examples\n\n * Register a new domain named "samples-domain":\n\ncadence --domain samples-domain domain register\n# OR using short alias\ncadence --do samples-domain d re \n\n\nIf your Cadence cluster has enabled global domain (XDC replication), then you have to specify the replication settings when registering a domain:\n\ncadence --domain samples-domain domain register --active_cluster clusterNameA --clusters clusterNameA clusterNameB\n\n\n * View "samples-domain" details:\n\ncadence --domain samples-domain domain describe\n\n\n\n# Workflow operation examples\n\nThe following examples assume the CADENCE_CLI_DOMAIN 
environment variable is set.\n\n# Run workflow\n\nStart a workflow and see its progress. This command doesn\'t finish until the workflow completes.\n\ncadence workflow run --tl helloWorldGroup --wt main.Workflow --et 60 -i \'"cadence"\'\n\n# view help messages for workflow run\ncadence workflow run -h\n\n\nBrief explanation: To run a workflow, the user must specify the following:\n\n 1. Tasklist name (--tl)\n 2. Workflow type (--wt)\n 3. Execution start to close timeout in seconds (--et)\n 4. Input in JSON format (-i) (optional)\n\nThis example uses this cadence-samples workflow and takes a string as input with the -i \'"cadence"\' parameter. Single quotes (\'\') are used to wrap input as JSON.\n\nNote: You need to start the worker so that the workflow can make progress. (Run make && ./bin/helloworld -m worker in cadence-samples to start the worker.)\n\n# Show running workers of a tasklist\n\ncadence tasklist desc --tl helloWorldGroup\n\n\n# Start workflow\n\ncadence workflow start --tl helloWorldGroup --wt main.Workflow --et 60 -i \'"cadence"\'\n\n# view help messages for workflow start\ncadence workflow start -h\n\n# for a workflow with multiple inputs, separate each json with space/newline like\ncadence workflow start --tl helloWorldGroup --wt main.WorkflowWith3Args --et 60 -i \'"your_input_string" 123 {"Name":"my-string", "Age":12345}\'\n\n\nThe start command is similar to the run command, but immediately returns the workflow_id and run_id after starting the workflow. Use the show command to view the workflow\'s history/progress.\n\n# Reuse the same workflow id when starting/running a workflow\n\nUse the option --workflowidreusepolicy or --wrp to configure the reuse policy. Option 0 AllowDuplicateFailedOnly: Allow starting a workflow using the same workflow ID when a workflow with the same workflow ID is not already running and the last execution close state is one of [terminated, cancelled, timedout, failed]. Option 1 AllowDuplicate: Allow starting a workflow using the same workflow ID when a workflow with the same workflow ID is not already running. Option 2 RejectDuplicate: Do not allow starting a workflow using the same workflow ID as a previous workflow.\n\n# use AllowDuplicateFailedOnly option to start a workflow\ncadence workflow start --tl helloWorldGroup --wt main.Workflow --et 60 -i \'"cadence"\' --wid "" --wrp 0\n\n# use AllowDuplicate option to run a workflow\ncadence workflow run --tl helloWorldGroup --wt main.Workflow --et 60 -i \'"cadence"\' --wid "" --wrp 1\n\n\n# Start a workflow with a memo\n\nMemos are immutable key/value pairs that can be attached to a workflow run when starting the workflow. These are visible when listing workflows. 
More information on memos can be found here.\n\ncadence wf start -tl helloWorldGroup -wt main.Workflow -et 60 -i \'"cadence"\' -memo_key \'"Service" "Env" "Instance"\' -memo \'"serverName1" "test" 5\'\n\n\n# Show workflow history\n\ncadence workflow show -w 3ea6b242-b23c-4279-bb13-f215661b4717 -r 866ae14c-88cf-4f1e-980f-571e031d71b0\n# a shortcut of this is (without -w -r flag)\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n# if run_id is not provided, it will show the latest run history of that workflow_id\ncadence workflow show -w 3ea6b242-b23c-4279-bb13-f215661b4717\n# a shortcut of this is\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717\n\n\n# Show workflow execution information\n\ncadence workflow describe -w 3ea6b242-b23c-4279-bb13-f215661b4717 -r 866ae14c-88cf-4f1e-980f-571e031d71b0\n# a shortcut of this is (without -w -r flag)\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n# if run_id is not provided, it will show the latest workflow execution of that workflow_id\ncadence workflow describe -w 3ea6b242-b23c-4279-bb13-f215661b4717\n# a shortcut of this is\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717\n\n\n# List closed or open workflow executions\n\ncadence workflow list\n\n# default will only show one page, to view more items, use --more flag\ncadence workflow list -m\n\n\nUse --query to list with an SQL-like query:\n\ncadence workflow list --query "WorkflowType=\'main.SampleParentWorkflow\' AND CloseTime = missing "\n\n\nThis will return all open workflows with WorkflowType "main.SampleParentWorkflow".\n\n# Query workflow execution\n\n# use custom query type\ncadence workflow query -w -r --qt \n\n# use built-in query type "__stack_trace" which is supported by Cadence client library\ncadence workflow query -w -r --qt __stack_trace\n# a shortcut to query using __stack_trace is (without --qt flag)\ncadence workflow stack -w -r \n\n\n# Signal, cancel, terminate workflow\n\n# signal\ncadence workflow signal -w -r -n -i \'"signal-value"\'\n\n# cancel\ncadence workflow cancel -w -r \n\n# terminate\ncadence workflow terminate -w -r --reason\n\n\nTerminating a running workflow will record a WorkflowExecutionTerminated event as the closing event in the history. No more decision tasks will be scheduled for a terminated workflow. Canceling a running workflow will record a WorkflowExecutionCancelRequested event in the history, and a new decision task will be scheduled. The workflow has a chance to do some clean up work after cancellation.
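To illustrate that clean-up opportunity on the worker side, here is a minimal, hypothetical Go sketch (the workflow and both activities are invented). It assumes workflow.NewDisconnectedContext, which creates a child context that ignores the parent's cancellation, so a cleanup activity can still be scheduled after the workflow is canceled:

package sample

import (
	"context"
	"time"

	"go.uber.org/cadence/workflow"
)

// LongRunningActivity and CleanupActivity are hypothetical stubs.
func LongRunningActivity(ctx context.Context) error { return nil }
func CleanupActivity(ctx context.Context) error     { return nil }

// CancellableWorkflow sketches the clean-up-after-cancellation pattern.
func CancellableWorkflow(ctx workflow.Context) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    10 * time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	defer func() {
		// If ctx was canceled, it can no longer schedule activities, so
		// run the cleanup on a disconnected context that ignores the
		// cancellation of its parent.
		cleanupCtx, cancel := workflow.NewDisconnectedContext(ctx)
		defer cancel()
		_ = workflow.ExecuteActivity(cleanupCtx, CleanupActivity).Get(cleanupCtx, nil)
	}()

	return workflow.ExecuteActivity(ctx, LongRunningActivity).Get(ctx, nil)
}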
# Signal, cancel, terminate workflows as a batch job\n\nA batch job is based on a List Workflow Query (--query). It supports signal, cancel, and terminate as batch job types. For terminating as a batch job, it will terminate the children recursively.\n\nStart a batch job (using signal as the batch type):\n\ncadence --do samples-domain wf batch start --query "WorkflowType=\'main.SampleParentWorkflow\' AND CloseTime=missing" --reason "test" --bt signal --sig testname\nThis batch job will be operating on 5 workflows.\nPlease confirm[Yes/No]:yes\n{\n "jobID": "",\n "msg": "batch job is started"\n}\n\n\n\nYou need to remember the JobID or use the List command to get all your batch jobs:\n\ncadence --do samples-domain wf batch list\n\n\nDescribe the progress of a batch job:\n\ncadence --do samples-domain wf batch desc -jid \n\n\nTerminate a batch job:\n\ncadence --do samples-domain wf batch terminate -jid \n\n\nNote that the operations performed by a batch job will not be rolled back by terminating the batch. However, you can use reset to roll back your workflows.\n\n# Restart, reset workflow\n\nThe Reset command allows resetting a workflow to a particular point and continuing to run from there. There are a lot of use cases:\n\n * Rerun a failed workflow from the beginning with the same start parameters.\n * Rerun a failed workflow from the failing point without losing the achieved progress (history).\n * After deploying new code, reset an open workflow to let it run through different flows.\n\nYou can reset to some predefined types:\n\ncadence workflow reset -w -r --reset_type --reason "some_reason"\n\n\n * FirstDecisionCompleted: reset to the beginning of the history.\n * LastDecisionCompleted: reset to the end of the history.\n * LastContinuedAsNew: reset to the end of the history for the previous run.\n\nIf you are familiar with Cadence history events, you can also reset to any decision finish event by using:\n\ncadence workflow reset -w -r --event_id --reason "some_reason"\n\n\nSome things to note:\n\n * When reset, a new run will be kicked off with the same workflowID. But if there is a running execution for the workflow (workflowID), the current run will be terminated.\n * decision_finish_event_id is the ID of an event of the type: DecisionTaskComplete/DecisionTaskFailed/DecisionTaskTimeout.\n * To restart a workflow from the beginning, reset to the first decision finish event.\n\nTo reset multiple workflows, you can use the batch reset command:\n\ncadence workflow reset-batch --input_file --reset_type --reason "some_reason"\n\n\n# Recovery from bad deployment -- auto-reset workflow\n\nIf a bad deployment lets a workflow run into a wrong state, you might want to reset the workflow to the point where the bad deployment started to run. But usually it is not easy to find out all the impacted workflows, and the right reset point for each one. In this case, auto-reset will automatically reset all the impacted workflows given a bad deployment identifier.\n\nLet\'s get familiar with some concepts. Each deployment will have an identifier; we call it the "Binary Checksum" as it is usually generated by the md5sum of a binary file. For a workflow, each binary checksum will be associated with an auto-reset point, which contains a runID, an eventID, and the created_time at which that binary/deployment made the first decision for the workflow.\n\nTo find out which binary checksum of the bad deployment to reset, you should be aware of at least one workflow running into a bad state. Use the describe command with the --reset_points_only option to show all the reset points:\n\ncadence wf desc -w --reset_points_only\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n| BINARY CHECKSUM | CREATE TIME | RUNID | EVENTID |\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n| c84c5afa552613a83294793f4e664a7f | 2019-05-24 10:01:00.398455019 | 2dd29ab7-2dd8-4668-83e0-89cae261cfb1 | 4 |\n| aae748fdc557a3f873adbe1dd066713f | 2019-05-24 11:01:00.067691445 | d42d21b8-2adb-4313-b069-3837d44d6ce6 | 4 |\n...\n...\n\n\nThen use this command to tell Cadence to auto-reset all workflows impacted by the bad deployment. The command will store the bad binary checksum into the domain info and trigger a process to reset all your workflows.\n\ncadence --do domain update --add_bad_binary aae748fdc557a3f873adbe1dd066713f --reason "rollback bad deployment"\n\n\nAs you add the bad binary checksum to your domain, Cadence will not dispatch any decision tasks to the bad binary. So make sure that you have rolled back to a good deployment (or rolled out new bits with bug fixes). 
Otherwise your can\'t make any progress after auto-reset.',normalizedContent:'# command line interface\n\nthe cadence is a command-line tool you can use to perform various on a cadence server. it can perform operations such as register, update, and describe as well as operations like start , show history, and .\n\n\n# using the cli\n\n\n# homebrew\n\nbrew install cadence-workflow\n\n\nafter the installation is done, you can use cli:\n\ncadence --help\n\n\nthis will always install the latest version. follow this instructions if you need to install older versions of cadence cli.\n\n\n# docker\n\nthe cadence can be used directly from the docker hub image ubercadence/cli or by building the tool locally.\n\nexample of using the docker image to describe a\n\ndocker run -it --rm ubercadence/cli:master --address --domain samples-domain domain describe\n\n\nmaster will be the latest cli binary from the project. but you can specify a version to best match your server version:\n\ndocker run -it --rm ubercadence/cli: --address --domain samples-domain domain describe\n\n\nfor example docker run --rm ubercadence/cli:0.21.3 --domain samples-domain domain describe will be the cli that is released as part of the v0.21.3 release. see docker hub page for all the cli image tags. note that cli versions of 0.20.0 works for all server versions of 0.12 to 0.19 as well. that\'s because the cli version doesn\'t change in those versions.\n\nnote: on docker versions 18.03 and later, you may get a "connection refused" error when connecting to local server. you can work around this by setting the host to "host.docker.internal" (see here for more info).\n\ndocker run -it --rm ubercadence/cli:master --address host.docker.internal:7933 --domain samples-domain domain describe\n\n\nnote: be sure to update your image when you want to try new features: docker pull ubercadence/cli:master\n\nnote: if you are running docker-compose cadence server, you can also logon to the container to execute cli:\n\ndocker exec -it docker_cadence_1 /bin/bash\n\n# cadence --address $(hostname -i):7933 --do samples domain register\n\n\n\n# build it yourself\n\nto build the tool locally, clone the cadence server repo, check out the version tag (e.g. git checkout v0.21.3) and run make tools. this produces an executable called cadence. 
with a local build, the same command to describe a would look like this:\n\ncadence --domain samples-domain domain describe\n\n\nalternatively, you can build the cli image, see instructions\n\n\n# documentation\n\ncli are documented by --help or -h in any tab of all levels:\n\n$cadence --help\nname:\n cadence - a command-line tool for cadence users\n\nusage:\n cadence [global options] command [command options] [arguments...]\n\nversion:\n 0.18.4\n\ncommands:\n domain, d operate cadence domain\n workflow, wf operate cadence workflow\n tasklist, tl operate cadence tasklist\n admin, adm run admin operation\n cluster, cl operate cadence cluster\n help, h shows a list of commands or help for one command\n\nglobal options:\n --address value, --ad value host:port for cadence frontend service [$cadence_cli_address]\n --domain value, --do value cadence workflow domain [$cadence_cli_domain]\n --context_timeout value, --ct value optional timeout for context of rpc call in seconds (default: 5) [$cadence_context_timeout]\n --help, -h show help\n --version, -v print the version\n\n\nand\n\n$cadence workflow -h\nname:\n cadence workflow - operate cadence workflow\n\nusage:\n cadence workflow command [command options] [arguments...]\n\ncommands:\n activity, act operate activities of workflow\n show show workflow history\n showid show workflow history with given workflow_id and run_id (a shortcut of `show -w -r `). run_id is only required for archived history\n start start a new workflow execution\n run start a new workflow execution and get workflow progress\n cancel, c cancel a workflow execution\n signal, s signal a workflow execution\n signalwithstart signal the current open workflow if exists, or attempt to start a new run based on idresuepolicy and signals it\n terminate, term terminate a new workflow execution\n list, l list open or closed workflow executions\n listall, la list all open or closed workflow executions\n listarchived list archived workflow executions\n scan, sc, scanall scan workflow executions (need to enable cadence server on elasticsearch). 
it will be faster than listall, but result are not sorted.\n count, cnt count number of workflow executions (need to enable cadence server on elasticsearch)\n query query workflow execution\n stack query workflow execution with __stack_trace as query type\n describe, desc show information of workflow execution\n describeid, descid show information of workflow execution with given workflow_id and optional run_id (a shortcut of `describe -w -r `)\n observe, ob show the progress of workflow history\n observeid, obid show the progress of workflow history with given workflow_id and optional run_id (a shortcut of `observe -w -r `)\n reset, rs reset the workflow, by either eventid or resettype.\n reset-batch reset workflow in batch by resettype: lastdecisioncompleted,lastcontinuedasnew,badbinary,decisioncompletedtime,firstdecisionscheduled,lastdecisionscheduled,firstdecisioncompletedto get base workflowids/runids to reset, source is from input file or visibility query.\n batch batch operation on a list of workflows from query.\n\noptions:\n --help, -h show help\n\n\n$cadence wf signal -h\nname:\n cadence workflow signal - signal a workflow execution\n\nusage:\n cadence workflow signal [command options] [arguments...]\n\noptions:\n --workflow_id value, --wid value, -w value workflowid\n --run_id value, --rid value, -r value runid\n --name value, -n value signalname\n --input value, -i value input for the signal, in json format.\n --input_file value, --if value input for the signal from json file.\n\n\n\nand etc.\n\nthe example commands below will use cadence for brevity.\n\n\n# environment variables\n\nsetting environment variables for repeated parameters can shorten the commands.\n\n * cadence_cli_address - host:port for cadence frontend service, the default is for the local server\n * cadence_cli_domain - default , so you don\'t need to specify --domain\n\n\n# quick start\n\nrun cadence for help on top level commands and global options run cadence domain for help on operations run cadence workflow for help on operations run cadence tasklist for help on tasklist operations (cadence help, cadence help [domain|workflow] will also print help messages)\n\nnote: make sure you have a cadence server running before using\n\n\n# domain operation examples\n\n * register a new named "samples-domain":\n\ncadence --domain samples-domain domain register\n# or using short alias\ncadence --do samples-domain d re \n\n\nif your cadence cluster has enable global domain(xdc replication), then you have to specify the replicaiton settings when registering a domain:\n\ncadence --domains amples-domain domain register --active_cluster clusternamea --clusters clusternamea clusternameb\n\n\n * view "samples-domain" details:\n\ncadence --domain samples-domain domain describe\n\n\n\n# workflow operation examples\n\nthe following examples assume the cadence_cli_domain environment variable is set.\n\n# run workflow\n\nstart a and see its progress. this command doesn\'t finish until completes.\n\ncadence workflow run --tl helloworldgroup --wt main.workflow --et 60 -i \'"cadence"\'\n\n# view help messages for workflow run\ncadence workflow run -h\n\n\nbrief explanation: to run a , the user must specify the following:\n\n 1. tasklist name (--tl)\n 2. workflow type (--wt)\n 3. execution start to close timeout in seconds (--et)\n 4. input in json format (--i) (optional)\n\ns example uses this cadence-samples workflow and takes a string as input with the -i \'"cadence"\' parameter. 
single quotes (\'\') are used to wrap input as json.\n\nnote: you need to start the so that the can make progress. (run make && ./bin/helloworld -m worker in cadence-samples to start the )\n\n# show running workers of a tasklist\n\ncadence tasklist desc --tl helloworldgroup\n\n\n# start workflow\n\ncadence workflow start --tl helloworldgroup --wt main.workflow --et 60 -i \'"cadence"\'\n\n# view help messages for workflow start\ncadence workflow start -h\n\n# for a workflow with multiple inputs, separate each json with space/newline like\ncadence workflow start --tl helloworldgroup --wt main.workflowwith3args --et 60 -i \'"your_input_string" 123 {"name":"my-string", "age":12345}\'\n\n\nthe start command is similar to the run command, but immediately returns the workflow_id and run_id after starting the . use the show command to view the \'s history/progress.\n\n# reuse the same workflow id when starting/running a workflow\n\nuse option --workflowidreusepolicy or --wrp to configure the reuse policy. option 0 allowduplicatefailedonly: allow starting a using the same when a with the same is not already running and the last execution close state is one of [terminated, cancelled, timedout, failed]. option 1 allowduplicate: allow starting a using the same when a with the same is not already running. option 2 rejectduplicate: do not allow starting a using the same as a previous .\n\n# use allowduplicatefailedonly option to start a workflow\ncadence workflow start --tl helloworldgroup --wt main.workflow --et 60 -i \'"cadence"\' --wid "" --wrp 0\n\n# use allowduplicate option to run a workflow\ncadence workflow run --tl helloworldgroup --wt main.workflow --et 60 -i \'"cadence"\' --wid "" --wrp 1\n\n\n# start a workflow with a memo\n\nmemos are immutable key/value pairs that can be attached to a run when starting the . these are visible when listing . 
more information on memos can be found here.\n\ncadence wf start -tl helloworldgroup -wt main.workflow -et 60 -i \'"cadence"\' -memo_key ‘“service” “env” “instance”’ -memo ‘“servername1” “test” 5’\n\n\n# show workflow history\n\ncadence workflow show -w 3ea6b242-b23c-4279-bb13-f215661b4717 -r 866ae14c-88cf-4f1e-980f-571e031d71b0\n# a shortcut of this is (without -w -r flag)\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n# if run_id is not provided, it will show the latest run history of that workflow_id\ncadence workflow show -w 3ea6b242-b23c-4279-bb13-f215661b4717\n# a shortcut of this is\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717\n\n\n# show workflow execution information\n\ncadence workflow describe -w 3ea6b242-b23c-4279-bb13-f215661b4717 -r 866ae14c-88cf-4f1e-980f-571e031d71b0\n# a shortcut of this is (without -w -r flag)\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n# if run_id is not provided, it will show the latest workflow execution of that workflow_id\ncadence workflow describe -w 3ea6b242-b23c-4279-bb13-f215661b4717\n# a shortcut of this is\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717\n\n\n# list closed or open workflow executions\n\ncadence workflow list\n\n# default will only show one page, to view more items, use --more flag\ncadence workflow list -m\n\n\nuse --query to list with sql like\n\ncadence workflow list --query "workflowtype=\'main.sampleparentworkflow\' and closetime = missing "\n\n\nthis will return all open with workflowtype as "main.sampleparentworkflow".\n\n# query workflow execution\n\n# use custom query type\ncadence workflow query -w -r --qt \n\n# use build-in query type "__stack_trace" which is supported by cadence client library\ncadence workflow query -w -r --qt __stack_trace\n# a shortcut to query using __stack_trace is (without --qt flag)\ncadence workflow stack -w -r \n\n\n# signal, cancel, terminate workflow\n\n# signal\ncadence workflow signal -w -r -n -i \'"signal-value"\'\n\n# cancel\ncadence workflow cancel -w -r \n\n# terminate\ncadence workflow terminate -w -r --reason\n\n\nterminating a running will record a workflowexecutionterminated as the closing in the history. no more will be scheduled for a terminated . canceling a running will record a workflowexecutioncancelrequested in the history, and a new will be scheduled. the has a chance to do some clean up work after cancellation.\n\n# signal, cancel, terminate workflows as a batch job\n\nbatch job is based on list workflow query(--query). it supports , cancel and terminate as batch job type. for terminating as batch job, it will terminte the children recursively.\n\nstart a batch job(using as batch type):\n\ncadence --do samples-domain wf batch start --query "workflowtype=\'main.sampleparentworkflow\' and closetime=missing" --reason "test" --bt signal --sig testname\nthis batch job will be operating on 5 workflows.\nplease confirm[yes/no]:yes\n{\n "jobid": "",\n "msg": "batch job is started"\n}\n\n\n\nyou need to remember the jobid or use list command to get all your batch jobs:\n\ncadence --do samples-domain wf batch list\n\n\ndescribe the progress of a batch job:\n\ncadence --do samples-domain wf batch desc -jid \n\n\nterminate a batch job:\n\ncadence --do samples-domain wf batch terminate -jid \n\n\nnote that the operation performed by a batch will not be rolled back by terminating the batch. 
however, you can use reset to rollback your .\n\n# restart, reset workflow\n\nthe reset command allows resetting a to a particular point and continue running from there. there are a lot of use cases:\n\n * rerun a failed from the beginning with the same start parameters.\n * rerun a failed from the failing point without losing the achieved progress(history).\n * after deploying new code, reset an open to let the run to different flows.\n\nyou can reset to some predefined types:\n\ncadence workflow reset -w -r --reset_type --reason "some_reason"\n\n\n * firstdecisioncompleted: reset to the beginning of the history.\n * lastdecisioncompleted: reset to the end of the history.\n * lastcontinuedasnew: reset to the end of the history for the previous run.\n\nif you are familiar with the cadence history , you can also reset to any finish by using:\n\ncadence workflow reset -w -r --event_id --reason "some_reason"\n\n\nsome things to note:\n\n * when reset, a new run will be kicked off with the same workflowid. but if there is a running execution for the workflow(workflowid), the current run will be terminated.\n * decision_finish_event_id is the id of of the type: decisiontaskcomplete/decisiontaskfailed/decisiontasktimeout.\n * to restart a from the beginning, reset to the first finish .\n\nto reset multiple , you can use batch reset command:\n\ncadence workflow reset-batch --input_file --reset_type --reason "some_reason"\n\n\n# recovery from bad deployment -- auto-reset workflow\n\nif a bad deployment lets a run into a wrong state, you might want to reset the to the point that the bad deployment started to run. but usually it is not easy to find out all the impacted, and every reset point for each . in this case, auto-reset will automatically reset all the given a bad deployment identifier.\n\nlet\'s get familiar with some concepts. each deployment will have an identifier, we call it "binary checksum" as it is usually generated by the md5sum of a binary file. for a , each binary checksum will be associated with an auto-reset point, which contains a runid, an eventid, and the created_time that binary/deployment made the first for the .\n\nto find out which binary checksum of the bad deployment to reset, you should be aware of at least one running into a bad state. use the describe command with --reset_points_only option to show all the reset points:\n\ncadence wf desc -w --reset_points_only\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n| binary checksum | create time | runid | eventid |\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n| c84c5afa552613a83294793f4e664a7f | 2019-05-24 10:01:00.398455019 | 2dd29ab7-2dd8-4668-83e0-89cae261cfb1 | 4 |\n| aae748fdc557a3f873adbe1dd066713f | 2019-05-24 11:01:00.067691445 | d42d21b8-2adb-4313-b069-3837d44d6ce6 | 4 |\n...\n...\n\n\nthen use this command to tell cadence to auto-reset all impacted by the bad deployment. the command will store the bad binary checksum into info and trigger a process to reset all your .\n\ncadence --do domain update --add_bad_binary aae748fdc557a3f873adbe1dd066713f --reason "rollback bad deployment"\n\n\nas you add the bad binary checksum to your , cadence will not dispatch any to the bad binary. so make sure that you have rolled back to a good deployment(or roll out new bits with bug fixes). 
otherwise your can\'t make any progress after auto-reset.',charsets:{cjk:!0}},{title:"Introduction",frontmatter:{layout:"default",title:"Introduction",permalink:"/docs/go-client",readingShow:"top"},regularPath:"/docs/05-go-client/",relativePath:"docs/05-go-client/index.md",key:"v-740be4db",path:"/docs/go-client/",headers:[{level:2,title:"Overview",slug:"overview",normalizedTitle:"overview",charIndex:16},{level:2,title:"Links",slug:"links",normalizedTitle:"links",charIndex:712}],codeSwitcherOptions:{},headersStr:"Overview Links",content:"# Go client\n\n\n# Overview\n\nGo client attempts to follow Go language conventions. The conversion of a Go program to the fault-oblivious function is expected to be pretty mechanical.\n\nCadence requires determinism of the code. It supports deterministic execution of the multithreaded code and constructs like select that are non-deterministic by Go design. The Cadence solution is to provide corresponding constructs in the form of interfaces that have similar capability but support deterministic execution.\n\nFor example, instead of native Go channels, code must use the workflow.Channel interface. Instead of select, the workflow.Selector interface must be used.\n\nFor more information, see Creating Workflows.\n\n\n# Links\n\n * GitHub project: https://github.com/uber-go/cadence-client\n * Samples: https://github.com/uber-common/cadence-samples\n * GoDoc documentation: https://godoc.org/go.uber.org/cadence",normalizedContent:"# go client\n\n\n# overview\n\ngo client attempts to follow go language conventions. the conversion of a go program to the fault-oblivious function is expected to be pretty mechanical.\n\ncadence requires determinism of the code. it supports deterministic execution of the multithreaded code and constructs like select that are non-deterministic by go design. the cadence solution is to provide corresponding constructs in the form of interfaces that have similar capability but support deterministic execution.\n\nfor example, instead of native go channels, code must use the workflow.channel interface. 
instead of select, the workflow.selector interface must be used.\n\nfor more information, see creating workflows.\n\n\n# links\n\n * github project: https://github.com/uber-go/cadence-client\n * samples: https://github.com/uber-common/cadence-samples\n * godoc documentation: https://godoc.org/go.uber.org/cadence",charsets:{}},{title:"Cluster Configuration",frontmatter:{layout:"default",title:"Cluster Configuration",permalink:"/docs/operation-guide/setup",readingShow:"top"},regularPath:"/docs/07-operation-guide/01-setup.html",relativePath:"docs/07-operation-guide/01-setup.md",key:"v-6be5daf6",path:"/docs/operation-guide/setup/",headers:[{level:2,title:"Static configuration",slug:"static-configuration",normalizedTitle:"static configuration",charIndex:818},{level:3,title:"Configuration Directory and Files",slug:"configuration-directory-and-files",normalizedTitle:"configuration directory and files",charIndex:843},{level:3,title:"Understand the basic static configuration",slug:"understand-the-basic-static-configuration",normalizedTitle:"understand the basic static configuration",charIndex:2745},{level:3,title:"The full list of static configuration",slug:"the-full-list-of-static-configuration",normalizedTitle:"the full list of static configuration",charIndex:17568},{level:2,title:"Dynamic Configuration",slug:"dynamic-configuration",normalizedTitle:"dynamic configuration",charIndex:234},{level:3,title:"How to update Dynamic Configuration",slug:"how-to-update-dynamic-configuration",normalizedTitle:"how to update dynamic configuration",charIndex:21945},{level:2,title:"Other Advanced Features",slug:"other-advanced-features",normalizedTitle:"other advanced features",charIndex:25391},{level:2,title:"Deployment & Release",slug:"deployment-release",normalizedTitle:"deployment & release",charIndex:null},{level:2,title:"Stress/Bench Test a cluster",slug:"stress-bench-test-a-cluster",normalizedTitle:"stress/bench test a cluster",charIndex:26347}],codeSwitcherOptions:{},headersStr:"Static configuration Configuration Directory and Files Understand the basic static configuration The full list of static configuration Dynamic Configuration How to update Dynamic Configuration Other Advanced Features Deployment & Release Stress/Bench Test a cluster",content:'# Cluster Configuration\n\nThis section will help you understand what you need for setting up a Cadence cluster.\n\nYou should understand some basic static configuration of a Cadence cluster.\n\nThere are also many other configurations called "Dynamic Configuration" for fine tuning the cluster. The default values are good to go for small clusters.\n\nCadence’s minimum dependency is a database (Cassandra or SQL based like MySQL/Postgres). Cadence uses it for persistence. All instances of Cadence clusters are stateless.\n\nFor production you also need a metric server (Prometheus/Statsd/M3/etc).\n\nFor advanced features Cadence depends on others like Elasticsearch/OpenSearch+Kafka if you need the advanced visibility feature to search workflows. Cadence will depend on a blob store like S3 if you need to enable the archival feature.\n\n\n# Static configuration\n\n\n# Configuration Directory and Files\n\nThe default directory for configuration files is named config/. 
This directory contains various configuration files, but not all files will necessarily be used in every scenario.\n\n# Combining Configuration Files\n\n * Base Configuration: The base.yaml file is always loaded first, providing a common configuration that applies to all environments.\n * Runtime Environment File: The second file to be loaded is specific to the runtime environment. The environment name can be specified through the $CADENCE_ENVIRONMENT environment variable or passed as a command-line argument. If neither option is specified, development.yaml is used by default.\n * Availability Zone File: If an availability zone is specified (either through the $CADENCE_AVAILABILITY_ZONE environment variable or as a command-line argument), a file named after the zone will be merged. For example, if you specify "az1" as the zone, production_az1.yaml will be used as well.\n\nTo merge base.yaml, production.yaml, and production_az1.yaml files, you need to specify "production" as the runtime environment and "az1" as the zone.\n\n// base.yaml -> production.yaml -> production_az1.yaml = final configuration\n\n\n# Using Environment Variables\n\nConfiguration values can be provided using environment variables with a specific syntax. $VAR: This notation will be replaced with the value of the specified environment variable. If the environment variable is not set, the value will be left blank. You can declare a default value using the syntax {$VAR:default}. This means that if the environment variable VAR is not set, the default value will be used instead.\n\nNote: If you want to include the $ symbol literally in your configuration file (without interpreting it as an environment variable substitution), escape it by using $$. This will prevent it from being replaced by an environment variable value.\n\n\n# Understand the basic static configuration\n\nThere are quite many configs in Cadence. Here are the most basic configuration that you should understand.\n\nCONFIG NAME EXPLANATION RECOMMENDED VALUE\nnumHistoryShards This is the most important one in Cadence config.It will be 1K~16K depending on the size ranges of the cluster you\n a fixed number in the cluster forever. The only way to expect to run, and the instance size. Typically 2K for SQL\n change it is to migrate to another cluster. Refer to Migrate based persistence, and 8K for Cassandra based.\n cluster section.\n \n Some facts about it:\n 1. Each workflow will be mapped to a single shard. Within a\n shard, all the workflow creation/updates are serialized.\n 2. Each shard will be assigned to only one History node to\n own the shard, using a Consistent Hashing Ring. Each shard\n will consume a small amount of memory/CPU to do background\n processing. Therefore, a single History node cannot own too\n many shards. You may need to figure out a good number range\n based on your instance size(memory/CPU).\n 3. Also, you can’t add an infinite number of nodes to a\n cluster because this config is fixed. When the number of\n History nodes is closed or equal to numHistoryShards, there\n will be some History nodes that have no shards assigned to\n it. This will be wasting resources.\n \n Based on above, you don’t want to have a small number of\n shards which will limit the maximum size of your cluster.\n You also don’t want to have a too big number, which will\n require you to have a quite big initial size of the cluster.\n Also, typically a production cluster will start with a\n smaller number and then we add more nodes/hosts to it. 
But\n to keep high availability, it’s recommended to use at least\n 4 nodes for each service(Frontend/History/Matching) at the\n beginning.\nringpop This is the config to let all nodes of all services For dns mode: Recommended to put the DNS of Frontend service\n connected to each other. ALL the bootstrap nodes MUST be \n reachable by ringpop when a service is starting up, within a For hosts or hostfile mode: A list of Frontend service node\n MaxJoinDuration. defaultMaxJoinDuration is 2 minutes. addresses if using hosts mode. Make sure all the bootstrap\n nodes are reachable at startup.\n It’s not required that bootstrap nodes need to be\n Frontend/History or Matching. In fact, it can be running\n none of them as long as it runs Ringpop protocol.\npublicClient The Cadence Frontend service addresses that internal Cadence Recommended be DNS of Frontend service, so that requests\n system(like system workflows) need to talk to. will be distributed to all Frontend nodes.\n \n After connected, all nodes in Ringpop will form a ring with Using localhost+Port or local container IP address+Port will\n identifiers of what service they serve. Ideally Cadence not work if the IP/container is not running frontend\n should be able to get Frontend address from there. But\n Ringpop doesn’t expose this API yet.\nservices.NAME.rpc Configuration of how to listen to network ports and serve Name: Use as recommended in development.yaml. bindOnIP : an\n traffic. IP address that the container will serve the traffic with\n \n bindOnLocalHost:true will bind on 127.0.0.1. It’s mostly for\n local development. In production usually you have to specify\n the IP that containers will use by using bindOnIP\n \n NAME is the matter for the “--services” option in the server\n startup command.\nservices.NAME.pprof Golang profiling service , will bind on the same IP as RPC a port that you want to serve pprof request\nservices.Name.metrics See Metrics&Logging section cc\nclusterMetadata Cadence cluster configuration. As explanation.\n \n enableGlobalDomain:true will enable Cadence Cross datacenter\n replication(aka XDC) feature.\n \n failoverVersionIncrement: This decides the maximum clusters\n that you will have replicated to each other at the same\n time. For example 10 is sufficient for most cases.\n \n masterClusterName: a master cluster must be one of the\n enabled clusters, usually the very first cluster to start.\n It is only meaningful for internal purposes.\n \n currentClusterName: current cluster name using this config\n file.\n \n clusterInformation is a map from clusterName to the cluster\n configure\n \n initialFailoverVersion: each cluster must use a different\n value from 0 to failoverVersionIncrement-1.\n \n rpcName: must be “cadence-frontend”. Can be improved in this\n issue.\n \n rpcAddress: the address to talk to the Frontend of the\n cluster for inter-cluster replication.\n \n Note that even if you don’t need XDC replication right now,\n if you want to migrate data stores in the future, you should\n enable xdc from every beginning. You just need to use the\n same name of cluster for both masterClusterName and\n currentClusterName.\n \n Go to cross dc replication for how to configure replication\n in production\ndcRedirectionPolicy For allowing forwarding frontend requests from passive “selected-apis-forwarding”\n cluster to active clusters.\narchival This is for archival history feature, skip if you don’t need N/A\n it. 
Go to workflow archival for how to configure archival in\n production\nblobstore This is also for archival history feature Default cadence N/A\n server is using file based blob store implementation.\ndomainDefaults default config for each domain. Right now only being used N/A\n for Archival feature.\ndynamicconfig (previously known as dynamicConfigClient) Dynamic config is a config manager that enables you to Same as the sample development config\n change configs without restarting servers. It’s a good way\n for Cadence to keep high availability and make things easy\n to configure.\n \n By default Cadence server uses filebased client which allows\n you to override default configs using a YAML file. However,\n this approach can be cumbersome in production environment\n because it\'s the operator\'s responsibility to sync the YAML\n files across Cadence nodes.\n \n Therefore, we provide another option, configstore client,\n that stores config changes in the persistent data store for\n Cadence (e.g., Cassandra database) rather than the YAML\n file. This approach shifts the responsibility of syncing\n config changes from the operator to Cadence service. You can\n use Cadence CLI commands to list/get/update/restore config\n changes.\n \n You can also implement the dynamic config interface if you\n have a better way to manage configs.\npersistence Configuration for data store / persistence layer. As explanation\n \n Values of DefaultStore VisibilityStore\n AdvancedVisibilityStore should be keys of map DataStores.\n \n DefaultStore is for core Cadence functionality.\n \n VisibilityStore is for basic visibility feature\n \n AdvancedVisibilityStore is for advanced visibility\n \n Go to advanced visibility for detailed configuration of\n advanced visibility. See persistence documentation about\n using different database for Cadence\n\n\n# The full list of static configuration\n\nStarting from v0.21.0, all the static configuration are defined by GoDocs in details.\n\nVERSION GODOCS LINK GITHUB LINK\nv0.21.0 Configuration Docs Configuration\n...other higher versions ...Replace the version in the URL of v0.21.0 ...Replace the version in the URL of v0.21.0\n\nFor earlier versions, you can find all the configurations similarly:\n\nVERSION GODOCS LINK GITHUB LINK\nv0.20.0 Configuration Docs Configuration\nv0.19.2 Configuration Docs Configuration\nv0.18.2 Configuration Docs Configuration\nv0.17.0 Configuration Docs Configuration\n...other lower versions ...Replace the version in the URL of v0.20.0 ...Replace the version in the URL of v0.20.0\n\n\n# Dynamic Configuration\n\nDynamic configuration is for fine tuning a Cadence cluster.\n\nThere are a lot more dynamic configurations than static configurations. Most of the default values are good for small clusters. 
As a cluster is scaled up, you may look for tuning it for the optimal performance.\n\nStarting from v0.21.0 with this change, all the dynamic configuration are well defined by GoDocs.\n\nVERSION GODOCS LINK GITHUB LINK\nv0.21.0 Dynamic Configuration Docs Dynamic Configuration\n...other higher versions ...Replace the version in the URL of v0.21.0 ...Replace the version in the URL of v0.21.0\n\nFor earlier versions, you can find all the configurations similarly:\n\nVERSION GODOCS LINK GITHUB LINK\nv0.20.0 Dynamic Configuration Docs Dynamic Configuration\nv0.19.2 Dynamic Configuration Docs Dynamic Configuration\nv0.18.2 Dynamic Configuration Docs Dynamic Configuration\nv0.17.0 Dynamic Configuration Docs Dynamic Configuration\n...other lower versions ...Replace the version in the URL of v0.20.0 ...Replace the version in the URL of v0.20.0\n\nHowever, the GoDocs in earlier versions don\'t contain detailed information. You need to look it up the newer version of GoDocs.\nFor example, search for "EnableGlobalDomain" in Dynamic Configuration Comments in v0.21.0 or Docs of v0.21.0, as the usage of DynamicConfiguration never changes.\n\n * KeyName is the key that you will use in the dynamicconfig yaml content\n * Default value is the default value\n * Value type indicates the type that you should change the yaml value of:\n * Int should be integer like 123\n * Float should be number like 123.4\n * Duration should be Golang duration like: 10s, 2m, 5h for 10 seconds, 2 minutes and 5 hours.\n * Bool should be true or false\n * Map should be map of yaml\n * Allowed filters indicates what kinds of filters you can set as constraints with the dynamic configuration.\n * DomainName can be used with domainName\n * N/A means no filters can be set. The config will be global.\n\nFor example, if you want to change the ratelimiting for List API, below is the config:\n\n// FrontendVisibilityListMaxQPS is max qps frontend can list open/close workflows\n// KeyName: frontend.visibilityListMaxQPS\n// Value type: Int\n// Default value: 10\n// Allowed filters: DomainName\nFrontendVisibilityListMaxQPS\n\n\nThen you can add the config like:\n\nfrontend.visibilityListMaxQPS:\n - value: 1000\n constraints:\n domainName: "domainA"\n - value: 2000\n constraints:\n domainName: "domainB" \n\n\nYou will expect to see domainA will be able to perform 1K List operation per second, while domainB can perform 2K per second.\n\nNOTE 1: the size related configuration numbers are based on byte.\n\nNOTE 2: for .persistenceMaxQPS versus .persistenceGlobalMaxQPS --- persistenceMaxQPS is local for single node while persistenceGlobalMaxQPS is global for all node. persistenceGlobalMaxQPS is preferred if set as greater than zero. But by default it is zero so persistenceMaxQPS is being used.\n\n\n# How to update Dynamic Configuration\n\n# File-based client\n\nBy default, Cadence uses file-based client to manage dynamic configurations. Following are the approaches to changing dynamic configs using a yaml file.\n\n * Local docker-compose by mounting volume: 1. Change the dynamic configs in cadence/config/dynamicconfig/development.yaml. 2. 
## How to update Dynamic Configuration

### File-based client

By default, Cadence uses the file-based client to manage dynamic configurations. The following are the approaches to changing dynamic configs using a YAML file.

* Local docker-compose by mounting a volume: 1. Change the dynamic configs in cadence/config/dynamicconfig/development.yaml. 2. Update the cadence section in the docker-compose file and mount the dynamicconfig folder to the host machine like the following:

```yaml
cadence:
  image: ubercadence/server:master-auto-setup
  ports:
    ...(don't change anything here)
  environment:
    ...(don't change anything here)
    - "DYNAMIC_CONFIG_FILE_PATH=/etc/custom-dynamicconfig/development.yaml"
  volumes:
    - "/Users/<username>/cadence/config/dynamicconfig:/etc/custom-dynamicconfig"
```

* Local docker-compose by logging into the container: run docker exec -it docker_cadence_1 /bin/bash to log in to your container. Then vi config/dynamicconfig/development.yaml to make any change. After you have changed the config, use docker restart docker_cadence_1 to restart the Cadence instance. Note that you can also use this approach to change static config, but it must be changed through config/config_template.yaml instead of config/docker.yaml, because config/docker.yaml is generated on startup.

* In a production cluster: follow this example of a Helm Chart to deploy Cadence, update the dynamic config here and restart the cluster.

* DEBUG: how to make sure your updates to dynamicconfig are loaded? For example, if you added the following to development.yaml:

```yaml
frontend.visibilityListMaxQPS:
  - value: 10000
```

After restarting the Cadence instances, execute a command like cadence --domain <domain> workflow list to let Cadence load the config (it is loaded lazily on first use).

Then you should see a log like the one below:

```
cadence_1 | {"level":"info","ts":"2021-05-07T18:43:07.869Z","msg":"First loading dynamic config","service":"cadence-frontend","key":"frontend.visibilityListMaxQPS,domainName:sample,clusterName:primary","value":"10000","default-value":"10","logging-call-at":"config.go:93"}
```

### Config store client

You can set the dynamicconfig client in the static configuration to configstore in order to store config changes in a database, as shown below.

```yaml
dynamicconfig:
  client: configstore
  configstore:
    pollInterval: "10s"
    updateRetryAttempts: 2
    FetchTimeout: "2s"
    UpdateTimeout: "2s"
```

If you are still using the deprecated config dynamicConfigClient like below, you need to replace it with the new dynamicconfig as shown above to use the configstore client.

```yaml
dynamicConfigClient:
  filepath: "/etc/cadence/config/dynamicconfig/config.yaml"
  pollInterval: "10s"
```

After changing the client to configstore and restarting Cadence, you can manage dynamic configs using cadence admin config CLI commands. You may need to set your custom dynamic configs again, as the previous configs are not automatically migrated from the YAML file to the database.

* cadence admin config listdc lists all dynamic config overrides
* cadence admin config getdc --dynamic_config_name <name> gets the value of a specific dynamic config
* cadence admin config updc --dynamic_config_name <name> --dynamic_config_value '{"Value": <value>}' updates the value of a specific dynamic config
* cadence admin config resdc --dynamic_config_name <name> restores a specific dynamic config to its default value

## Other Advanced Features

* Go to advanced visibility for how to configure advanced visibility in production.
* Go to workflow archival for how to configure archival in production.
* Go to cross dc replication for how to configure replication in production.

## Deployment & Release

Kubernetes is the most popular way to deploy a Cadence cluster.
The easiest way is to use the Cadence Helm Charts, which are maintained by a community project.

If you are looking to deploy Cadence using other technologies, then it's recommended to use the Cadence docker images. You can use the official ones, or you may customize them based on what you need. See the Cadence docker package for how to run the images.

It's always recommended to use the latest release. See the Cadence release pages.

Please subscribe to releases of the project: go to https://github.com/uber/cadence -> click the top-right "Watch" button -> Custom -> "Release".

Also see how to upgrade a Cadence cluster.

## Stress/Bench Test a cluster

Whenever you change some setup, it's recommended to run a bench test on your cluster, following this package, to see the maximum throughput that it can take.
# Cluster Maintenance

This section covers how to use and maintain a Cadence cluster, for both clients and server operators.

## Scale up & down Cluster

* When CPU/memory is becoming a bottleneck on Cadence instances, you may scale up or add more instances.
* Watch Cadence metrics:
  * See if the external traffic to the frontend is normal.
  * If the slowness is due to too many tasks on a tasklist, you may need to scale up the tasklist.
  * If persistence latency is getting too high, try scaling up your DB instance.
* Never change the numOfShards of a cluster. If you need to because the current one is too small, follow the instructions to migrate your cluster to a new one.

## Scale up a tasklist using Scalable tasklist feature

By default a tasklist is not scalable enough to support hundreds of tasks per second. That's mainly because each tasklist is assigned to a Matching service node, and dispatching tasks in a tasklist is sequential.

In the past, Cadence recommended using multiple tasklists to start workflows/activities: you made a list of tasklists and randomly picked one when starting workflows, and then, when starting workers, let them listen to all the tasklists.

Nowadays, Cadence has a feature called "Scalable tasklist". It divides a tasklist into multiple logical partitions, which can distribute tasks to multiple Matching service nodes.
By default this feature is not enabled, because there is some performance penalty on the server side, and it's not common that a tasklist needs to support more than hundreds of tasks per second.

You must make a dynamic configuration change in the Cadence server to use this feature:

matching.numTasklistWritePartitions

and

matching.numTasklistReadPartitions

matching.numTasklistWritePartitions is the number of partitions when a Cadence server sends a task to the tasklist. matching.numTasklistReadPartitions is the number of partitions when your worker accepts a task from the tasklist.

There are a few things to know when using this feature:

* Always make sure matching.numTasklistWritePartitions <= matching.numTasklistReadPartitions. Otherwise some tasks may be sent to a tasklist partition from which no poller (worker) will be able to pick them up.
* Because of the above, when scaling down the number of partitions, you must decrease WritePartitions first, wait for a certain amount of time to ensure that tasks are drained, and then decrease ReadPartitions.
* Both the domain name and the taskListName should be specified in the dynamic config. An example of using this feature is below; see more details about the dynamic config format in the file-based dynamic config docs.

```yaml
matching.numTasklistWritePartitions:
  - value: 10
    constraints:
      domainName: "samples-domain"
      taskListName: "aScalableTasklistName"
matching.numTasklistReadPartitions:
  - value: 10
    constraints:
      domainName: "samples-domain"
      taskListName: "aScalableTasklistName"
```

NOTE: the value must be an integer without double quotes.

## Restarting Cluster

Make sure to do a rolling restart to keep high availability.

## Optimize SQL Persistence

See the sketch after this list for how these settings appear in the static config.

* Connections are shared within a Cadence server host.
* For each host, the max number of connections it will consume is maxConns of the defaultStore + maxConns of the visibilityStore.
* The total max number of connections your Cadence cluster will consume is the sum over all hosts (from Frontend/Matching/History/SysWorker services).
* Frontend and History nodes need both default and visibility stores, but Matching and SysWorkers only need default stores; they don't need to talk to visibility DBs.
* For default stores, the History service will take the most connections, then Frontend/Matching. SysWorker will use much less than the others.
* The default store is for Cadence's core data model, which requires strong consistency, so it cannot use replicas. The visibility store is not for core data models; it's recommended to use a separate DB for the visibility store if using DB-based visibility.
* Visibility stores usually take many fewer connections, as the workload is much more lightweight (less QPS and no explicit transactions).
* Visibility stores require only eventual consistency for reads, so they can use replicas.
* maxIdleConns should be less than maxConns, so that the connections can be distributed better across hosts.
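As a hedged illustration of the connection settings discussed above, here is a sketch of an SQL-based datastore entry; the plugin name, address, credentials, and pool sizes are placeholder assumptions to adapt to your own sizing:

```yaml
# Sketch only: plugin, address, credentials and pool sizes are placeholders.
persistence:
  defaultStore: mysql-default
  datastores:
    mysql-default:
      sql:
        pluginName: "mysql"
        databaseName: "cadence"
        connectAddr: "127.0.0.1:3306"
        connectProtocol: "tcp"
        user: "cadence"
        password: "cadence"
        maxConns: 20        # counts toward this host's total together with the visibility store's maxConns
        maxIdleConns: 10    # keep below maxConns so connections spread across DB hosts
        maxConnLifetime: "1h"
```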
## Upgrading Server

To get notified about releases, please subscribe to releases of the project: go to https://github.com/uber/cadence -> click the top-right "Watch" button -> Custom -> "Release".

It's recommended to upgrade one minor version at a time. E.g., if you are at 0.10, you should upgrade to 0.11 and stabilize it by running some normal workload to make sure that the upgraded server is happy with the schema changes. After ~1 hour, upgrade to 0.12, then 0.13, etc.

The reason is that for each minor upgrade, you should be able to follow the release notes about what you should do for upgrading. The release notes may require you to run some commands. This will also help to narrow down the cause when something goes wrong.

### How to upgrade:

Things that you may need to do when upgrading a minor version (patch version upgrades should not need them):

* Schema (DB/ElasticSearch) changes
* Configuration format/layout changes
* Data migration -- this is very rare. For example, upgrading from 0.15.x to 0.16.0 requires a data migration.

You should read through the release instructions for each minor release to understand what needs to be done.

* Schema changes need to be applied before upgrading the server:
  * Upgrade the MySQL/Postgres schema if applicable
  * Upgrade the Cassandra schema if applicable
  * Upgrade the ElasticSearch schema if applicable
* Usually a schema change is backward compatible, so rolling back usually is not a problem. It also means that Cadence allows running a mixed version of schemas, as long as they are all greater than or equal to the required version of the server. Other requirements for upgrading should be found in the release notes; they may contain information about config changes, or special rollback instructions if a normal rollback may cause problems.
* Similarly, data migration should be done before upgrading the server binary.

NOTE: Do not use "auto-setup" images to upgrade your schema. They are mainly for development, or at most for initial setup only.

### How to apply DB schema changes

For how to apply database schema changes, refer to these docs: SQL tool README, Cassandra tool README.

The tool makes use of a table called "schema_versions" to keep track of the upgrading history. However, there is no transaction guarantee for cross-table operations, so in case of an error you may need to fix or apply schema changes manually. Also, the schema tool by default will upgrade the schema to the latest version, so no manual step is required (you can also tell it to upgrade to a specific version, like 0.14).

Database schema changes are versioned in these folders: Versioned Schema Changes for the default store, and Versioned Schema Changes for the visibility store if you use a database for basic visibility instead of ElasticSearch.

If you use homebrew, the schema files are located at /usr/local/etc/cadence/schema/.

Alternatively, you can checkout the repo and the release tag, e.g. git checkout v0.21.0, and then the schema files are at ./schema/.
# Cluster Troubleshooting

This section covers some common operational issues, as a runbook. Feel free to add more, raise issues in the cadence-docs project to ask for more, or talk to us in the Slack support channel!

We will keep adding more stuff. Any contribution is very welcome.

## Errors

* Persistence Max QPS Reached for List Operations
  * Check metrics to see how many List operations are performed per second on the domain. Alternatively, you can enable debug log level to see more details of how a List request is ratelimited, if it's a staging/QA cluster.
  * Raise the ratelimit for the domain if you believe the default ratelimit is too low (see the example override after this list).
* Failed to lock shard. Previous range ID: 132; new range ID: 133 and Failed to update shard. Previous range ID: 210; new range ID: 212
  * When this keeps happening, it's very likely a critical configuration error: either two clusters are using the same database, or two clusters are using the same ringpop (bootstrap hosts).
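For the ratelimiting error above, the usual fix is a per-domain dynamic config override. A minimal sketch, assuming the List limit is the one being hit; the key, value, and domain name are illustrative:

```yaml
# Illustrative key, value and domain name; pick the key that matches the throttled API.
frontend.visibilityListMaxQPS:
  - value: 100
    constraints:
      domainName: "your-domain"
```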
## API high latency, timeout, task dispatching slowness, or too many operations onto the DB and timeouts

* If it happens after you attempted to truncate tables in order to reuse the same database/keyspace for a new cluster, it's possible that the data was not deleted completely. You should make sure to shut down Cadence when truncating, and make sure the database is cleaned. Alternatively, using a different keyspace/database is a safer way.

* Timeout pushing a task to the matching engine, e.g. "Fail to process task","service":"cadence-history","shard-id":431,"address":"172.31.48.64:7934","component":"transfer-queue-processor","cluster-name":"active","shard-id":431,"queue-task-id":590357768,"queue-task-visibility-timestamp":1637356594382077880,"xdc-failover-version":-24,"queue-task-type":0,"wf-domain-id":"f4d6824f-9d24-4a82-81e0-e0e080be4c21","wf-id":"55d64d58-e398-4bf5-88bc-a4696a2ba87f:63ed7cda-afcf-41cd-9d5a-ee5e1b0f2844","wf-run-id":"53b52ee0-3218-418e-a9bf-7768e671f9c1","error":"code:deadline-exceeded message:timeout","lifecycle":"ProcessingFailed","logging-call-at":"task.go:331"
  * If this happens after traffic increased for a certain domain, it's likely that a tasklist is overloaded; consider scaling up the tasklist.
  * If the request volume increased across all domains along with the traffic, consider scaling up the cluster.
# Migrate Cadence cluster

There could be several reasons that you need to migrate Cadence clusters:

* Migrate to different storage, for example from Postgres/MySQL to Cassandra, or using multiple SQL databases as a sharded SQL cluster for Cadence
* Split traffic
* Datacenter migration
* Scale up -- to change numOfHistoryShards

Below are two different approaches for migrating a cluster.

## Migrate with naive approach

1. Set up a new Cadence cluster
2. Connect client workers to both old and new clusters
3. Change workflow code to start new workflows only in the new cluster
4. Wait for all old workflows to finish in the old cluster
5. Shutdown the old Cadence cluster and stop the client workers from connecting to it

NOTE 1: With this approach, workflow history/visibility will not be migrated to the new cluster.

NOTE 2: This is the only way to migrate a local domain, because a local domain cannot be converted to a global domain, even after a cluster enables the XDC feature.

NOTE 3: Starting from version 0.22.0, global domains are preferred/recommended. Please ensure you create and use global domains only. If you are using local domains, an easy way is to create a global domain and migrate to the new global domain using the above steps.

## Migrate with Global Domain Replication feature

NOTE 1: If a domain is NOT a global domain, you cannot use the XDC feature to migrate; the only way is to migrate with the naive approach.
NOTE 2: Only migrating to the same numHistoryShards is allowed.

### Step 0 - Verify clusters' setup is correct

* Make sure the new cluster doesn't already have the domain names that need to be migrated (otherwise domain replication would fail).

To get all the domains from the current cluster:

```
cadence --address <currentClusterAddress> admin domain list
```

Then, for each global domain:

```
cadence --address <newClusterAddress> --do <domain> domain describe
```

to make sure it doesn't exist in the new cluster.

* The target replication cluster should have numHistoryShards >= the source cluster.

* The target cluster should have the same search attributes enabled in dynamic configuration and in ElasticSearch.
  * Check the dynamic configuration to see if both have the same list of frontend.validSearchAttributes. If any is missing in the new cluster, update the dynamic config for the new cluster (see the sketch after this step).
  * Check the results of the command below to make sure that the ElasticSearch fields match the dynamic configuration:

```
curl -u <username>:<password> -X GET https://<es-host>/cadence-visibility-index -H 'Content-Type: application/json' | jq .
```

If any search attribute is missing, add the missing search attributes to the target cluster:

```
cadence --address <newClusterAddress> adm cluster add-search-attr --search_attr_key <key> --search_attr_type <type>
```
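If you need to add missing search attributes to the new cluster's dynamic configuration, the value type is a Map. A hedged sketch with illustrative attribute names; the integer type codes should be copied from the old cluster's config rather than from this example:

```yaml
# Illustrative attribute names; copy the exact map (names and integer type codes)
# from the old cluster's dynamic config instead of inventing new codes.
frontend.validSearchAttributes:
  - value:
      CustomKeywordField: 1 # assumed code for Keyword in this sketch
      CustomIntField: 2     # assumed code for Int in this sketch
```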
### Step 1 - Connect the two clusters using global domain (replication) feature

Include the cluster information for both the old and new clusters in the clusterMetadata config of both clusters. Example config for the current (old) cluster:

```yaml
dcRedirectionPolicy:
  policy: "all-domain-apis-forwarding" # use selected-apis-forwarding if using older versions that don't support this policy

clusterMetadata:
  enableGlobalDomain: true
  failoverVersionIncrement: 10
  masterClusterName: "<oldClusterName>"
  currentClusterName: "<oldClusterName>"
  clusterInformation:
    <oldClusterName>:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "cadence-frontend"
      rpcAddress: "<oldClusterAddress>"
    <newClusterName>:
      enabled: true
      initialFailoverVersion: 0
      rpcName: "cadence-frontend"
      rpcAddress: "<newClusterAddress>"
```

And for the new cluster:

```yaml
dcRedirectionPolicy:
  policy: "all-domain-apis-forwarding"

clusterMetadata:
  enableGlobalDomain: true
  failoverVersionIncrement: 10
  masterClusterName: "<oldClusterName>"
  currentClusterName: "<newClusterName>"
  clusterInformation:
    <oldClusterName>:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "cadence-frontend"
      rpcAddress: "<oldClusterAddress>"
    <newClusterName>:
      enabled: true
      initialFailoverVersion: 0
      rpcName: "cadence-frontend"
      rpcAddress: "<newClusterAddress>"
```

Deploy the config. In older versions (<= v0.22), only selected-apis-forwarding is supported. This would require you to deploy a different set of workflow/activity workers connected to the new Cadence cluster during migration, if high availability/seamless migration is required, because selected-apis-forwarding only forwards the non-worker APIs.

With the all-domain-apis-forwarding policy, all worker + non-worker APIs are forwarded by the Cadence cluster. You don't need to make any deployment change to your workflow/activity workers during migration. Once migrated, let all workers connect to the new Cadence cluster before removing/shutting down the old cluster.

Therefore, it's recommended to upgrade your Cadence cluster to a higher version that supports the all-domain-apis-forwarding policy. The steps below assume you are using this policy.

### Step 2 - Test Replicating one domain

First of all, try replicating a single domain to make sure everything works. This uses domain update to failover; you can also use the managed failover feature. You may use some testing domains for this, like cadence-canary.

* 2.1 Assuming the domain only contains the current cluster in its cluster list, add the new cluster to the domain:

```
cadence --address <oldClusterAddress> --do <domain> domain update --clusters <oldClusterName> <newClusterName>
```

Run the command below to refresh the domain after adding a new cluster to the cluster list; we need to update the active_cluster to the same value that it already has:

```
cadence --address <oldClusterAddress> --do <domain> domain update --active_cluster <oldClusterName>
```

* 2.2 Failover the domain to be active in the new cluster:

```
cadence --address <oldClusterAddress> --do workflow-prototype domain update --active_cluster <newClusterName>
```

Use the domain describe command to verify the entire domain is replicated to the new cluster:

```
cadence --address <newClusterAddress> --do <domain> domain describe
```

Find an open workflowID that we want to replicate (you can get it from the UI). Use this command to describe it, to make sure it's open and running:

```
cadence --address <oldClusterAddress> --do <domain> workflow describe --workflow_id <workflowID>
```

Run a signal command against any workflow and check that it was replicated to the new cluster. Example:

```
cadence --address <oldClusterAddress> --do <domain> workflow signal --workflow_id <workflowID> --name <signalName>
```

This command sends a noop signal to the workflow to trigger a decision, which will trigger history replication if needed.

Verify the workflow is replicated in the new cluster:

```
cadence --address <newClusterAddress> --st --do <domain> workflow describe --workflow_id <workflowID>
```

Also compare the history between the two clusters:

```
cadence --address <oldClusterAddress> --do <domain> workflow show --workflow_id <workflowID>
```

```
cadence --address <newClusterAddress> --do <domain> workflow show --workflow_id <workflowID>
```

### Step 3 - Start to replicate all domains

You can repeat Step 2 for all the domains, or you can use the managed failover feature to failover all the domains in the cluster with a single command. See more details in the global domain documentation.

Because replication cannot be triggered without a decision, again, the best way is to send a garbage signal to all the workflows.

If advanced visibility is enabled, then use the batch signal command to start a batch job that triggers replication for all open workflows:

```
cadence --address <oldClusterAddress> --do <domain> workflow batch start --batch_type signal --query "CloseTime = missing" --signal_name <signalName> --reason <reason> --input <input> --yes
```

Watch metrics & dashboards while this is happening. Also observe the signal batch job to make sure it completes.

### Step 4 - Complete the migration

After a few days, make sure everything is stable on the new cluster. The old cluster should only be forwarding requests to the new cluster.

A few things need to be done in order to shutdown the old cluster:

* Migrate all applications to connect to the frontend of the new cluster instead of relying on the forwarding
* Watch the metric dashboards to make sure no traffic is hitting the old cluster
* Delete the old cluster from each domain's cluster list. This needs to be done for every domain:

```
cadence --address <newClusterAddress> --do <domain> domain update --clusters <newClusterName>
```

* Delete the old cluster from the configuration of the new cluster.

Once the above is done, you can shutdown the old cluster safely.
this would require you to deploy a different set of workflow/activity connected to the new cadence cluster during migration, if high availability/seamless migration is required. because selected-apis-forwarding only forwarding the non-worker apis.\n\nwith all-domain-apis-forwarding policy, all worker + non-worker apis are forwarded by cadence cluster. you don\'t need to make any deployment change to your workflow/activity workers during migration. once migration, let all workers connect to the new cadence cluster before removing/shutdown the old cluster.\n\ntherefore, it\'s recommended to upgrade your cadence cluster to a higher version with all-domain-apis-forwarding policy supported. the below steps assuming you are using this policy.\n\n\n# step 2 - test replicating one domain\n\nfirst of all, try replicating a single domain to make sure everything work. here uses domain update to failover, you can also use managed failover feature to failover. you may use some testing domains for this like cadence-canary.\n\n * 2.1 assuming the domain only contain currentcluster in the cluster list, let\'s add the new cluster to the domain.\n\ncadence --address --do domain update --clusters \n\n\nrun the command below to refresh the domain after adding a new cluster to the cluster list; we need to update the active_cluster to the same value that it appears to be.\n\ncadence --address --do domain update --active_cluster \n\n\n * 2.2 failover the domain to be active in new cluster\n\ncadence --address --do workflow-prototype domain update --active_cluster \n\n\nuse the domain describe command to verify the entire domain is replicated to the new cluster.\n\ncadence --address --do domain describe\n\n\nfind an open workflowid that we want to replicate (you can get it from the ui). use this command to describe it to make sure it’s open and running:\n\ncadence --address --do workflow describe --workflow_id \n\n\nrun a signal command against any workflow and check that it was replicated to the new cluster. example:\n\ncadence --address --do workflow signal --workflow_id --name \n\n\nthis command will send a noop signal to workflows to trigger a decision, which will trigger history replication if needed.\n\nverify the workflow is replicated in the new cluster\n\ncadence --address --st --do workflow describe --workflow_id \n\n\nalso compare the history between the two clusters:\n\ncadence --address --do workflow show --workflow_id \n\n\ncadence --address --do workflow show --workflow_id \n\n\n\n# step 3 - start to replicate all domains\n\nyou can repeat step 2 for all the domains. or you can use the managed failover feature to failover all the domains in the cluster with a single command. see more details in the global domain documentation.\n\nbecause replication cannot be triggered without a decision. again best way is to send a garbage signal to all the workflows.\n\nif advanced visibility is enabled, then use batch signal command to start a batch job to trigger replication for all open workflows:\n\ncadence --address --do workflow batch start --batch_type signal --query “closetime = missing” --signal_name --reason --input --yes\n\n\nwatch metrics & dashboard while this is happening. also observe the signal batch job to make sure it\'s completed.\n\n\n# step 4 - complete the migration\n\nafter a few days, make sure everything is stable on the new cluster. 
',charsets:{cjk:!0}},{title:"Cluster Monitoring",frontmatter:{layout:"default",title:"Cluster Monitoring",permalink:"/docs/operation-guide/monitor",readingShow:"top"},regularPath:"/docs/07-operation-guide/03-monitoring.html",relativePath:"docs/07-operation-guide/03-monitoring.md",key:"v-1a836dbc",path:"/docs/operation-guide/monitor/",headers:[{level:2,title:"Instructions",slug:"instructions",normalizedTitle:"instructions",charIndex:25},{level:2,title:"DataDog dashboard templates",slug:"datadog-dashboard-templates",normalizedTitle:"datadog dashboard templates",charIndex:2407},{level:2,title:"Grafana+Prometheus dashboard templates",slug:"grafana-prometheus-dashboard-templates",normalizedTitle:"grafana+prometheus dashboard templates",charIndex:3295},{level:2,title:"Periodic tests(Canary) for health check",slug:"periodic-tests-canary-for-health-check",normalizedTitle:"periodic tests(canary) for health check",charIndex:3981},{level:2,title:"Cadence Frontend Monitoring",slug:"cadence-frontend-monitoring",normalizedTitle:"cadence frontend monitoring",charIndex:4197},{level:3,title:"Service Availability(server metrics)",slug:"service-availability-server-metrics",normalizedTitle:"service availability(server metrics)",charIndex:4399},{level:3,title:"StartWorkflow Per Second",slug:"startworkflow-per-second",normalizedTitle:"startworkflow per second",charIndex:4917},{level:3,title:"Activities Started Per Second",slug:"activities-started-per-second",normalizedTitle:"activities started per second",charIndex:5291},{level:3,title:"Decisions Started Per Second",slug:"decisions-started-per-second",normalizedTitle:"decisions started per second",charIndex:5622},{level:3,title:"Periodical Test Suite Success(aka Canary)",slug:"periodical-test-suite-success-aka-canary",normalizedTitle:"periodical test suite success(aka canary)",charIndex:5960},{level:3,title:"Frontend all API per second",slug:"frontend-all-api-per-second",normalizedTitle:"frontend all api per second",charIndex:6306},{level:3,title:"Frontend API per second (breakdown per operation)",slug:"frontend-api-per-second-breakdown-per-operation",normalizedTitle:"frontend api per second (breakdown per operation)",charIndex:6553},{level:3,title:"Frontend API errors per second(breakdown per operation)",slug:"frontend-api-errors-per-second-breakdown-per-operation",normalizedTitle:"frontend api errors per second(breakdown per operation)",charIndex:6833},{level:3,title:"Frontend Regular API Latency",slug:"frontend-regular-api-latency",normalizedTitle:"frontend regular api latency",charIndex:9890},{level:3,title:"Frontend ListWorkflow API Latency",slug:"frontend-listworkflow-api-latency",normalizedTitle:"frontend listworkflow api latency",charIndex:10636},{level:3,title:"Frontend Long Poll API Latency",slug:"frontend-long-poll-api-latency",normalizedTitle:"frontend long poll api latency",charIndex:11243},{level:3,title:"Frontend Get History/Query Workflow API 
Latency",slug:"frontend-get-history-query-workflow-api-latency",normalizedTitle:"frontend get history/query workflow api latency",charIndex:11923},{level:3,title:"Frontend WorkflowClient API per seconds by domain",slug:"frontend-workflowclient-api-per-seconds-by-domain",normalizedTitle:"frontend workflowclient api per seconds by domain",charIndex:12700},{level:2,title:"Cadence Application Monitoring",slug:"cadence-application-monitoring",normalizedTitle:"cadence application monitoring",charIndex:13351},{level:3,title:"Workflow Start and Successful completion",slug:"workflow-start-and-successful-completion",normalizedTitle:"workflow start and successful completion",charIndex:13560},{level:3,title:"Workflow Failure",slug:"workflow-failure",normalizedTitle:"workflow failure",charIndex:14392},{level:3,title:"Decision Poll Counters",slug:"decision-poll-counters",normalizedTitle:"decision poll counters",charIndex:15449},{level:3,title:"DecisionTasks Scheduled per second",slug:"decisiontasks-scheduled-per-second",normalizedTitle:"decisiontasks scheduled per second",charIndex:16462},{level:3,title:"Decision Scheduled To Start Latency",slug:"decision-scheduled-to-start-latency",normalizedTitle:"decision scheduled to start latency",charIndex:16798},{level:3,title:"Decision Execution Failure",slug:"decision-execution-failure",normalizedTitle:"decision execution failure",charIndex:17930},{level:3,title:"Decision Execution Timeout",slug:"decision-execution-timeout",normalizedTitle:"decision execution timeout",charIndex:18452},{level:3,title:"Workflow End to End Latency",slug:"workflow-end-to-end-latency",normalizedTitle:"workflow end to end latency",charIndex:18962},{level:3,title:"Workflow Panic and NonDeterministicError",slug:"workflow-panic-and-nondeterministicerror",normalizedTitle:"workflow panic and nondeterministicerror",charIndex:19678},{level:3,title:"Workflow Sticky Cache Hit Rate and Miss Count",slug:"workflow-sticky-cache-hit-rate-and-miss-count",normalizedTitle:"workflow sticky cache hit rate and miss count",charIndex:20254},{level:3,title:"Activity Task Operations",slug:"activity-task-operations",normalizedTitle:"activity task operations",charIndex:21458},{level:3,title:"Local Activity Task Operations",slug:"local-activity-task-operations",normalizedTitle:"local activity task operations",charIndex:21873},{level:3,title:"Activity Execution Latency",slug:"activity-execution-latency",normalizedTitle:"activity execution latency",charIndex:22097},{level:3,title:"Activity Poll Counters",slug:"activity-poll-counters",normalizedTitle:"activity poll counters",charIndex:22715},{level:3,title:"ActivityTasks Scheduled per second",slug:"activitytasks-scheduled-per-second",normalizedTitle:"activitytasks scheduled per second",charIndex:23808},{level:3,title:"Activity Scheduled To Start Latency",slug:"activity-scheduled-to-start-latency",normalizedTitle:"activity scheduled to start latency",charIndex:24146},{level:3,title:"Activity Failure",slug:"activity-failure",normalizedTitle:"activity failure",charIndex:25061},{level:3,title:"Service API success rate",slug:"service-api-success-rate",normalizedTitle:"service api success rate",charIndex:26435},{level:3,title:"Service API Latency",slug:"service-api-latency",normalizedTitle:"service api latency",charIndex:27418},{level:3,title:"Service API Breakdown",slug:"service-api-breakdown",normalizedTitle:"service api breakdown",charIndex:27768},{level:3,title:"Service API Error Breakdown",slug:"service-api-error-breakdown",normalizedTitle:"service api error 
breakdown",charIndex:28087},{level:3,title:"Max Event Blob size",slug:"max-event-blob-size",normalizedTitle:"max event blob size",charIndex:28316},{level:3,title:"Max History Size",slug:"max-history-size",normalizedTitle:"max history size",charIndex:28917},{level:3,title:"Max History Length",slug:"max-history-length",normalizedTitle:"max history length",charIndex:29680},{level:2,title:"Cadence History Service Monitoring",slug:"cadence-history-service-monitoring",normalizedTitle:"cadence history service monitoring",charIndex:30220},{level:3,title:"History shard movements",slug:"history-shard-movements",normalizedTitle:"history shard movements",charIndex:30351},{level:3,title:"Transfer Tasks Per Second",slug:"transfer-tasks-per-second",normalizedTitle:"transfer tasks per second",charIndex:31134},{level:3,title:"Timer Tasks Per Second",slug:"timer-tasks-per-second",normalizedTitle:"timer tasks per second",charIndex:31491},{level:3,title:"Transfer Tasks Per Domain",slug:"transfer-tasks-per-domain",normalizedTitle:"transfer tasks per domain",charIndex:31844},{level:3,title:"Timer Tasks Per Domain",slug:"timer-tasks-per-domain",normalizedTitle:"timer tasks per domain",charIndex:32026},{level:3,title:"Transfer Latency by Type",slug:"transfer-latency-by-type",normalizedTitle:"transfer latency by type",charIndex:32202},{level:3,title:"Timer Task Latency by type",slug:"timer-task-latency-by-type",normalizedTitle:"timer task latency by type",charIndex:33084},{level:3,title:"NOTE: Task Queue Latency vs Executing Latency vs Processing Latency In Transfer & Timer Task Latency Metrics",slug:"note-task-queue-latency-vs-executing-latency-vs-processing-latency-in-transfer-timer-task-latency-metrics",normalizedTitle:"note: task queue latency vs executing latency vs processing latency in transfer & timer task latency metrics",charIndex:null},{level:3,title:"Transfer Task Latency Per Domain",slug:"transfer-task-latency-per-domain",normalizedTitle:"transfer task latency per domain",charIndex:34475},{level:3,title:"Timer Task Latency Per Domain",slug:"timer-task-latency-per-domain",normalizedTitle:"timer task latency per domain",charIndex:34632},{level:3,title:"History API per Second",slug:"history-api-per-second",normalizedTitle:"history api per second",charIndex:34786},{level:3,title:"History API Errors per Second",slug:"history-api-errors-per-second",normalizedTitle:"history api errors per second",charIndex:34933},{level:3,title:"Max History Size",slug:"max-history-size-2",normalizedTitle:"max history size",charIndex:28917},{level:3,title:"Max History Length",slug:"max-history-length-2",normalizedTitle:"max history length",charIndex:29680},{level:3,title:"Max Event Blob Size",slug:"max-event-blob-size-2",normalizedTitle:"max event blob size",charIndex:38417},{level:2,title:"Cadence Matching Service Monitoring",slug:"cadence-matching-service-monitoring",normalizedTitle:"cadence matching service monitoring",charIndex:38816},{level:3,title:"Matching APIs per Second",slug:"matching-apis-per-second",normalizedTitle:"matching apis per second",charIndex:39200},{level:3,title:"Matching API Errors per Second",slug:"matching-api-errors-per-second",normalizedTitle:"matching api errors per second",charIndex:39392},{level:3,title:"Matching Regular API Latency",slug:"matching-regular-api-latency",normalizedTitle:"matching regular api latency",charIndex:43179},{level:3,title:"Sync Match Latency:",slug:"sync-match-latency",normalizedTitle:"sync match latency:",charIndex:43446},{level:3,title:"Async match 
Latency",slug:"async-match-latency",normalizedTitle:"async match latency",charIndex:43936},{level:2,title:"Cadence Default Persistence Monitoring",slug:"cadence-default-persistence-monitoring",normalizedTitle:"cadence default persistence monitoring",charIndex:44299},{level:3,title:"Persistence Availability",slug:"persistence-availability",normalizedTitle:"persistence availability",charIndex:44408},{level:3,title:"Persistence By Service TPS",slug:"persistence-by-service-tps",normalizedTitle:"persistence by service tps",charIndex:45440},{level:3,title:"Persistence By Operation TPS",slug:"persistence-by-operation-tps",normalizedTitle:"persistence by operation tps",charIndex:45738},{level:3,title:"Persistence By Operation Latency",slug:"persistence-by-operation-latency",normalizedTitle:"persistence by operation latency",charIndex:46098},{level:3,title:"Persistence Error By Operation Count",slug:"persistence-error-by-operation-count",normalizedTitle:"persistence error by operation count",charIndex:46759},{level:2,title:"Cadence Advanced Visibility Persistence Monitoring(if applicable)",slug:"cadence-advanced-visibility-persistence-monitoring-if-applicable",normalizedTitle:"cadence advanced visibility persistence monitoring(if applicable)",charIndex:50700},{level:3,title:"Persistence Availability",slug:"persistence-availability-2",normalizedTitle:"persistence availability",charIndex:44408},{level:3,title:"Persistence By Service TPS",slug:"persistence-by-service-tps-2",normalizedTitle:"persistence by service tps",charIndex:45440},{level:3,title:"Persistence By Operation TPS(read: ES, write: Kafka)",slug:"persistence-by-operation-tps-read-es-write-kafka",normalizedTitle:"persistence by operation tps(read: es, write: kafka)",charIndex:51861},{level:3,title:"Persistence By Operation Latency(in seconds) (read: ES, write: Kafka)",slug:"persistence-by-operation-latency-in-seconds-read-es-write-kafka",normalizedTitle:"persistence by operation latency(in seconds) (read: es, write: kafka)",charIndex:52153},{level:3,title:"Persistence Error By Operation Count (read: ES, write: Kafka)",slug:"persistence-error-by-operation-count-read-es-write-kafka",normalizedTitle:"persistence error by operation count (read: es, write: kafka)",charIndex:52474},{level:3,title:"Kafka->ES processor counter",slug:"kafka-es-processor-counter",normalizedTitle:"kafka->es processor counter",charIndex:null},{level:3,title:"Kafka->ES processor error",slug:"kafka-es-processor-error",normalizedTitle:"kafka->es processor error",charIndex:null},{level:3,title:"Kafka->ES processor latency",slug:"kafka-es-processor-latency",normalizedTitle:"kafka->es processor latency",charIndex:null},{level:2,title:"Cadence Dependency Metrics Monitor suggestion",slug:"cadence-dependency-metrics-monitor-suggestion",normalizedTitle:"cadence dependency metrics monitor suggestion",charIndex:54250},{level:3,title:"Computing platform metrics for Cadence deployment",slug:"computing-platform-metrics-for-cadence-deployment",normalizedTitle:"computing platform metrics for cadence deployment",charIndex:54300},{level:3,title:"Database",slug:"database",normalizedTitle:"database",charIndex:54488},{level:3,title:"Kafka (if applicable)",slug:"kafka-if-applicable",normalizedTitle:"kafka (if applicable)",charIndex:54651},{level:3,title:"ElasticSearch (if applicable)",slug:"elasticsearch-if-applicable",normalizedTitle:"elasticsearch (if applicable)",charIndex:54709},{level:2,title:"Cadence Service SLO 
Recommendation",slug:"cadence-service-slo-recommendation",normalizedTitle:"cadence service slo recommendation",charIndex:54775}],codeSwitcherOptions:{},headersStr:"Instructions DataDog dashboard templates Grafana+Prometheus dashboard templates Periodic tests(Canary) for health check Cadence Frontend Monitoring Service Availability(server metrics) StartWorkflow Per Second Activities Started Per Second Decisions Started Per Second Periodical Test Suite Success(aka Canary) Frontend all API per second Frontend API per second (breakdown per operation) Frontend API errors per second(breakdown per operation) Frontend Regular API Latency Frontend ListWorkflow API Latency Frontend Long Poll API Latency Frontend Get History/Query Workflow API Latency Frontend WorkflowClient API per seconds by domain Cadence Application Monitoring Workflow Start and Successful completion Workflow Failure Decision Poll Counters DecisionTasks Scheduled per second Decision Scheduled To Start Latency Decision Execution Failure Decision Execution Timeout Workflow End to End Latency Workflow Panic and NonDeterministicError Workflow Sticky Cache Hit Rate and Miss Count Activity Task Operations Local Activity Task Operations Activity Execution Latency Activity Poll Counters ActivityTasks Scheduled per second Activity Scheduled To Start Latency Activity Failure Service API success rate Service API Latency Service API Breakdown Service API Error Breakdown Max Event Blob size Max History Size Max History Length Cadence History Service Monitoring History shard movements Transfer Tasks Per Second Timer Tasks Per Second Transfer Tasks Per Domain Timer Tasks Per Domain Transfer Latency by Type Timer Task Latency by type NOTE: Task Queue Latency vs Executing Latency vs Processing Latency In Transfer & Timer Task Latency Metrics Transfer Task Latency Per Domain Timer Task Latency Per Domain History API per Second History API Errors per Second Max History Size Max History Length Max Event Blob Size Cadence Matching Service Monitoring Matching APIs per Second Matching API Errors per Second Matching Regular API Latency Sync Match Latency: Async match Latency Cadence Default Persistence Monitoring Persistence Availability Persistence By Service TPS Persistence By Operation TPS Persistence By Operation Latency Persistence Error By Operation Count Cadence Advanced Visibility Persistence Monitoring(if applicable) Persistence Availability Persistence By Service TPS Persistence By Operation TPS(read: ES, write: Kafka) Persistence By Operation Latency(in seconds) (read: ES, write: Kafka) Persistence Error By Operation Count (read: ES, write: Kafka) Kafka->ES processor counter Kafka->ES processor error Kafka->ES processor latency Cadence Dependency Metrics Monitor suggestion Computing platform metrics for Cadence deployment Database Kafka (if applicable) ElasticSearch (if applicable) Cadence Service SLO Recommendation",content:"# Cluster Monitoring\n\n\n# Instructions\n\nCadence emits metrics for both Server and client libraries:\n\n * Follow this example to emit client side metrics for Golang client\n \n * You can use other metrics emitter like M3\n * Alternatively, you can implement the tally Reporter interface\n\n * Follow this example to emit client side metrics for Java client if using 3.x client, or this example if using 2.x client.\n \n * You can use other metrics emitter like M3\n * Alternatively, you can implement the tally Reporter interface\n\n * For running Cadence services in production, please follow this example of hemlchart to 
The rest of the instructions use the local environment as an example.\n\nFor testing a local server emitting metrics to Prometheus, the easiest way is to use docker-compose to start a local Cadence instance.\n\nMake sure to update the prometheus_config.yml to add \"host.docker.internal:9098\" to the scrape list before starting the docker-compose:\n\nglobal:\n scrape_interval: 5s\n external_labels:\n monitor: 'cadence-monitor'\nscrape_configs:\n - job_name: 'prometheus'\n static_configs:\n - targets: # addresses to scrape\n - 'cadence:9090'\n - 'cadence:8000'\n - 'cadence:8001'\n - 'cadence:8002'\n - 'cadence:8003'\n - 'host.docker.internal:9098'\n\n\nNote: host.docker.internal may not work for some docker versions\n\n * After updating the prometheus_config.yaml as above, run docker-compose up to start the local Cadence instance\n\n * Go to the sample repo, build the helloworld sample with make helloworld, run the worker with ./bin/helloworld -m worker, and then in another shell start a workflow with ./bin/helloworld\n\n * Go to your local Prometheus dashboard, where you should be able to check the metrics emitted by handler from client/frontend/matching/history/sysWorker and confirm your services are healthy through targets\n\n * Go to local Grafana, and log in as admin/admin.\n\n * Configure Prometheus as a datasource: use http://host.docker.internal:9090 as the URL of Prometheus.\n\n * Import the Grafana dashboard templates as JSON files.\n\nThe client side dashboard looks like this:\n\nAnd the basic server dashboard:\n\n\n# DataDog dashboard templates\n\nThis package contains examples of Cadence dashboards with DataDog.\n\n * Cadence-Client is the dashboard that includes all the metrics to help you understand Cadence client behavior. Most of these metrics are emitted by the client SDKs, with a few exceptions from the server side (for example, workflow timeout).\n\n * Cadence-Server is the server dashboard that you can use to monitor and understand the health and status of your Cadence cluster.\n\nTo use DataDog with Cadence, follow this instruction to collect Prometheus metrics using the DataDog agent.\n\nNOTE1: don't forget to adjust max_returned_metrics to a higher number (e.g. 100000). Otherwise the DataDog agent won't be able to collect all metrics (the default is 2000).\n\nNOTE2: the template contains the templating variables $App and $Availability_Zone. Feel free to remove them if you don't have them in your setup.
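\n\nAs a rough illustration only (the exact file layout depends on your DataDog Agent version, and the URL below is a placeholder), the Prometheus check config with the higher limit could look like:\n\ninstances:\n - prometheus_url: http://localhost:8001/metrics\n namespace: cadence\n metrics:\n - \"*\"\n max_returned_metrics: 100000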
\n\n# Grafana+Prometheus dashboard templates\n\nThis package contains examples of Cadence dashboards with Prometheus.\n\n * Cadence-Client is the dashboard of client metrics, plus a few server side metrics that belong to the client side but have to be emitted by the server (for example, workflow timeout).\n\n * Cadence-Server-Basic is the basic server dashboard to monitor/navigate the health/status of a Cadence cluster.\n\n * Apart from the basic server dashboard, it's recommended to set up dashboards for the different components of the Cadence server: Frontend, History, Matching, Worker, Persistence, Archival, etc. Any contribution is always welcome to enrich the existing templates or add new ones!\n\n\n# Periodic tests(Canary) for health check\n\nIt's recommended that you run periodic tests to get signals on the health of your cluster. Please follow the instructions in our canary package to set these tests up.\n\n\n# Cadence Frontend Monitoring\n\nThis section describes recommended dashboards for monitoring Cadence services in your cluster. The structure mostly follows the DataDog dashboard template listed above.\n\n\n# Service Availability(server metrics)\n\n * Meaning: the availability of the Cadence server, based on server metrics.\n * Suggested monitor: below 95% for > 5 min triggers an alert; below 99% for > 5 min triggers a warning\n * Monitor action: when fired, check if there are any persistence errors. If so, check the health of the database (it may need a restart or scale up). If not, check the error logs.\n * Datadog query example\n\nsum:cadence_frontend.cadence_errors{*}\nsum:cadence_frontend.cadence_requests{*}\n(1 - a / b) * 100\n\n
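As an illustration, a hypothetical Datadog monitor expression for the availability alert above (thresholds per the suggested monitor; adjust to your needs) could look like:\n\navg(last_5m):(1 - sum:cadence_frontend.cadence_errors{*}.as_count() / sum:cadence_frontend.cadence_requests{*}.as_count()) * 100 < 95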
\n\n# StartWorkflow Per Second\n\n * Meaning: how many workflows are started per second. This helps determine if your server is overloaded.\n * Suggested monitor: this is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{(operation IN (startworkflowexecution,signalwithstartworkflowexecution))} by {operation}.as_rate()\n\n\n\n# Activities Started Per Second\n\n * Meaning: how many activities are started per second. Helps determine if the server is overloaded.\n * Suggested monitor: this is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{operation:pollforactivitytask} by {operation}.as_rate()\n\n\n\n# Decisions Started Per Second\n\n * Meaning: how many workflow decisions are started per second. Helps determine if the server is overloaded.\n * Suggested monitor: this is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{operation:pollfordecisiontask} by {operation}.as_rate()\n\n\n\n# Periodical Test Suite Success(aka Canary)\n\n * Meaning: the success counter of the canary test suite\n * Suggested monitor: monitor needed. If fired, look at the failed canary test case and investigate the reason for the failure.\n * Datadog query example\n\nsum:cadence_history.workflow_success{workflowtype:workflow_sanity} by {workflowtype}.as_count()\n\n\n\n# Frontend all API per second\n\n * Meaning: all API calls on the frontend per second. Information only.\n * Suggested monitor: this is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{*}.as_rate()\n\n\n\n# Frontend API per second (breakdown per operation)\n\n * Meaning: API calls on the frontend per second. Information only.\n * Suggested monitor: this is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{*} by {operation}.as_rate()\n\n\n\n# Frontend API errors per second(breakdown per operation)\n\n * Meaning: API errors on the frontend per second. Information only.\n * Suggested monitor: this is to facilitate investigation. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_errors{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_bad_request{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_not_active{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_service_busy{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_entity_not_exists{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_execution_already_started{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_already_exists{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_query_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_limit_exceeded{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_context_timeout{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_retry_task{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_bad_binary{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_client_version_not_supported{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_incomplete_history{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_nondeterministic{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_unauthorized{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_authorize_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate() \n\n\n * cadence_errors counts internal service errors.\n * any cadence_errors_* is a client side error\n\n\n# Frontend Regular API Latency\n\n * Meaning: the latency of the regular core APIs -- excluding the long-poll/queryWorkflow/getHistory/ListWorkflow/CountWorkflow APIs.\n * Suggested monitor: p95 latency over 1.5 seconds for any operation triggers a warning; over 2 seconds triggers an alert\n * Monitor action: if fired, investigate the database read/write latency. 
May need to throttle some spiky traffic from certain domains, or scale up the database\n * Datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{(operation NOT IN (pollfordecisiontask,pollforactivitytask,getworkflowexecutionhistory,queryworkflow,listworkflowexecutions,listclosedworkflowexecutions,listopenworkflowexecutions)) AND $pXXLatency} by {operation}\n\n\n\n# Frontend ListWorkflow API Latency\n\n * Meaning: the latency of the ListWorkflow API.\n * Monitor: p95 latency over 2 seconds for any operation triggers a warning; over 3 seconds triggers an alert\n * Monitor action: if fired, investigate the ElasticSearch read latency. May need to throttle some spiky traffic from certain domains, or scale up the ElasticSearch cluster.\n * Datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{(operation IN (listclosedworkflowexecutions,listopenworkflowexecutions,listworkflowexecutions,countworkflowexecutions)) AND $pXXLatency} by {operation}\n\n\n\n# Frontend Long Poll API Latency\n\n * Meaning: long poll means that the worker is waiting for a task, so the latency is an indicator of how busy the worker is. Poll for activity task and poll for decision task are the types of long poll requests. The API call times out at 50 seconds if no task can be picked up. A very low latency could mean that more workers need to be added.\n * Suggested monitor: no monitor needed, as long latency is expected.\n * Datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{$pXXLatency,operation:pollforactivitytask} by {operation}\navg:cadence_frontend.cadence_latency.quantile{$pXXLatency,operation:pollfordecisiontask} by {operation}\n\n\n\n# Frontend Get History/Query Workflow API Latency\n\n * Meaning: the GetHistory API acts like a long poll API, but there’s no explicit timeout. Long-poll of GetHistory is used when a WorkflowClient is waiting for the result of the workflow (essentially, WorkflowExecutionCompletedEvent). This latency depends on the time it takes for the workflow to complete. QueryWorkflow API latency is also unpredictable, as it depends on the availability and performance of the workflow workers, which are owned by the application and the workflow implementation (it may require replaying history).\n * Suggested monitor: no monitor needed\n * Datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{(operation IN (getworkflowexecutionhistory,queryworkflow)) AND $pXXLatency} by {operation}\n\n\n\n# Frontend WorkflowClient API per seconds by domain\n\n * Meaning: shows which domains are making the most requests using WorkflowClient (excluding worker APIs like PollForDecisionTask and RespondDecisionTaskCompleted). Used for troubleshooting. In the future it can be used to set some rate limiting per domain.\n * Suggested monitor: no monitor needed.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{(operation IN (signalwithstartworkflowexecution,signalworkflowexecution,startworkflowexecution,terminateworkflowexecution,resetworkflowexecution,requestcancelworkflowexecution,listworkflowexecutions))} by {domain,operation}.as_rate()\n\n\n\n# Cadence Application Monitoring\n\nThis section describes the recommended dashboards for monitoring a Cadence application using metrics emitted by the SDK. 
See the setup section about how to collect those metrics.\n\n\n# Workflow Start and Successful completion\n\n * Workflow successfully started/signalWithStart and completed/canceled/continuedAsNew\n * Monitor: not recommended\n * Datadog query example\n\nsum:cadence_client.cadence_workflow_start{$Domain,$Tasklist,$WorkflowType} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_completed{$Domain,$Tasklist,$WorkflowType} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_canceled{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_continue_as_new{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_signal_with_start{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()\n\n\n\n# Workflow Failure\n\n * Metrics for all types of failures, including workflow failures (uncaught exceptions), workflow timeouts and terminations.\n * For timeout and termination, the workflow worker doesn’t have a chance to emit metrics when it’s terminated, so those metrics come from the history service\n * Monitor: the application should set monitors on timeout and failure to make sure workflows are not failing. Cancel/terminate are usually triggered by humans intentionally.\n * When the monitor fires, go to the Cadence UI to find the failed workflows and investigate the workflow history to understand the type of failure\n * Datadog query example\n\nsum:cadence_client.cadence_workflow_failed{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env}.as_count()\nsum:cadence_history.workflow_failed{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_terminate{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_timeout{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()\n\n\n\n# Decision Poll Counters\n\n * Indicates whether the workflow worker is available and is polling tasks. If the worker is not available, no counters will show. Can also be used to check if the worker is using the right task list. “No task” poll type means that the worker exists and is idle. The timeout for this long poll API is 50 seconds; if no task is received within 50 seconds, an empty response is returned and another long poll request will be sent.\n * Monitor: the application should set a monitor on this to make sure workers are available\n * When fired, investigate the worker deployment to see why the workers are not available, and also check if they are using the right domain/tasklist\n * Datadog query example\n\nsum:cadence_client.cadence_decision_poll_total{$Domain,$Tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_failed{$Domain,$Tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_no_task{$Domain,$Tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_succeed{$Domain,$Tasklist}.as_count()\n\n
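For example, a hypothetical monitor that fires when a tasklist has had no decision polls at all for a while (suggesting no workers are connected):\n\nsum(last_10m):sum:cadence_client.cadence_decision_poll_total{$Domain,$Tasklist}.as_count() < 1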
\n\n# DecisionTasks Scheduled per second\n\n * Indicates how many decision tasks are scheduled\n * Monitor: not recommended -- information only, to know whether or not a tasklist is overloaded\n * Datadog query example\n\nsum:cadence_matching.cadence_requests_per_tl{*,operation:adddecisiontask,$Tasklist,$Domain} by {tasklist,domain}.as_rate()\n\n\n\n# Decision Scheduled To Start Latency\n\n * If this latency is too high, then either the worker is not available or too busy after the task has been scheduled, or the task list is overloaded (confirmed by the DecisionTasks Scheduled per second widget). By default a task list only has one partition, and a partition can only be owned by one host, so the throughput of a task list is limited. More task lists can be added to scale, or a scalable task list can be used to add more partitions.\n * Monitor: the application can set a monitor on this to make sure the latency is tolerable\n * When fired, check if worker capacity is enough, then check if the tasklist is overloaded. If needed, contact the Cadence cluster admin to enable a scalable tasklist to add more partitions to the tasklist\n * Datadog query example\n\navg:cadence_client.cadence_decision_scheduled_to_start_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist}\nmax:cadence_client.cadence_decision_scheduled_to_start_latency.max{$Domain,$Tasklist} by {env,domain,tasklist}\nmax:cadence_client.cadence_decision_scheduled_to_start_latency.95percentile{$Domain,$Tasklist} by {env,domain,tasklist}\n\n\n\n# Decision Execution Failure\n\n * This means some critical bug in the workflow code is causing decision task execution to fail\n * Monitor: the application should set a monitor on this to make sure there are no consistent failures\n * When fired, you may need to terminate the problematic workflows to mitigate the issue. After you identify the bugs, you can fix the code and then reset the workflows to recover\n * Datadog query example\n\nsum:cadence_client.cadence_decision_execution_failed{$Domain,$Tasklist} by {tasklist,workflowtype}.as_count()\n\n\n\n# Decision Execution Timeout\n\n * This means some critical bug in the workflow code is causing decision task execution to time out\n * Monitor: the application should set a monitor on this to make sure there are no consistent timeouts\n * When fired, you may need to terminate the problematic workflows to mitigate the issue. After you identify the bugs, you can fix the code and then reset the workflows to recover\n * Datadog query example\n\nsum:cadence_history.start_to_close_timeout{operation:timeractivetaskdecision*,$Domain}.as_count()\n\n\n\n# Workflow End to End Latency\n\n * This is for the client application to track its SLOs. For example, if you expect a workflow to take duration d to complete, you can use this latency to set a monitor.\n * Monitor: the application can monitor this metric if it expects workflows to complete within a certain duration.\n * When fired, investigate the workflow history to see why the workflow takes longer than expected to complete\n * Datadog query example\n\navg:cadence_client.cadence_workflow_endtoend_latency.median{$Domain,$Tasklist,$WorkflowType} by {env,domain,tasklist,workflowtype}\navg:cadence_client.cadence_workflow_endtoend_latency.95percentile{$Domain,$Tasklist,$WorkflowType} by {env,domain,tasklist,workflowtype}\n\n\n\n# Workflow Panic and NonDeterministicError\n\n * These errors mean that there is a bug in the code and the deploy should be rolled back.\n * A monitor should be set on this metric\n * When fired, you may roll back the deployment to mitigate the issue. Usually this is caused by a bad (non-backward-compatible) code change. After the rollback, look at your worker error logs to see where the bug is.\n * Datadog query example\n\nsum:cadence_client.cadence_worker_panic{$Domain} by {env,domain}.as_rate()\nsum:cadence_client.cadence_non_deterministic_error{$Domain} by {env,domain}.as_rate()\n\n\n\n# Workflow Sticky Cache Hit Rate and Miss Count\n\n * This metric can be used for performance optimization.
 Performance can be improved by adding more worker instances, or by adjusting the workerOption (GoSDK) or WorkerFactoryOption (Java SDK). A cache hit rate that is too low means workers have to replay history to rebuild the workflow stack when executing a decision task. Acceptable values depend on the history size:\n * If less than 1MB, it’s okay for the hit rate to be lower than 50%\n * If greater than 1MB, the hit rate should be greater than 50%\n * If greater than 5MB, it should be greater than 60%\n * If greater than 10MB, it should be greater than 70%\n * If greater than 20MB, it should be greater than 80%\n * If greater than 30MB, it should be greater than 90%\n * Workflow history size should never be greater than 50MB.\n * A monitor can be set on this metric, if performance is important.\n * When fired, adjust the stickyCacheSize in the WorkerFactoryOption, or add more workers\n * Datadog query example\n\nsum:cadence_client.cadence_sticky_cache_miss{$Domain} by {env,domain}.as_count()\nsum:cadence_client.cadence_sticky_cache_hit{$Domain} by {env,domain}.as_count()\n(b / (a+b)) * 100\n\n
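For example, a hypothetical monitor on the hit rate computed above, warning below 50% (per the small-history guidance; raise the threshold for larger histories):\n\navg(last_15m):(sum:cadence_client.cadence_sticky_cache_hit{$Domain}.as_count() / (sum:cadence_client.cadence_sticky_cache_hit{$Domain}.as_count() + sum:cadence_client.cadence_sticky_cache_miss{$Domain}.as_count())) * 100 < 50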
\n\n# Activity Task Operations\n\n * Activity started/completed counters\n * Monitor: not recommended\n * Datadog query example\n\nsum:cadence_client.cadence_activity_task_failed{$Domain,$Tasklist} by {activitytype}.as_rate()\nsum:cadence_client.cadence_activity_task_completed{$Domain,$Tasklist} by {activitytype}.as_rate()\nsum:cadence_client.cadence_activity_task_timeouted{$Domain,$Tasklist} by {activitytype}.as_rate()\n\n\n\n# Local Activity Task Operations\n\n * Local activity execution counters\n * Monitor: not recommended\n * Datadog query example\n\nsum:cadence_client.cadence_local_activity_total{$Domain,$Tasklist} by {activitytype}.as_count()\n\n\n\n# Activity Execution Latency\n\n * If it’s expected that an activity will take x amount of time to complete, a monitor on this metric can help enforce that expectation.\n * Monitor: the application can set a monitor on this if it expects workflows to start/complete activities within a certain latency\n * When fired, investigate the activity code and its dependencies\n * Datadog query example\n\navg:cadence_client.cadence_activity_execution_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_execution_latency.max{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}\n\n\n\n# Activity Poll Counters\n\n * Indicates whether the activity worker is available and is polling tasks. If the worker is not available, no counters will show. Can also be used to check if the worker is using the right task list. “No task” poll type means that the worker exists and is idle. The timeout for this long poll API is 50 seconds; if no task is received within that 50 seconds, an empty response is returned and another long poll request will be sent.\n * Monitor: the application can set a monitor on this to make sure activity workers are available\n * When fired, investigate the worker deployment to see why the workers are not available, and also check if they are using the right domain/tasklist\n * Datadog query example\n\nsum:cadence_client.cadence_activity_poll_total{$Domain,$Tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_failed{$Domain,$Tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_succeed{$Domain,$Tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_no_task{$Domain,$Tasklist} by {activitytype}.as_count()\n\n\n\n# ActivityTasks Scheduled per second\n\n * Indicates how many activity tasks are scheduled\n * Monitor: not recommended -- information only, to know whether or not a tasklist is overloaded\n * Datadog query example\n\nsum:cadence_matching.cadence_requests_per_tl{*,operation:addactivitytask,$Tasklist,$Domain} by {tasklist,domain}.as_rate()\n\n\n\n# Activity Scheduled To Start Latency\n\n * If the latency is too high, either the worker is not available or too busy, or there are too many activities scheduled onto the same tasklist and the tasklist is not scalable. Same as Decision Scheduled To Start Latency.\n * Monitor: the application should set a monitor on this\n * When fired, check if there are enough workers, then check if the tasklist is overloaded. If needed, contact the Cadence cluster admin to enable a scalable tasklist to add more partitions to the tasklist\n * Datadog query example\n\navg:cadence_client.cadence_activity_scheduled_to_start_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_scheduled_to_start_latency.max{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_scheduled_to_start_latency.95percentile{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}\n\n\n\n# Activity Failure\n\n * A monitor on this metric will alert the team that activities are failing. The activity timeout metrics are emitted by the history service, because a timeout causes a hard stop and the client doesn’t have time to emit metrics.\n * Monitor: the application can set a monitor on this\n * When fired, investigate the activity code and its dependencies\n * cadence_activity_execution_failed vs cadence_activity_task_failed: they only differ when using a RetryPolicy. The cadence_activity_task_failed counter increases per activity attempt; the cadence_activity_execution_failed counter increases only when the activity fails after all attempts\n * You should only monitor cadence_activity_execution_failed\n * Datadog query example\n\nsum:cadence_client.cadence_activity_execution_failed{$Domain} by {domain,env}.as_rate()\nsum:cadence_client.cadence_activity_task_panic{$Domain} by {domain,env}.as_count()\nsum:cadence_client.cadence_activity_task_failed{$Domain} by {domain,env}.as_rate()\nsum:cadence_client.cadence_activity_task_canceled{$Domain} by {domain,env}.as_count()\nsum:cadence_history.heartbeat_timeout{$Domain} by {domain,env}.as_count()\nsum:cadence_history.schedule_to_start_timeout{$Domain} by {domain,env}.as_rate()\nsum:cadence_history.start_to_close_timeout{$Domain} by {domain,env}.as_rate()\nsum:cadence_history.schedule_to_close_timeout{$Domain} by {domain,env}.as_count()\n\n\n\n# Service API success rate\n\n * The client’s experience of the service availability. 
It encompasses many APIs. Things that could affect the service’s API success rate include:\n * Service availability\n * The network could have issues.\n * A required API is not available.\n * Client side errors like EntityNotExists, WorkflowAlreadyStarted, etc., which mean that the application code has potential bugs in how it calls the Cadence service.\n * Monitor: the application can set a monitor on this\n * When fired, check the application logs to see if the errors are Cadence server errors or client side errors. Errors like EntityNotExists/ExecutionAlreadyStarted/QueryWorkflowFailed/etc. are client side errors, meaning that the application is misusing the APIs. If most errors are server side errors (internalServiceError), you can contact the Cadence admin.\n * Datadog query example\n\nsum:cadence_client.cadence_error{*} by {domain}.as_count()\nsum:cadence_client.cadence_request{*} by {domain}.as_count()\n(1 - a / b) * 100\n\n\n\n# Service API Latency\n\n * The latency of the APIs, excluding long poll APIs.\n * The application can set monitors on certain APIs, if necessary.\n * Datadog query example\n\navg:cadence_client.cadence_latency.95percentile{$Domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}\n\n\n\n# Service API Breakdown\n\n * A counter breakdown by API to help investigate availability\n * No monitor needed\n * Datadog query example\n\nsum:cadence_client.cadence_request{$Domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}.as_count()\n\n\n\n# Service API Error Breakdown\n\n * A counter breakdown by API error to help investigate availability\n * No monitor needed\n * Datadog query example\n\nsum:cadence_client.cadence_error{$Domain} by {cadence_metric_scope}.as_count()\n\n\n\n# Max Event Blob size\n\n * By default the max size is 2MB; if the input is greater than the max size, the server will reject the request. This is the size of a single history event, and it applies to any event input, like a start workflow event, start activity event, or signal event. It should never be greater than 2MB.\n * A monitor should be set on this metric.\n * When fired, please review the design/code ASAP to reduce the blob size. Reducing the input/output of workflow/activity/signal will help.\n * Datadog query example\n\nmax:cadence_history.event_blob_size.quantile{!domain:all,$Domain} by {domain}\n\n
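For example, a hypothetical multi-alert monitor that warns when any domain's largest event blob approaches the 2MB limit (threshold in bytes):\n\nmax(last_15m):max:cadence_history.event_blob_size.quantile{!domain:all} by {domain} > 1500000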
\n\n# Max History Size\n\n * Workflow history cannot grow indefinitely; it will cause replay issues. If the workflow exceeds the history’s max size, the workflow will be terminated automatically. The max size by default is 200 megabytes. As a suggestion for workflow design, workflow history should never grow greater than 50MB. Use continueAsNew to break long workflows into multiple runs.\n * A monitor should be set on this metric.\n * When fired, please review the design/code ASAP to reduce the history size. Reducing the input/output of workflow/activity/signal will help. You may also need to use ContinueAsNew to break a single execution into smaller pieces.\n * Datadog query example\n\nmax:cadence_history.history_size.quantile{!domain:all,$Domain} by {domain}\n\n\n\n# Max History Length\n\n * The number of events in the workflow history. It should never be greater than 50K (a workflow exceeding 200K events will be terminated by the server). Use continueAsNew to break long workflows into multiple runs.\n * A monitor should be set on this metric.\n * When fired, please review the design/code ASAP to reduce the history length. You may need to use ContinueAsNew to break a single execution into smaller pieces.\n * Datadog query example\n\nmax:cadence_history.history_count.quantile{!domain:all,$Domain} by {domain}\n\n\n\n# Cadence History Service Monitoring\n\nHistory is the most critical/core service for Cadence; it implements the workflow logic.\n\n\n# History shard movements\n\n * Should only happen during deployment or when a node restarts. If there’s shard movement without a deployment, that’s unexpected and there’s probably a performance issue. Shard ownership is assigned to a particular history host, so if a shard is moving it’ll be hard for the frontend service to route a request to that history shard and to find it.\n * A monitor can be set to be alerted on shard movements without deployment.\n * Datadog query example\n\nsum:cadence_history.membership_changed_count{operation:shardcontroller}\nsum:cadence_history.shard_closed_count{operation:shardcontroller}\nsum:cadence_history.sharditem_created_count{operation:shardcontroller}\nsum:cadence_history.sharditem_removed_count{operation:shardcontroller}\n\n
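For example, a hypothetical monitor that fires on shard movement outside of deployments (silence it during planned deploys):\n\nsum(last_10m):sum:cadence_history.shard_closed_count{operation:shardcontroller}.as_count() > 0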
\n\n# Transfer Tasks Per Second\n\n * A TransferTask is an internal background task that moves workflow state and transfers an action task from the history engine to another service (e.g. the matching service, ElasticSearch, etc.)\n * No monitor needed\n * Datadog query example\n\nsum:cadence_history.task_requests{operation:transferactivetask*} by {operation}.as_rate()\n\n\n\n# Timer Tasks Per Second\n\n * Timer tasks are tasks that are scheduled to be triggered at a given time in the future. For example, workflow.sleep() will wait an x amount of time, after which the task will be pushed somewhere for a worker to pick up.\n * Datadog query example\n\nsum:cadence_history.task_requests{operation:timeractivetask*} by {operation}.as_rate()\n\n\n\n# Transfer Tasks Per Domain\n\n * Count breakdown by domain\n * Datadog query example\n\nsum:cadence_history.task_requests_per_domain{operation:transferactive*} by {domain}.as_count()\n\n\n\n# Timer Tasks Per Domain\n\n * Count breakdown by domain\n * Datadog query example\n\nsum:cadence_history.task_requests_per_domain{operation:timeractive*} by {domain}.as_count()\n\n\n\n# Transfer Latency by Type\n\n * If this latency is too high, it’s an issue for workflows. For example, if transfer task latency is 5 seconds, then it takes 5 seconds for an activity/decision to actually receive the task.\n * Monitors should be set on the different types of latency. Note that queue_latency can go very high during deployment, and that is expected. See the NOTE below for an explanation.\n * When fired, check if it’s due to a persistence issue. If so, investigate the database (it may need to be scaled up). If not, see if the Cadence deployment (K8s instances) needs to be scaled up\n * Datadog query example\n\navg:cadence_history.task_latency.quantile{$pXXLatency,operation:transfer*} by {operation}\navg:cadence_history.task_latency_processing.quantile{$pXXLatency,operation:transfer*} by {operation}\navg:cadence_history.task_latency_queue.quantile{$pXXLatency,operation:transfer*} by {operation}\n\n\n\n# Timer Task Latency by type\n\n * If this latency is too high, it’s an issue for workflows. For example, if you set workflow.sleep() for 10 seconds and the timer latency is 5 secs, then the workflow will sleep for 15 seconds.\n * Monitors should be set on the different types of latency.\n * When fired, check if it’s due to a persistence issue. If so, investigate the database (it may need to be scaled up) [mostly]. If not, see if the Cadence deployment (K8s instances) needs to be scaled up\n * Datadog query example\n\navg:cadence_history.task_latency.quantile{$pXXLatency,operation:timer*} by {operation}\navg:cadence_history.task_latency_processing.quantile{$pXXLatency,operation:timer*} by {operation}\navg:cadence_history.task_latency_queue.quantile{$pXXLatency,operation:timer*} by {operation}\n\n\n\n# NOTE: Task Queue Latency vs Executing Latency vs Processing Latency In Transfer & Timer Task Latency Metrics\n\n * task_latency_queue: “Queue Latency” is the “end to end” latency for users. This latency can go to several minutes during deployment because of metrics being re-emitted (the actual latency is not that high)\n * task_latency: “Executing latency” is the time from submission to the executing pool to completion. It includes scheduling, retry and processing time of the task.\n * task_latency_processing: “Processing latency” is the processing time of a single attempt of the task (without retry)\n\n\n# Transfer Task Latency Per Domain\n\n * Latency breakdown by domain\n * No monitor needed.\n * Datadog query example: modify the above queries to use the domain tag.\n\n\n# Timer Task Latency Per Domain\n\n * Latency breakdown by domain\n * No monitor needed.\n * Datadog query example: modify the above queries to use the domain tag.\n\n\n# History API per Second\n\nInformation about the history API. Datadog query example\n\nsum:cadence_history.cadence_requests{*} by {operation}.as_rate()\n\n\n\n# History API Errors per Second\n\n * Information about the history API\n * No monitor needed\n * Datadog query example\n\nsum:cadence_history.cadence_errors{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_bad_request{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_domain_not_active{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_service_busy{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_entity_not_exists{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_execution_already_started{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_domain_already_exists{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_query_failed{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_limit_exceeded{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_context_timeout{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_retry_task{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_bad_binary{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_client_version_not_supported{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_incomplete_history{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_nondeterministic{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_unauthorized{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_authorize_failed{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_remote_syncmatch_failed{*} by 
# NOTE: Task Queue Latency vs Executing Latency vs Processing Latency in Transfer & Timer Task Latency Metrics

* task_latency_queue: "queue latency" is the end-to-end latency for users. It can climb to several minutes during a deployment because metrics are re-emitted (the actual latency is not that high).
* task_latency: "executing latency" is the time from submission to the executing pool until completion. It includes the scheduling, retry, and processing time of the task.
* task_latency_processing: "processing latency" is the processing time of a single attempt of the task (without retries).

# Transfer Task Latency Per Domain

* Latency breakdown by domain
* No monitor needed.
* Datadog query example: modify the queries above to use the domain tag.

# Timer Task Latency Per Domain

* Latency breakdown by domain
* No monitor needed.
* Datadog query example: modify the queries above to use the domain tag.

# History API per Second

Information about the history API. Datadog query example:

```
sum:cadence_history.cadence_requests{*} by {operation}.as_rate()
```

# History API Errors per Second

* Information about history API errors
* No monitor needed
* Datadog query example

```
sum:cadence_history.cadence_errors{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_bad_request{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_domain_not_active{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_service_busy{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_entity_not_exists{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_execution_already_started{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_domain_already_exists{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_query_failed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_limit_exceeded{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_context_timeout{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_retry_task{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_bad_binary{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_client_version_not_supported{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_incomplete_history{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_nondeterministic{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_unauthorized{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_authorize_failed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate()
```

* cadence_errors counts internal service errors.
* Any cadence_errors_* metric is a client-side error.

# Max History Size

The history size of a workflow cannot grow too large, otherwise it causes performance issues during replay. The soft limit is 200MB; a workflow exceeding it is terminated by the server.

* No monitor needed
* The Datadog query is the same as in the client section

# Max History Length

Similarly, the history length of a workflow (its number of events) cannot grow too large, otherwise it causes performance issues during replay. The soft limit is 200K events; a workflow exceeding it is terminated by the server. The usual mitigation is sketched below.

* No monitor needed
* The Datadog query is the same as in the client section
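The mitigation the client section of this guide recommends is ContinueAsNew: break long workflows into multiple runs so no single run's history approaches the limits. A minimal Go sketch of the pattern, assuming the go.uber.org/cadence SDK (the batch size, cursor argument, and activity loop are illustrative):

```go
package sample

import (
	"go.uber.org/cadence/workflow"
)

// BatchWorkflow processes work in bounded runs so history never approaches
// the 200K-event / 200MB soft limits. maxPerRun is an illustrative number;
// pick one that keeps each run's history comfortably small.
func BatchWorkflow(ctx workflow.Context, processed int) error {
	const maxPerRun = 1000
	for i := 0; i < maxPerRun; i++ {
		// ... workflow.ExecuteActivity(...) here; every scheduled activity
		// appends several events to this run's history ...
	}
	// Start a fresh run (with empty history) that carries the cursor forward.
	return workflow.NewContinueAsNewError(ctx, BatchWorkflow, processed+maxPerRun)
}
```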
# Max Event Blob Size

* The size of each event (determined, e.g., by the input/output of a workflow/activity/signal/childWorkflow/etc.) cannot be too large, otherwise it also causes performance issues. The soft limit is 2MB; requests exceeding it are rejected by the server, meaning the workflow won't be able to make any progress (see the pass-by-reference sketch after this list).
* No monitor needed
* The Datadog query is the same as in the client section
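A common way to stay under the blob limit is to pass references instead of payloads. Below is a hedged Go sketch of that pattern; the BlobStore interface, key scheme, and dependency wiring are illustrative assumptions, not APIs from this document:

```go
package sample

import "context"

// BlobStore is a hypothetical external store (e.g., S3-like); only small
// keys travel through Cadence history, never the payloads themselves.
type BlobStore interface {
	Get(ctx context.Context, key string) ([]byte, error)
	Put(ctx context.Context, key string, data []byte) error
}

// TranscodeActivities holds dependencies injected at worker startup, so the
// activity's recorded input/output stays tiny (just keys).
type TranscodeActivities struct {
	Store BlobStore
}

// Transcode takes a small input key and returns a small output key; the
// multi-megabyte payloads live in the blob store, outside workflow history.
func (a *TranscodeActivities) Transcode(ctx context.Context, inputKey string) (string, error) {
	payload, err := a.Store.Get(ctx, inputKey)
	if err != nil {
		return "", err
	}
	result := payload // ... real processing would transform the payload ...
	outputKey := inputKey + ".out"
	if err := a.Store.Put(ctx, outputKey, result); err != nil {
		return "", err
	}
	return outputKey, nil
}
```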
# Cadence Matching Service Monitoring

The matching service matches/assigns tasks from the Cadence service to workers; it gets the tasks from the history service. If workers are active, a task is matched immediately; this is called "sync match". If workers are not available, matching persists the task into the database and reloads it when workers are back; this is called "async match".

# Matching APIs per Second

* APIs processed by the matching service per second
* No monitor needed
* Datadog query example

```
sum:cadence_matching.cadence_requests{*} by {operation}.as_rate()
```

# Matching API Errors per Second

* API errors returned by the matching service per second
* No monitor needed
* Datadog query example

```
sum:cadence_matching.cadence_errors_per_tl{*} by {operation,domain,tasklist}.as_rate()
sum:cadence_matching.cadence_errors_bad_request_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_request{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_not_active_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_not_active{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_service_busy_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_service_busy{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_entity_not_exists_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_entity_not_exists{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_execution_already_started_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_execution_already_started{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_already_exists_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_already_exists{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_cancellation_already_requested_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_cancellation_already_requested{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_query_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_query_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_limit_exceeded_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_limit_exceeded{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_context_timeout_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_context_timeout{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_retry_task_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_retry_task{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_binary_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_binary{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_client_version_not_supported_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_client_version_not_supported{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_incomplete_history_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_incomplete_history{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_nondeterministic_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_nondeterministic{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_unauthorized_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_unauthorized{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_authorize_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_authorize_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_remote_syncmatch_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_remote_syncmatch_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_shard_ownership_lost{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_event_already_started{*} by {operation,domain,tasklist}
```

* cadence_errors counts internal service errors.
* Any cadence_errors_* metric is a client-side error.

# Matching Regular API Latency

* Regular APIs are the APIs excluding long polls
* No monitor needed
* Datadog query example

```
avg:cadence_matching.cadence_latency_per_tl.quantile{$pXXLatency,!operation:pollfor*,!operation:queryworkflow} by {operation,tasklist}
```

# Sync Match Latency

* If this latency is too high, the tasklist is probably overloaded. Consider using multiple tasklists (see the sketch after the Async Match Latency section), or enable the scalable tasklist feature by adding more partitions to the tasklist (the default is one). To confirm whether too many tasks are being added to the tasklist, use "AddTasks per second - domain, tasklist breakdown".
* No monitor needed
* Datadog query example

```
sum:cadence_matching.syncmatch_latency_per_tl.quantile{$pXXLatency} by {operation,tasklist,domain}
```

# Async Match Latency

* If a match is done asynchronously, the task is written to the database to be used later. This measures the time during which no worker is actively looking for tasks; if it is high, more workers are needed.
* No monitor needed
* Datadog query example

```
sum:cadence_matching.asyncmatch_latency_per_tl.quantile{$pXXLatency} by {operation,tasklist,domain}
```
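As referenced under Sync Match Latency, one mitigation for an overloaded tasklist is to spread work across multiple task lists. A rough Go sketch, assuming the go.uber.org/cadence worker package (the task list names, service wiring, and registration details are illustrative assumptions):

```go
package sample

import (
	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"
)

// startShardedWorkers starts one worker per task list shard so a single
// overloaded task list stops being the bottleneck. Workflow starters would
// then pick a shard (e.g., by hashing a business key) when starting
// executions, spreading the load evenly.
func startShardedWorkers(service workflowserviceclient.Interface, domain string) error {
	for _, taskList := range []string{"orders-tl-1", "orders-tl-2"} {
		w := worker.New(service, domain, taskList, worker.Options{})
		// Register workflows/activities on w before starting, e.g.:
		// w.RegisterWorkflow(BatchWorkflow)
		if err := w.Start(); err != nil {
			return err
		}
	}
	return nil
}
```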
# Cadence Default Persistence Monitoring

The following monitors should be set up for Cadence persistence.

# Persistence Availability

* The availability of the primary database for your Cadence server
* Monitor required: below 95% for > 5 min triggers an alert; below 99% triggers a Slack warning
* When fired, check whether it is due to a persistence issue. If so, investigate the database (it may need to be scaled up) [Mostly]. If not, see whether the Cadence deployment (K8s instances) needs to be scaled up.
* Datadog query example: each errors/requests pair feeds an availability formula, e.g. 5 errors out of 1,000 requests gives (1 - 5/1000) * 100 = 99.5%

```
sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()
sum:cadence_frontend.persistence_requests{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors{*} by {operation}.as_count()
sum:cadence_matching.persistence_requests{*} by {operation}.as_count()
sum:cadence_history.persistence_errors{*} by {operation}.as_count()
sum:cadence_history.persistence_requests{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors{*} by {operation}.as_count()
sum:cadence_worker.persistence_requests{*} by {operation}.as_count()
(1 - a / b) * 100
(1 - c / d) * 100
(1 - e / f) * 100
(1 - g / h) * 100
```

# Persistence By Service TPS

* No monitor needed
* Datadog query example

```
sum:cadence_frontend.persistence_requests{*}.as_rate()
sum:cadence_history.persistence_requests{*}.as_rate()
sum:cadence_worker.persistence_requests{*}.as_rate()
sum:cadence_matching.persistence_requests{*}.as_rate()
```

# Persistence By Operation TPS

* No monitor needed
* Datadog query example

```
sum:cadence_frontend.persistence_requests{*} by {operation}.as_rate()
sum:cadence_history.persistence_requests{*} by {operation}.as_rate()
sum:cadence_worker.persistence_requests{*} by {operation}.as_rate()
sum:cadence_matching.persistence_requests{*} by {operation}.as_rate()
```

# Persistence By Operation Latency

* Monitor required: alert if the p95 latency of any operation is greater than 1 second for 5 minutes; warn if greater than 0.5 seconds
* When fired, investigate the database (it may need to be scaled up) [Mostly]. High latency usually comes with errors or something else wrong with the database.
* Datadog query example

```
avg:cadence_matching.persistence_latency.quantile{$pXXLatency} by {operation}
avg:cadence_worker.persistence_latency.quantile{$pXXLatency} by {operation}
avg:cadence_frontend.persistence_latency.quantile{$pXXLatency} by {operation}
avg:cadence_history.persistence_latency.quantile{$pXXLatency} by {operation}
```
# Persistence Error By Operation Count

* Helps investigate availability issues
* No monitor needed
* Datadog query example

```
sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()
sum:cadence_history.persistence_errors{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors{*} by {operation}.as_count()

sum:cadence_frontend.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_bad_request{*} by {operation}.as_count()

sum:cadence_history.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_bad_request{*} by {operation}.as_count()

sum:cadence_matching.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_bad_request{*} by {operation}.as_count()

sum:cadence_worker.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_bad_request{*} by {operation}.as_count()
```

* persistence_errors is the total count of persistence errors.
* The persistence_errors_* metrics break the errors down by error type.

# Cadence Advanced Visibility Persistence Monitoring (if applicable)

Kafka & ElasticSearch are used only for visibility, so this section applies only if advanced visibility is enabled. For writing visibility records, the Cadence history service writes the records to Kafka, and the Cadence worker service then reads them from Kafka and writes them to ElasticSearch (in batches, as a performance optimization). For reading visibility records, the frontend service queries ElasticSearch directly.
# Persistence Availability

* The availability of the visibility persistence (Kafka/ElasticSearch) as seen by the Cadence server
* A monitor can be set
* Datadog query example

```
sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()
sum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_count()
sum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()
sum:cadence_history.elasticsearch_requests{*} by {operation}.as_count()
(1 - a / b) * 100
(1 - c / d) * 100
```

# Persistence By Service TPS

* The rate of persistence API calls by service
* No monitor needed
* Datadog query example

```
sum:cadence_frontend.elasticsearch_requests{*}.as_rate()
sum:cadence_history.elasticsearch_requests{*}.as_rate()
```

# Persistence By Operation TPS (read: ES, write: Kafka)

* The rate of persistence API calls by operation
* No monitor needed
* Datadog query example

```
sum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_rate()
sum:cadence_history.elasticsearch_requests{*} by {operation}.as_rate()
```

# Persistence By Operation Latency (in seconds) (read: ES, write: Kafka)

* The latency of persistence API calls
* No monitor needed
* Datadog query example

```
avg:cadence_frontend.elasticsearch_latency.quantile{$pXXLatency} by {operation}
avg:cadence_history.elasticsearch_latency.quantile{$pXXLatency} by {operation}
```

# Persistence Error By Operation Count (read: ES, write: Kafka)

* The errors of persistence API calls
* No monitor needed
* Datadog query example

```
sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()
sum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()
```

# Kafka->ES processor counter

* Metrics for a background process: consuming Kafka messages and writing them to ElasticSearch in batches
* Monitor that the background process is running (the counter metric is > 0)
* When fired, restart the Cadence service first to mitigate. Then look at the logs to see why the process stopped (process panic/error/etc.). Consider adding more pods (replicaCount) to the sys-worker service for higher availability.
* Datadog query example

```
sum:cadence_worker.es_processor_requests{*} by {operation}.as_count()
sum:cadence_worker.es_processor_retries{*} by {operation}.as_count()
```
# Kafka->ES processor error

* The error metrics of the processing logic above. Almost all errors are retryable, so they are usually not a problem.
* A monitor on errors is needed
* When fired, go to Kibana to find logs with the error details. The most common error is a missing ElasticSearch index field -- an index field was added in dynamic config but not in ElasticSearch, or vice versa. If so, follow the runbook to add the field to ElasticSearch or to dynamic config.
* Datadog query example

```
sum:cadence_worker.es_processor_error{*} by {operation}.as_count()
sum:cadence_worker.es_processor_corrupted_data{*} by {operation}.as_count()
```

# Kafka->ES processor latency

* The latency of the processing logic
* No monitor needed
* Datadog query example

```
sum:cadence_worker.es_processor_process_msg_latency.quantile{$pXXLatency} by {operation}.as_count()
```

# Cadence Dependency Metrics Monitor Suggestion

# Computing platform metrics for Cadence deployment

A Cadence server deployed on any computing platform (e.g., Kubernetes) should be monitored on the below metrics:

* CPU
* Memory

# Database

Depending on which database you use, you should monitor at least the below metrics:

* Disk Usage
* CPU
* Memory
* Read API latency
* Write API latency

# Kafka (if applicable)

* Disk Usage
* CPU
* Memory

# ElasticSearch (if applicable)

* Disk Usage
* CPU
* Memory

# Cadence Service SLO Recommendation

* Core API availability: 99.9%
* Core API latency: <1s
* Overall task dispatch latency: <2s (queue_latency for transfer tasks and timer tasks)
all services need to expose a http port to provide metircs like below\n\nmetrics:\n prometheus:\n timertype: \"histogram\"\n listenaddress: \"0.0.0.0:8001\"\n\n\nthe rest of the instruction uses local environment as an example.\n\nfor testing local server emitting metrics to promethues, the easiest way is to use docker-compose to start a local cadence instance.\n\nmake sure to update the prometheus_config.yml to add \"host.docker.internal:9098\" to the scrape list before starting the docker-compose:\n\nglobal:\n scrape_interval: 5s\n external_labels:\n monitor: 'cadence-monitor'\nscrape_configs:\n - job_name: 'prometheus'\n static_configs:\n - targets: # addresses to scrape\n - 'cadence:9090'\n - 'cadence:8000'\n - 'cadence:8001'\n - 'cadence:8002'\n - 'cadence:8003'\n - 'host.docker.internal:9098'\n\n\nnote: host.docker.internal may not work for some docker versions\n\n * after updating the prometheus_config.yaml as above, run docker-compose up to start the local cadence instance\n\n * go the the sample repo, build the helloworld sample make helloworld and run the worker ./bin/helloworld -m worker, and then in another shell start a workflow ./bin/helloworld\n\n * go to your local prometheus dashboard, you should be able to check the metrics emitted by handler from client/frontend/matching/history/sysworker and confirm your services are healthy through targets\n\n * go to local grafana , login as admin/admin.\n\n * configure prometheus as datasource: use http://host.docker.internal:9090 as url of prometheus.\n\n * import the grafana dashboard tempalte as json files.\n\nclient side dashboard looks like this:\n\nand server basic dashboard:\n\n\n# datadog dashboard templates\n\nthis package contains examples of cadence dashboards with datadog.\n\n * cadence-client is the dashboard that includes all the metrics to help you understand cadence client behavior. most of these metrics are emitted by the client sdks, with a few exceptions from server side (for example, workflow timeout).\n\n * cadence-server is the the server dashboard that you can use to monitor and undertand the health and status of your cadence cluster.\n\nto use datadog with cadence, follow this instruction to collect prometheus metrics using datadog agent.\n\nnote1: don't forget to adjust max_returned_metrics to a higher number(e.g. 100000). otherwise datadog agent won't be able to collect all metrics(default is 2000).\n\nnote2: the template contains templating variables $app and $availability_zone. feel free to remove them if you don't have them in your setup.\n\n\n# grafana+prometheus dashboard templates\n\nthis package contains examples of cadence dashboards with prometheus.\n\n * cadence-client is the dashboard of client metrics, and a few server side metrics that belong to client side but have to be emitted by server(for example, workflow timeout).\n\n * cadence-server-basic is the the basic server dashboard to monitor/navigate the health/status of a cadence cluster.\n\n * apart from the basic server dashboard, it's recommended to set up dashboards on different components for cadence server: frontend, history, matching, worker, persistence, archival, etc. any contribution is always welcome to enrich the existing templates or new templates!\n\n\n# periodic tests(canary) for health check\n\nit's recommended that you run periodical test to get signals on the healthness of your cluster. 
please following instructions in our canary package to set these tests up.\n\n\n# cadence frontend monitoring\n\nthis section describes recommended dashboards for monitoring cadence services in your cluster. the structure mostly follows the datadog dashboard template listed above.\n\n\n# service availability(server metrics)\n\n * meaning: the availability of cadence server using server metrics.\n * suggested monitor: below 95% > 5 min then alert, below 99% for > 5 min triggers a warning\n * monitor action: when fired, check if there is any persistence errors. if so then check the healthness of the database(may need to restart or scale up). if not then check the error logs.\n * datadog query example\n\nsum:cadence_frontend.cadence_errors{*}\nsum:cadence_frontend.cadence_requests{*}\n(1 - a / b) * 100\n\n\n\n# startworkflow per second\n\n * meaning: how many workflows are started per second. this helps determine if your server is overloaded.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{(operation in (startworkflowexecution,signalwithstartworkflowexecution))} by {operation}.as_rate()\n\n\n\n# activities started per second\n\n * meaning: how many activities are started per second. helps determine if the server is overloaded.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{operation:pollforactivitytask} by {operation}.as_rate()\n\n\n\n# decisions started per second\n\n * meaning: how many workflow decisions are started per second. helps determine if the server is overloaded.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{operation:pollfordecisiontask} by {operation}.as_rate()\n\n\n\n# periodical test suite success(aka canary)\n\n * meaning: the success counter of canary test suite\n * suggested monitor: monitor needed. if fired, look at the failed canary test case and investigate the reason of failure.\n * datadog query example\n\nsum:cadence_history.workflow_success{workflowtype:workflow_sanity} by {workflowtype}.as_count()\n\n\n\n# frontend all api per second\n\n * meaning: all api on frontend per second. information only.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{*}.as_rate()\n\n\n\n# frontend api per second (breakdown per operation)\n\n * meaning: api on frontend per second. information only.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{*} by {operation}.as_rate()\n\n\n\n# frontend api errors per second(breakdown per operation)\n\n * meaning: api error on frontend per second. information only.\n * suggested monitor: this is to facilitate investigation. 
no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_errors{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_bad_request{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_not_active{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_service_busy{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_entity_not_exists{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_execution_already_started{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_already_exists{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_query_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_limit_exceeded{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_context_timeout{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_retry_task{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_bad_binary{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_client_version_not_supported{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_incomplete_history{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_nondeterministic{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_unauthorized{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_authorize_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate() \n\n\n * cadence_errors is internal service errors.\n * any cadence_errors_* is client side error\n\n\n# frontend regular api latency\n\n * meaning: the latency of regular core api -- excluding long-poll/queryworkflow/gethistory/listworkflow/countworkflow api.\n * suggested monitor: 95% of all apis and of all operations that take over 1.5 seconds triggers a warning, over 2 seconds triggers an alert\n * monitor action: if fired, investigate the database read/write latency. 
may need to throttle some spiky traffic from certain domains, or scale up the database\n * datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{(operation not in (pollfordecisiontask,pollforactivitytask,getworkflowexecutionhistory,queryworkflow,listworkflowexecutions,listclosedworkflowexecutions,listopenworkflowexecutions)) and $pxxlatency} by {operation}\n\n\n\n# frontend listworkflow api latency\n\n * meaning: the latency of listworkflow api.\n * monitor: 95% of all apis and of all operations that take over 2 seconds triggers a warning, over 3 seconds triggers an alert\n * monitor action: if fired, investigate the elasticsearch read latency. may need to throttle some spiky traffic from certain domains, or scale up elasticsearch cluster.\n * datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{(operation in (listclosedworkflowexecutions,listopenworkflowexecutions,listworkflowexecutions,countworkflowexecutions)) and $pxxlatency} by {operation}\n\n\n\n# frontend long poll api latency\n\n * meaning: long poll means that the worker is waiting for a task. the latency is an indicator for how busy the worker is. poll for activity task and poll for decision task are the types of long poll requests.the api call times out at 50 seconds if no task can be picked up.a very low latency could mean that more workers need to be added.\n * suggested monitor: no monitor needed as long latency is expected.\n * datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{$pxxlatency,operation:pollforactivitytask} by {operation}\navg:cadence_frontend.cadence_latency.quantile{$pxxlatency,operation:pollfordecisiontask} by {operation}\n\n\n\n# frontend get history/query workflow api latency\n\n * meaning: gethistory api acts like a long poll api, but there’s no explicit timeout. long-poll of gethistory is being used when workflowclient is waiting for the result of the workflow(essentially, workflowexecutioncompletedevent). this latency depends on the time it takes for the workflow to complete. queryworkflow api latency is also unpredictable as it depends on the availability and performance of workflow workers, which are owned by the application and workflow implementation(may require replaying history).\n * suggested monitor: no monitor needed\n * datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{(operation in (getworkflowexecutionhistory,queryworkflow)) and $pxxlatency} by {operation}\n\n\n\n# frontend workflowclient api per seconds by domain\n\n * meaning: shows which domains are making the most requests using workflowclient(excluding worker api like pollfordecisiontask and responddecisiontaskcompleted). used for troubleshooting. in the future it can be used to set some rate limiting per domain.\n * suggested monitor: no monitor needed.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{(operation in (signalwithstartworkflowexecution,signalworkflowexecution,startworkflowexecution,terminateworkflowexecution,resetworkflowexecution,requestcancelworkflowexecution,listworkflowexecutions))} by {domain,operation}.as_rate()\n\n\n\n# cadence application monitoring\n\nthis section describes the recommended dashboards for monitoring cadence application using metrics emitted by sdk. 
see the setup section about how to collect those metrics.\n\n\n# workflow start and successful completion\n\n * workflow successfully started/signalwithstart and completed/canceled/continuedasnew\n * monitor: not recommended\n * datadog query example\n\nsum:cadence_client.cadence_workflow_start{$domain,$tasklist,$workflowtype} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_completed{$domain,$tasklist,$workflowtype} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_canceled{$domain,$tasklist,$workflowtype} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_continue_as_new{$domain,$tasklist,$workflowtype} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_signal_with_start{$domain,$tasklist,$workflowtype} by {workflowtype,domain,env,tasklist}.as_rate()\n\n\n\n# workflow failure\n\n * metrics for all types of failures, including workflow failures(throw uncaught exceptions), workflow timeout and termination.\n * for timeout and termination, workflow worker doesn’t have a chance to emit metrics when it’s terminate, so the metric comes from the history service\n * monitor: application should set monitor on timeout and failure to make sure workflow are not failing. cancel/terminate are usually triggered by human intentionally.\n * when the metrics fire, go to cadence ui to find the failed workflows and investigate the workflow history to understand the type of failure\n * datadog query example\n\nsum:cadence_client.cadence_workflow_failed{$domain,$tasklist,$workflowtype} by {workflowtype,domain,env}.as_count()\nsum:cadence_history.workflow_failed{$domain,$workflowtype} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_terminate{$domain,$workflowtype} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_timeout{$domain,$workflowtype} by {domain,env,workflowtype}.as_count()\n\n\n\n# decision poll counters\n\n * indicates if the workflow worker is available and is polling tasks. if the worker is not available no counters will show. can also check if the worker is using the right task list. “no task” poll type means that the worker exists and is idle. the timeout for this long poll api is 50 seconds. if no task is received within 50 seconds, then an empty response will be returned and another long poll request will be sent.\n * monitor: application can should monitor on it to make sure workers are available\n * when fires, investigate the worker deployment to see why they are not available, also check if they are using the right domain/tasklist\n * datadog query example\n\nsum:cadence_client.cadence_decision_poll_total{$domain,$tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_failed{$domain,$tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_no_task{$domain,$tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_succeed{$domain,$tasklist}.as_count()\n\n\n\n# decisiontasks scheduled per second\n\n * indicate how many decision tasks are scheduled\n * monitor: not recommended -- information only to know whether or not a tasklist is overloaded\n * datadog query example\n\nsum:cadence_matching.cadence_requests_per_tl{*,operation:adddecisiontask,$tasklist,$domain} by {tasklist,domain}.as_rate()\n\n\n\n# decision scheduled to start latency\n\n * if this latency is too high then either: the worker is not available or too busy after the task has been scheduled. 
the task list is overloaded(confirmed by decisiontaskscheduled per second widget). by default a task list only has one partition and a partition can only be owned by one host and so the throughput of a task list is limited. more task lists can be added to scale or a scalable task list can be used to add more partitions.\n * monitor: application can set monitor on it to make sure latency is tolerable\n * when fired, check if worker capacity is enough, then check if tasklist is overloaded. if needed, contact the cadence cluster admin to enable scalable tasklist to add more partitions to the tasklist\n * datadog query example\n\navg:cadence_client.cadence_decision_scheduled_to_start_latency.avg{$domain,$tasklist} by {env,domain,tasklist}\nmax:cadence_client.cadence_decision_scheduled_to_start_latency.max{$domain,$tasklist} by {env,domain,tasklist}\nmax:cadence_client.cadence_decision_scheduled_to_start_latency.95percentile{$domain,$tasklist} by {env,domain,tasklist}\n\n\n\n# decision execution failure\n\n * this means some critical bugs in workflow code causing decision task execution failure\n * monitor: application should set monitor on it to make sure no consistent failure\n * when fired, you may need to terminate the problematic workflows to mitigate the issue. after you identify the bugs, you can fix the code and then reset the workflow to recover\n * datadog query example\n\nsum:cadence_client.cadence_decision_execution_failed{$domain,$tasklist} by {tasklist,workflowtype}.as_count()\n\n\n\n# decision execution timeout\n\n * this means some critical bugs in workflow code causing decision task execution timeout\n * monitor: application should set monitor on it to make sure no consistent timeout\n * when fired, you may need to terminate the problematic workflows to mitigate the issue. after you identify the bugs, you can fix the code and then reset the workflow to recover\n * datadog query example\n\nsum:cadence_history.start_to_close_timeout{operation:timeractivetaskdecision*,$domain}.as_count()\n\n\n\n# workflow end to end latency\n\n * this is for the client application to track their slos for example, if you expect a workflow to take duration d to complete, you can use this latency to set a monitor.\n * monitor: application can monitor this metrics if expecting workflow to complete within a certain duration.\n * when fired, investigate the workflow history to see the workflow takes longer than expected to complete\n * datadog query example\n\navg:cadence_client.cadence_workflow_endtoend_latency.median{$domain,$tasklist,$workflowtype} by {env,domain,tasklist,workflowtype}\navg:cadence_client.cadence_workflow_endtoend_latency.95percentile{$domain,$tasklist,$workflowtype} by {env,domain,tasklist,workflowtype}\n\n\n\n# workflow panic and nondeterministicerror\n\n * these errors mean that there is a bug in the code and the deploy should be rolled back.\n * a monitor should be set on this metric\n * when fired, you may rollback the deployment to mitigate your issue. usually this caused by bad (non-backward compatible) code change. after rollback, look at your worker error logs to see where the bug is.\n * datadog query example\n\nsum:cadence_client.cadence_worker_panic{$domain} by {env,domain}.as_rate()\nsum:cadence_client.cadence_non_deterministic_error{$domain} by {env,domain}.as_rate()\n\n\n\n# workflow sticky cache hit rate and miss count\n\n * this metric can be used for performance optimization. 
this can be improved by adding more worker instances, or adjust the workeroption(gosdk) or workferfactoryoption(java sdk). cachehitrate too low means workers will have to replay history to rebuild the workflow stack when executing a decision task. depending on the the history size\n * if less than 1mb, then it’s okay to be lower than 50%\n * if greater than 1mb, then it’s okay to be greater than 50%\n * if greater than 5mb, , then it’s okay to be greater than 60%\n * if greater than 10mb , then it’s okay to be greater than 70%\n * if greater than 20mb , then it’s okay to be greater than 80%\n * if greater than 30mb , then it’s okay to be greater than 90%\n * workflow history size should never be greater than 50mb.\n * a monitor can be set on this metric, if performance is important.\n * when fired, adjust the stickycachesize in the workerfactoryoption, or add more workers\n * datadog query example\n\nsum:cadence_client.cadence_sticky_cache_miss{$domain} by {env,domain}.as_count()\nsum:cadence_client.cadence_sticky_cache_hit{$domain} by {env,domain}.as_count()\n(b / (a+b)) * 100\n\n\n\n# activity task operations\n\n * activity started/completed counters\n * monitor: not recommended\n * datadog query example\n\nsum:cadence_client.cadence_activity_task_failed{$domain,$tasklist} by {activitytype}.as_rate()\nsum:cadence_client.cadence_activity_task_completed{$domain,$tasklist} by {activitytype}.as_rate()\nsum:cadence_client.cadence_activity_task_timeouted{$domain,$tasklist} by {activitytype}.as_rate()\n\n\n\n# local activity task operations\n\n * local activity execution counters\n * monitor: not recommended\n * datadog query example\n\nsum:cadence_client.cadence_local_activity_total{$domain,$tasklist} by {activitytype}.as_count()\n\n\n\n# activity execution latency\n\n * if it’s expected that an activity will take x amount of time to complete, a monitor on this metric could be helpful to enforce that expectation.\n * monitor: application can set monitor on it if expecting workflow start/complete activities with certain latency\n * when fired, investigate the activity code and its dependencies\n * datadog query example\n\navg:cadence_client.cadence_activity_execution_latency.avg{$domain,$tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_execution_latency.max{$domain,$tasklist} by {env,domain,tasklist,activitytype}\n\n\n\n# activity poll counters\n\n * indicates the activity worker is available and is polling tasks. if the worker is not available no counters will show. can also check if the worker is using the right task list. “no task” poll type means that the worker exists and is idle. the timeout for this long poll api is 50 seconds. 
if within that 50 seconds, no task is received then an empty response will be returned and another long poll request will be sent.\n * monitor: application can set monitor on it to make sure activity workers are available\n * when fires, investigate the worker deployment to see why they are not available, also check if they are using the right domain/tasklist\n * datadog query example\n\nsum:cadence_client.cadence_activity_poll_total{$domain,$tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_failed{$domain,$tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_succeed{$domain,$tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_no_task{$domain,$tasklist} by {activitytype}.as_count()\n\n\n\n# activitytasks scheduled per second\n\n * indicate how many activities tasks are scheduled\n * monitor: not recommended -- information only to know whether or not a tasklist is overloaded\n * datadog query example\n\nsum:cadence_matching.cadence_requests_per_tl{*,operation:addactivitytask,$tasklist,$domain} by {tasklist,domain}.as_rate()\n\n\n\n# activity scheduled to start latency\n\n * if the latency is too high either: the worker is not available or too busy there are too many activities scheduled into the same tasklist and the tasklist is not scalable. same as decision scheduled to start latency\n * monitor: application should set monitor on it\n * when fired, check if workers are enough, then check if the tasklist is overloaded. if needed, contact the cadence cluster admin to enable scalable tasklist to add more partitions to the tasklist\n * datadog query example\n\navg:cadence_client.cadence_activity_scheduled_to_start_latency.avg{$domain,$tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_scheduled_to_start_latency.max{$domain,$tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_scheduled_to_start_latency.95percentile{$domain,$tasklist} by {env,domain,tasklist,activitytype}\n\n\n\n# activity failure\n\n * a monitor on this metric will alert the team that activities are failing the activity timeout metrics are emitted by the history service, because a timeout causes a hard stop and the client doesn’t have time to emit metrics.\n * monitor: application can set monitor on it\n * when fired, investigate the activity code and its dependencies\n * cadence_activity_execution_failed vs cadence_activity_task_failed: only have different when using retrypolicy cadence_activity_task_failed counter increase per activity attempt cadence_activity_execution_failed counter increase when activity fails after all attempts\n * should only monitor on cadence_activity_execution_failed\n * datadog query example\n\nsum:cadence_client.cadence_activity_execution_failed{$domain} by {domain,env}.as_rate()\nsum:cadence_client.cadence_activity_task_panic{$domain} by {domain,env}.as_count()\nsum:cadence_client.cadence_activity_task_failed{$domain} by {domain,env}.as_rate()\nsum:cadence_client.cadence_activity_task_canceled{$domain} by {domain,env}.as_count()\nsum:cadence_history.heartbeat_timeout{$domain} by {domain,env}.as_count()\nsum:cadence_history.schedule_to_start_timeout{$domain} by {domain,env}.as_rate()\nsum:cadence_history.start_to_close_timeout{$domain} by {domain,env}.as_rate()\nsum:cadence_history.schedule_to_close_timeout{$domain} by {domain,env}.as_count()\n\n\n\n# service api success rate\n\n * the client’s experience of the service availability. 
it encompasses many apis. things that could affect the service’s api success rate are:\n * service availability\n * the network could have issues.\n * a required api is not available.\n * client side errors like entitynotexists, workflowalreadystarted etc. this means that application code has potential bugs of calling cadence service.\n * monitor: application can set monitor on it\n * when fired, check application logs to see if the error is cadence server error or client side error. error like entitynotexists/executionalreadystarted/queryworkflowfailed/etc are client side error, meaning that the application is misusing the apis. if most errors are server side errors(internalserviceerror), you can contact cadence admin.\n * datadog query example\n\nsum:cadence_client.cadence_error{*} by {domain}.as_count()\nsum:cadence_client.cadence_request{*} by {domain}.as_count()\n(1 - a / b) * 100\n\n\n\n# service api latency\n\n * the latency of the api, excluding long poll apis.\n * application can set monitor on certain apis, if necessary.\n * datadog query example\n\navg:cadence_client.cadence_latency.95percentile{$domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}\n\n\n\n# service api breakdown\n\n * a counter breakdown by api to help investigate availability\n * no monitor needed\n * datadog query example\n\nsum:cadence_client.cadence_request{$domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}.as_count()\n\n\n\n# service api error breakdown\n\n * a counter breakdown by api error to help investigate availability\n * no monitor needed\n * datadog query example\n\nsum:cadence_client.cadence_error{$domain} by {cadence_metric_scope}.as_count()\n\n\n\n# max event blob size\n\n * by default the max size is 2 mb. if the input is greater than the max size the server will reject the request. the size of a single history event. this applies to any event input, like start workflow event, start activity event, or signal event. it should never be greater than 2mb.\n * a monitor should be set on this metric.\n * when fired, please review the design/code asap to reduce the blob size. reducing the input/output of workflow/activity/signal will help.\n * datadog query example\n\n​​max:cadence_history.event_blob_size.quantile{!domain:all,$domain} by {domain}\n\n\n\n# max history size\n\n * workflow history cannot grow indefinitely. it will cause replay issues. if the workflow exceeds the history’s max size the workflow will be terminate automatically. the max size by default is 200 megabytes. as a suggestion for workflow design, workflow history should never grow greater than 50mb. use continueasnew to break long workflows into multiple runs.\n * a monitor should be set on this metric.\n * when fired, please review the design/code asap to reduce the history size. reducing the input/output of workflow/activity/signal will help. also you may need to use continueasnew to break a single execution into smaller pieces.\n * datadog query example\n\n​​max:cadence_history.history_size.quantile{!domain:all,$domain} by {domain}\n\n\n\n# max history length\n\n * the number of events of workflow history. it should never be greater than 50k(workflow exceeding 200k events will be terminated by server). 
use continueasnew to break long workflows into multiple runs.\n * a monitor should be set on this metric.\n * when fired, please review the design/code asap to reduce the history length. you may need to use continueasnew to break a single execution into smaller pieces.\n * datadog query example\n\n​​max:cadence_history.history_count.quantile{!domain:all,$domain} by {domain}\n\n\n\n# cadence history service monitoring\n\nhistory is the most critical/core service for cadence which implements the workflow logic.\n\n\n# history shard movements\n\n * should only happen during deployment or when the node restarts. if there’s shard movement without deployments then that’s unexpected and there’s probably a performance issue. the shard ownership is assigned by a particular history host, so if the shard is moving it’ll be hard for the frontend service to route a request to a particular history shard and to find it.\n * a monitor can be set to be alerted on shard movements without deployment.\n * datadog query example\n\nsum:cadence_history.membership_changed_count{operation:shardcontroller}\nsum:cadence_history.shard_closed_count{operation:shardcontroller}\nsum:cadence_history.sharditem_created_count{operation:shardcontroller}\nsum:cadence_history.sharditem_removed_count{operation:shardcontroller}\n\n\n\n# transfer tasks per second\n\n * transfertask is an internal background task that moves workflow state and transfers an action task from the history engine to another service(e.g. matching service, elasticsearch, etc)\n * no monitor needed\n * datadog query example\n\nsum:cadence_history.task_requests{operation:transferactivetask*} by {operation}.as_rate()\n\n\n\n# timer tasks per second\n\n * timer tasks are tasks that are scheduled to be triggered at a given time in future. for example, workflow.sleep() will wait an x amount of time then the task will be pushed somewhere for a worker to pick up.\n * datadog query example\n\nsum:cadence_history.task_requests{operation:timeractivetask*} by {operation}.as_rate()\n\n\n\n# transfer tasks per domain\n\n * count breakdown by domain\n * datadog query example\n\nsum:cadence_history.task_requests_per_domain{operation:transferactive*} by {domain}.as_count()\n\n\n\n# timer tasks per domain\n\n * count breakdown by domain\n * datadog query example\n\nsum:cadence_history.task_requests_per_domain{operation:timeractive*} by {domain}.as_count()\n\n\n\n# transfer latency by type\n\n * if latency is too high then it’s an issue for a workflow. for example, if transfer task latency is 5 second, then it takes 5 second for activity/decision to actual receive the task.\n * monitor should be set on diffeernt types of latency. note that queue_latency can go very high during deployment and it's expected. see below note for explanation.\n * when fired, check if it’s due to some persistence issue. if so then investigate the database(may need to scale up) if not then see if need to scale up cadence deployment(k8s instance)\n * datadog query example\n\navg:cadence_history.task_latency.quantile{$pxxlatency,operation:transfer*} by {operation}\navg:cadence_history.task_latency_processing.quantile{$pxxlatency,operation:transfer*} by {operation}\navg:cadence_history.task_latency_queue.quantile{$pxxlatency,operation:transfer*} by {operation}\n\n\n\n# timer task latency by type\n\n * if latency is too high then it’s an issue for a workflow. 
for example, if you set the workflow.sleep() for 10 seconds and the timer latency is 5 secs then the workflow will sleep for 15 seconds.\n * monitor should be set on diffeernt types of latency.\n * when fired, check if it’s due to some persistence issue. if so then investigate the database(may need to scale up) [mostly] if not then see if need to scale up cadence deployment(k8s instance)\n * datadog query example\n\navg:cadence_history.task_latency.quantile{$pxxlatency,operation:timer*} by {operation}\navg:cadence_history.task_latency_processing.quantile{$pxxlatency,operation:timer*} by {operation}\navg:cadence_history.task_latency_queue.quantile{$pxxlatency,operation:timer*} by {operation}\n\n\n\n# note: task queue latency vs executing latency vs processing latency in transfer & timer task latency metrics\n\n * task_latency_queue: “queue latency” is “end to end” latency for users. the latency could go to several minutes during deployment because of metrics being re-emitted (but the actual latency is not that high)\n * task_latency: “executing latency” is the time from submission to executing pool to completion. it includes scheduling, retry and processing time of the task.\n * task_latency_processing: “processing latency” is the processing time of the task of a single attempt(without retry)\n\n\n# transfer task latency per domain\n\n * latency breakdown by domain\n * no monitor needed.\n * datadog query example: modify above queries to use domain tag.\n\n\n# timer task latency per domain\n\n * latency breakdown by domain\n * no monitor needed.\n * datadog query example: modify above queries to use domain tag.\n\n\n# history api per second\n\ninformation about history api datadog query example\n\nsum:cadence_history.cadence_requests{*} by {operation}.as_rate()\n\n\n\n# history api errors per second\n\n * information about history api\n * no monitor needed\n * datadog query example\n\nsum:cadence_history.cadence_errors{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_bad_request{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_domain_not_active{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_service_busy{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_entity_not_exists{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_execution_already_started{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_domain_already_exists{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_query_failed{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_limit_exceeded{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_context_timeout{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_retry_task{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_bad_binary{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_client_version_not_supported{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_incomplete_history{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_nondeterministic{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_unauthorized{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_authorize_failed{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_remote_syncmatch_failed{*} by 
{operation}.as_rate() \nsum:cadence_history.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_history.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate() \n\n\n * cadence_errors is internal service errors.\n * any cadence_errors_* is client side error\n\n\n# max history size\n\nthe history size of the workflow cannot be too large otherwise it will cause performance issue during replay. the soft limit is 200mb. if exceeding workflow will be terminated by server.\n\n * no monitor needed\n * datadog query is same as the client section\n\n\n# max history length\n\nsimilarly, the history length of the workflow cannot be too large otherwise it will cause performance issues during replay. the soft limit is 200k events. if exceeding, workflow will be terminated by server.\n\n * no monitor needed\n * datadog query is same as the client section\n\n\n# max event blob size\n\n * the size of each event(e.g. decided by input/output of workflow/activity/signal/chidlworkflow/etc) cannot be too large otherwise it will also cause performance issue. the soft limit is 2mb. if exceeding, the requests will be rejected by server, meaning that workflow won’t be able to make any progress.\n * no monitor needed\n * datadog query is same as the client section\n\n\n# cadence matching service monitoring\n\nmatching service is to match/assign tasks from cadence service to workers. matching got the tasks from history service. if workers are active the task will be matched immediately , it’s called “sync match”. 
If workers are not available, matching persists the tasks into the database and then reloads them when workers are back; this is called "async match".


# Matching APIs per second

 * APIs processed by the matching service per second.
 * No monitor needed.
 * Datadog query example

sum:cadence_matching.cadence_requests{*} by {operation}.as_rate()


# Matching API errors per second

 * API errors returned by the matching service per second.
 * No monitor needed.
 * Datadog query example

sum:cadence_matching.cadence_errors_per_tl{*} by {operation,domain,tasklist}.as_rate()
sum:cadence_matching.cadence_errors_bad_request_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_request{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_not_active_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_not_active{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_service_busy_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_service_busy{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_entity_not_exists_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_entity_not_exists{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_execution_already_started_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_execution_already_started{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_already_exists_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_already_exists{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_cancellation_already_requested_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_cancellation_already_requested{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_query_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_query_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_limit_exceeded_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_limit_exceeded{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_context_timeout_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_context_timeout{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_retry_task_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_retry_task{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_binary_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_binary{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_client_version_not_supported_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_client_version_not_supported{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_incomplete_history_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_incomplete_history{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_nondeterministic_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_nondeterministic{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_unauthorized_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_unauthorized{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_authorize_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_authorize_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_remote_syncmatch_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_remote_syncmatch_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_shard_ownership_lost{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_event_already_started{*} by {operation,domain,tasklist}

 * cadence_errors counts internal service errors.
 * Any cadence_errors_* metric counts a client-side error.


# Matching regular API latency

 * Regular APIs are the APIs excluding long polls.
 * No monitor needed.
 * Datadog query example

avg:cadence_matching.cadence_latency_per_tl.quantile{$pxxlatency,!operation:pollfor*,!operation:queryworkflow} by {operation,tasklist}


# Sync match latency

 * If the latency is too high, the tasklist is probably overloaded. Consider using multiple tasklists, or enable the scalable tasklist feature by adding more partitions to the tasklist (the default is one). To confirm whether too many tasks are being added to the tasklist, use "AddTasks per second - domain, tasklist breakdown".
 * No monitor needed.
 * Datadog query example

sum:cadence_matching.syncmatch_latency_per_tl.quantile{$pxxlatency} by {operation,tasklist,domain}


# Async match latency

 * If a match is done asynchronously, the task is written to the database to be used later. This measures the time when the worker is not actively looking for tasks; if it is high, more workers are needed.
 * No monitor needed.
 * Datadog query example

sum:cadence_matching.asyncmatch_latency_per_tl.quantile{$pxxlatency} by {operation,tasklist,domain}


# Cadence default persistence monitoring

The following monitors should be set up for Cadence persistence.
# Persistence availability

 * The availability of the primary database for your Cadence server.
 * Monitor required: below 95% for more than 5 minutes triggers an alert; below 99% triggers a Slack warning.
 * When fired, check whether it is due to a persistence issue. If so, investigate the database (it may need to be scaled up) [mostly]; if not, check whether the Cadence deployment (k8s instances) needs to be scaled up.
 * Datadog query example (each formula pairs one service's errors query with its requests query, in order: a/b frontend, c/d matching, e/f history, g/h worker)

sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()
sum:cadence_frontend.persistence_requests{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors{*} by {operation}.as_count()
sum:cadence_matching.persistence_requests{*} by {operation}.as_count()
sum:cadence_history.persistence_errors{*} by {operation}.as_count()
sum:cadence_history.persistence_requests{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors{*} by {operation}.as_count()
sum:cadence_worker.persistence_requests{*} by {operation}.as_count()
(1 - a / b) * 100
(1 - c / d) * 100
(1 - e / f) * 100
(1 - g / h) * 100
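As a quick sanity check on the formula (the numbers are illustrative, not from any real deployment): if cadence_frontend emits 10 persistence errors against 10,000 persistence requests in the evaluation window, availability is (1 - 10 / 10000) * 100 = 99.9%, which clears both thresholds; 600 errors over the same 10,000 requests would give (1 - 600 / 10000) * 100 = 94%, breaching the 95% alert threshold.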
# Persistence by service TPS

 * No monitor needed.
 * Datadog query example

sum:cadence_frontend.persistence_requests{*}.as_rate()
sum:cadence_history.persistence_requests{*}.as_rate()
sum:cadence_worker.persistence_requests{*}.as_rate()
sum:cadence_matching.persistence_requests{*}.as_rate()


# Persistence by operation TPS

 * No monitor needed.
 * Datadog query example

sum:cadence_frontend.persistence_requests{*} by {operation}.as_rate()
sum:cadence_history.persistence_requests{*} by {operation}.as_rate()
sum:cadence_worker.persistence_requests{*} by {operation}.as_rate()
sum:cadence_matching.persistence_requests{*} by {operation}.as_rate()


# Persistence by operation latency

 * Monitor required: alert if the p95 latency of any operation is greater than 1 second for 5 minutes; warn if it is greater than 0.5 seconds.
 * When fired, investigate the database (it may need to be scaled up) [mostly]. Where there is high latency there may also be errors or something else wrong with the DB.
 * Datadog query example

avg:cadence_matching.persistence_latency.quantile{$pxxlatency} by {operation}
avg:cadence_worker.persistence_latency.quantile{$pxxlatency} by {operation}
avg:cadence_frontend.persistence_latency.quantile{$pxxlatency} by {operation}
avg:cadence_history.persistence_latency.quantile{$pxxlatency} by {operation}


# Persistence error by operation count

 * This helps investigate availability issues.
 * No monitor needed.
 * Datadog query example

sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()
sum:cadence_history.persistence_errors{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors{*} by {operation}.as_count()

sum:cadence_frontend.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_bad_request{*} by {operation}.as_count()

sum:cadence_history.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_bad_request{*} by {operation}.as_count()

sum:cadence_matching.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_bad_request{*} by {operation}.as_count()

sum:cadence_worker.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_bad_request{*} by {operation}.as_count()

 * persistence_errors counts all internal persistence errors.
 * The persistence_errors_* metrics break those errors down by type.


# Cadence advanced visibility persistence monitoring (if applicable)

Kafka & Elasticsearch are used only for visibility, so this section applies only if you use advanced visibility.
For writing visibility records, the Cadence history service writes the records into Kafka, and then the Cadence worker service reads them from Kafka and writes them into Elasticsearch (in batches, as a performance optimization). For reading visibility records, the frontend service queries Elasticsearch directly.


# Persistence availability

 * The availability of the visibility store as seen from the Cadence server.
 * A monitor can be set.
 * Datadog query example

sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()
sum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_count()
sum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()
sum:cadence_history.elasticsearch_requests{*} by {operation}.as_count()
(1 - a / b) * 100
(1 - c / d) * 100


# Persistence by service TPS

 * The rate of visibility persistence API calls by service.
 * No monitor needed.
 * Datadog query example

sum:cadence_frontend.elasticsearch_requests{*}.as_rate()
sum:cadence_history.elasticsearch_requests{*}.as_rate()


# Persistence by operation TPS (read: ES, write: Kafka)

 * The rate of visibility persistence API calls by operation.
 * No monitor needed.
 * Datadog query example

sum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_rate()
sum:cadence_history.elasticsearch_requests{*} by {operation}.as_rate()


# Persistence by operation latency (in seconds) (read: ES, write: Kafka)

 * The latency of visibility persistence API calls.
 * No monitor needed.
 * Datadog query example

avg:cadence_frontend.elasticsearch_latency.quantile{$pxxlatency} by {operation}
avg:cadence_history.elasticsearch_latency.quantile{$pxxlatency} by {operation}


# Persistence error by operation count (read: ES, write: Kafka)

 * The errors of visibility persistence API calls.
 * No monitor needed.
 * Datadog query example

sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()
sum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()


# Kafka->ES processor counter

 * These are the metrics of a background process that consumes Kafka messages and populates Elasticsearch in batches.
 * Monitor that the background process is running (the counter metrics are > 0).
 * When fired, restart the Cadence service first to mitigate, then look at the logs to see why the process stopped (process panic/error/etc.). Consider adding more pods (replicaCount) to the sys-worker service for higher availability.
 * Datadog query example

sum:cadence_worker.es_processor_requests{*} by {operation}.as_count()
sum:cadence_worker.es_processor_retries{*} by {operation}.as_count()


# Kafka->ES processor error

 * These are the error metrics of the processing logic above. Almost all errors are retryable, so errors by themselves are not a problem.
 * A monitor on errors is needed.
 * When fired, go to Kibana to find logs with the error details. The most common error is a missing Elasticsearch index field -- an index field was added in the dynamic config but not in Elasticsearch, or vice versa. If so, follow the runbook to add the field to Elasticsearch or to the dynamic config.
 * Datadog query example

sum:cadence_worker.es_processor_error{*} by {operation}.as_count()
sum:cadence_worker.es_processor_corrupted_data{*} by {operation}.as_count()
# Kafka->ES processor latency

 * The latency of the processing logic.
 * No monitor needed.
 * Datadog query example

sum:cadence_worker.es_processor_process_msg_latency.quantile{$pxxlatency} by {operation}.as_count()


# Cadence dependency metrics monitor suggestion


# Computing platform metrics for Cadence deployment

A Cadence server deployed on any computing platform (e.g. Kubernetes) should be monitored on the below metrics:

 * CPU
 * Memory


# Database

Depending on which database you use, you should at least monitor the below metrics:

 * Disk usage
 * CPU
 * Memory
 * Read API latency
 * Write API latency


# Kafka (if applicable)

 * Disk usage
 * CPU
 * Memory


# Elasticsearch (if applicable)

 * Disk usage
 * CPU
 * Memory


# Cadence service SLO recommendation

 * Core API availability: 99.9%
 * Core API latency: <1s
 * Overall task dispatch latency: <2s (queue_latency for transfer tasks and timer tasks)


# Operation Guide Overview

This document covers the things you need to know to run a Cadence cluster in production. Topics include setup, monitoring, maintenance, and troubleshooting.


# Timeouts

A workflow can fail if an activity times out, and it will time out when the entire workflow execution exceeds its timeout. Workflows or activities time out when their time to execute or time to start is longer than their configured timeout. Some of the common causes for timeouts are listed here.


# Missing Pollers

Cadence workers are part of the service that hosts and executes the workflow. They are of two types: activity workers and workflow workers. Each of these workers is responsible for having pollers, which are goroutines that poll for activity tasks and decision tasks respectively from the Cadence server. Without pollers, the workflow cannot proceed with its execution.

Mitigation: Make sure these workers are configured with the task lists that are used in the workflow and activities so the server can dispatch tasks to the Cadence workers.

Worker setup example (see the sketch below).
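A minimal worker-setup sketch in Go, assuming the go.uber.org/cadence/worker package; the domain, tasklist, and the MyWorkflow/MyActivity functions are illustrative placeholders:

import (
	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"
)

// startWorker hosts both workflow and activity pollers for one tasklist.
// `service` is assumed to be a client already dialed to the Cadence frontend.
func startWorker(service workflowserviceclient.Interface) error {
	w := worker.New(service, "samples-domain", "my-tasklist", worker.Options{})
	w.RegisterWorkflow(MyWorkflow) // hypothetical workflow function
	w.RegisterActivity(MyActivity) // hypothetical activity function
	return w.Start()
}

The tasklist name passed here must match the tasklist used when starting workflows and scheduling activities; otherwise the server has no pollers to dispatch those tasks to.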
# Tasklist backlog despite having pollers

If a tasklist has pollers but the backlog continues to grow, it is a supply-demand issue: the workflow is growing faster than what the workers can handle. The server wants to dispatch more tasks to the workers, but they are not able to keep up.

Mitigation: Increase the number of Cadence workers by horizontally scaling up the instances where the workflow is running.

Optionally, you can also increase the number of pollers per worker by providing this via worker options.

Link to options in go client. Link to options in java client. (See the sketch below.)
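For the Go client, the poller counts live in worker.Options; a sketch, with illustrative values that should be tuned against the backlog and host CPU:

import (
	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"
)

// newWorkerWithMorePollers raises the poller counts so a single worker
// instance can pull tasks from the tasklist faster.
func newWorkerWithMorePollers(service workflowserviceclient.Interface) worker.Worker {
	return worker.New(service, "samples-domain", "my-tasklist", worker.Options{
		MaxConcurrentDecisionTaskPollers: 4, // illustrative value
		MaxConcurrentActivityTaskPollers: 8, // illustrative value
	})
}

More pollers only help while the workers still have spare capacity to execute the tasks they poll; otherwise horizontal scaling is the fix.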
# Timeouts without heartbeating enabled

Activities time out with StartToClose or ScheduleToClose if the activity took longer than the configured timeout.

Link to description of timeouts.

For long-running activities, the worker can die while the activity is executing, due to regular deployments, host restarts, or failures. Cadence doesn't know about this and will wait for the StartToClose or ScheduleToClose timeouts to kick in.

Mitigation: Consider enabling heartbeating.

Configuring heartbeat timeout example (see the sketch below).

For short-running activities, heartbeating is not required, but consider increasing the timeout value to suit the actual activity execution time.
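A sketch of configuring the timeouts, including a heartbeat timeout, via the Go client's ActivityOptions; the durations are illustrative and LongActivity is a hypothetical activity (defined in the next sketch):

import (
	"time"

	"go.uber.org/cadence/workflow"
)

func MyWorkflow(ctx workflow.Context) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,      // max wait in the tasklist
		StartToCloseTimeout:    10 * time.Minute, // max single-attempt run time
		HeartbeatTimeout:       30 * time.Second, // max gap between heartbeats
	}
	ctx = workflow.WithActivityOptions(ctx, ao)
	return workflow.ExecuteActivity(ctx, LongActivity).Get(ctx, nil)
}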
# Heartbeat Timeouts after enabling heartbeating

The activity has enabled heartbeating, but it timed out with a heartbeat timeout. This happens because the server did not receive a heartbeat within the time interval configured as the heartbeat timeout.

Mitigation: Once a heartbeat timeout is configured in the activity options, you need to make sure the activity periodically sends a heartbeat to the server, so the server knows the activity is still alive.

Example to send a periodic heartbeat (see the sketch below).
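A periodic-heartbeat sketch in Go, assuming the go.uber.org/cadence/activity package; the chunked loop stands in for real work:

import (
	"context"
	"time"

	"go.uber.org/cadence/activity"
)

// LongActivity processes work in chunks and heartbeats after each one.
// The heartbeat interval should be well under the configured
// HeartbeatTimeout; the progress detail argument is optional.
func LongActivity(ctx context.Context) error {
	for i := 0; i < 100; i++ {
		// ... process chunk i ...
		activity.RecordHeartbeat(ctx, i)
		time.Sleep(time.Second) // stand-in for real work
	}
	return nil
}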
In the Go client, there is an option to register the activity with auto-heartbeating so that it is done automatically.

Enabling auto heartbeat during activity registration example (see the sketch below).
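A sketch of the registration option, assuming a Go client version that exposes EnableAutoHeartbeat in activity.RegisterOptions (LongActivity is the hypothetical activity from above):

import (
	"go.uber.org/cadence/activity"
	"go.uber.org/cadence/worker"
)

func registerWithAutoHeartbeat(w worker.Worker) {
	// EnableAutoHeartbeat makes the client heartbeat on the activity's
	// behalf, so a slow processing loop cannot miss the heartbeat deadline.
	w.RegisterActivityWithOptions(LongActivity, activity.RegisterOptions{
		EnableAutoHeartbeat: true,
	})
}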
Docs",link:"https://github.com/uber/cadence-docs"}]},{text:"Docker",items:[{text:"Cadence Service",link:"https://hub.docker.com/r/ubercadence/server/tags"},{text:"Cadence CLI",link:"https://hub.docker.com/r/ubercadence/cli/tags"},{text:"Cadence Web UI",link:"https://hub.docker.com/r/ubercadence/web/tags"}]}],docsRepo:"uber/cadence-docs",docsDir:"src",editLinks:!0,sidebar:{"/docs/":[{title:"Get Started",path:"/docs/01-get-started",children:["01-get-started/","01-get-started/01-server-installation","01-get-started/02-java-hello-world","01-get-started/03-golang-hello-world","01-get-started/04-video-tutorials"]},{title:"Use cases",path:"/docs/02-use-cases",children:["02-use-cases/","02-use-cases/01-periodic-execution","02-use-cases/02-orchestration","02-use-cases/03-polling","02-use-cases/04-event-driven","02-use-cases/05-partitioned-scan","02-use-cases/06-batch-job","02-use-cases/07-provisioning","02-use-cases/08-deployment","02-use-cases/09-operational-management","02-use-cases/10-interactive","02-use-cases/11-dsl","02-use-cases/12-big-ml"]},{title:"Concepts",path:"/docs/03-concepts",children:["03-concepts/","03-concepts/01-workflows","03-concepts/02-activities","03-concepts/03-events","03-concepts/04-queries","03-concepts/05-topology","03-concepts/06-task-lists","03-concepts/07-archival","03-concepts/08-cross-dc-replication","03-concepts/09-search-workflows","03-concepts/10-http-api"]},{title:"Java client",path:"/docs/04-java-client",children:["04-java-client/","04-java-client/01-client-overview","04-java-client/02-workflow-interface","04-java-client/03-implementing-workflows","04-java-client/04-starting-workflow-executions","04-java-client/05-activity-interface","04-java-client/06-implementing-activities","04-java-client/07-versioning","04-java-client/08-distributed-cron","04-java-client/09-workers","04-java-client/10-signals","04-java-client/11-queries","04-java-client/12-retries","04-java-client/13-child-workflows","04-java-client/14-exception-handling","04-java-client/15-continue-as-new","04-java-client/16-side-effect","04-java-client/17-testing","04-java-client/18-workflow-replay-shadowing"]},{title:"Go client",path:"/docs/05-go-client",children:["05-go-client/","05-go-client/01-workers","05-go-client/02-create-workflows","05-go-client/02.5-starting-workflows","05-go-client/03-activities","05-go-client/04-execute-activity","05-go-client/05-child-workflows","05-go-client/06-retries","05-go-client/07-error-handling","05-go-client/08-signals","05-go-client/09-continue-as-new","05-go-client/10-side-effect","05-go-client/11-queries","05-go-client/12-activity-async-completion","05-go-client/13-workflow-testing","05-go-client/14-workflow-versioning","05-go-client/15-sessions","05-go-client/16-distributed-cron","05-go-client/17-tracing","05-go-client/18-workflow-replay-shadowing"]},{title:"Command line interface",path:"/docs/06-cli/"},{title:"Production Operation",path:"/docs/07-operation-guide/",children:["07-operation-guide/","07-operation-guide/01-setup","07-operation-guide/02-maintain","07-operation-guide/03-monitoring","07-operation-guide/04-troubleshooting","07-operation-guide/05-migration"]},{title:"Workflow 
Troubleshooting",path:"/docs/08-workflow-troubleshooting/",children:["08-workflow-troubleshooting/","08-workflow-troubleshooting/01-timeouts"]},{title:"Glossary",path:"../GLOSSARY"},{title:"About",path:"/docs/09-about",children:["09-about/","09-about/01-license"]}]}}};n(241);Vn.component("slack-link",()=>n.e(23).then(n.bind(null,328))),Vn.component("Badge",()=>Promise.all([n.e(0),n.e(4)]).then(n.bind(null,330))),Vn.component("CodeBlock",()=>Promise.all([n.e(0),n.e(5)]).then(n.bind(null,324))),Vn.component("CodeGroup",()=>Promise.all([n.e(0),n.e(6)]).then(n.bind(null,325)));n(242);var Ns={name:"BackToTop",props:{threshold:{type:Number,default:300}},data:()=>({scrollTop:null}),computed:{show(){return this.scrollTop>this.threshold}},mounted(){this.scrollTop=this.getScrollTop(),window.addEventListener("scroll",_s()(()=>{this.scrollTop=this.getScrollTop()},100))},methods:{getScrollTop:()=>window.pageYOffset||document.documentElement.scrollTop||document.body.scrollTop||0,scrollToTop(){window.scrollTo({top:0,behavior:"smooth"}),this.scrollTop=0}}},Rs=(n(243),Object(Es.a)(Ns,(function(){var e=this._self._c;return e("transition",{attrs:{name:"fade"}},[this.show?e("svg",{staticClass:"go-to-top",attrs:{xmlns:"http://www.w3.org/2000/svg",viewBox:"0 0 49.484 28.284"},on:{click:this.scrollToTop}},[e("g",{attrs:{transform:"translate(-229 -126.358)"}},[e("rect",{attrs:{fill:"currentColor",width:"35",height:"5",rx:"2",transform:"translate(229 151.107) rotate(-45)"}}),this._v(" "),e("rect",{attrs:{fill:"currentColor",width:"35",height:"5",rx:"2",transform:"translate(274.949 154.642) rotate(-135)"}})])]):this._e()])}),[],!1,null,"5fd4ef0c",null).exports);n(244);Vn.component("CodeSwitcher",()=>n.e(25).then(n.bind(null,329)));var zs={name:"ReadingProgress",data:()=>({readingTop:0,readingHeight:1,progressStyle:null,transform:void 0,running:!1}),watch:{$readingShow(){this.progressStyle=this.getProgressStyle(),this.$readingShow&&window.addEventListener("scroll",this.base)}},mounted(){this.transform=this.getTransform(),this.progressStyle=this.getProgressStyle(),this.$readingShow&&window.addEventListener("scroll",this.base)},beforeDestroy(){this.$readingShow&&window.removeEventListener("scroll",this.base)},methods:{base(){this.running||(this.running=!0,requestAnimationFrame(this.getReadingBase))},getReadingBase(){this.readingHeight=this.getReadingHeight()-this.getScreenHeight(),this.readingTop=this.getReadingTop(),this.progressStyle=this.getProgressStyle(),this.running=!1},getReadingHeight:()=>Math.max(document.body.scrollHeight,document.body.offsetHeight,0),getScreenHeight:()=>Math.max(window.innerHeight,document.documentElement.clientHeight,0),getReadingTop:()=>Math.max(window.pageYOffset,document.documentElement.scrollTop,0),getTransform(){const e=document.createElement("div");return["transform","-webkit-transform","-moz-transform","-o-transform","-ms-transform"].find(t=>t in e.style)||void 0},getProgressStyle(){const e=this.readingTop/this.readingHeight;switch(this.$readingShow){case"top":case"bottom":return this.transform?`${this.transform}: scaleX(${e})`:`width: ${100*e}%`;case"left":case"right":return this.transform?`${this.transform}: scaleY(${e})`:`height: ${100*e}%`;default:return null}}}},Ls=(n(245),Object(Es.a)(zs,(function(){var e=this._self._c;return e("ClientOnly",[this.$readingShow?e("div",{staticClass:"reading-progress",class:this.$readingShow},[e("div",{staticClass:"progress",style:this.progressStyle})]):this._e()])}),[],!1,null,"3640397f",null).exports);function Fs(e,t){let n=!0;void 
0===e?(e="Term not found in the glossary",n=!1):e=Ms(e);return`${t=Hs(t)}`}function Ms(e){return e.replace(/:[\w+]*:([\w+]*):/g,(e,t)=>t).replace(/:([\w+]*):/g,(e,t)=>t)}function Hs(e){return e.split("_").join(" ")}function $s(e){return e.split("_").join(" ")}var Us={name:"Term",props:{term:{type:String,required:!0},show:{type:String,required:!1,default:""}},data:()=>({termNotFound:!1}),computed:{terms(){return this.$site.pages.find(e=>"/GLOSSARY.html"===e.path).frontmatter.terms},definition(){const e=$s(this.term),t=this.terms[e];return t?Ms(t):(this.termNotFound=!0,"Term not found in the glossary")},displayText(){return $s(this.show?this.show:this.term)}}},Gs=Object(Es.a)(Us,(function(){return(0,this._self._c)("a",{class:{"term-not-found":this.termNotFound,term:!0},attrs:{title:this.definition}},[this._v(this._s(this.displayText))])}),[],!1,null,null,null).exports,Bs={props:{terms:{type:Object,required:!0}},methods:{definition(e){return function(e,t){let n=t[Hs(e)];return n=n.replace(/:([\w+]*):([\w+]*):/g,(e,n,o)=>Fs(t[Hs(n)],o)),n=n.replace(/:([\w+]*):/g,(e,n,o)=>Fs(t[Hs(n)],n)),n}(e,this.terms)}}},Vs=(n(246),Object(Es.a)(Bs,(function(){var e=this,t=e._self._c;return t("dl",e._l(Object.keys(e.terms),(function(n){return t("div",[t("dt",{staticClass:"defined-term"},[e._v(e._s(n))]),e._v(" "),t("dd",{staticClass:"term-definition",domProps:{innerHTML:e._s(e.definition(n,e.terms))}})])})),0)}),[],!1,null,null,null).exports),Ys=n(46);const Ks={redirectors:[{base:"/docs/",alternative:["get-started"]}]};var Xs=[({router:e})=>{e.beforeResolve((e,t,n)=>{const o="undefined"!=typeof window?window:null;o&&e.matched.length&&("*"!==e.matched[0].path&&e.redirectedFrom||"/blog/"===e.path)?o.location.href=e.fullPath:n()})},{},({Vue:e})=>{e.mixin({computed:{$dataBlock(){return this.$options.__data__block__}}})},{},{},({Vue:e})=>{e.component("BackToTop",Rs)},{},{},({Vue:e})=>{e.component(Ls.name,Ls),e.mixin({computed:{$readingShow(){return this.$page.frontmatter.readingShow}}})},({Vue:e})=>{e.component("CodeCopy",Ps)},({Vue:e,options:t,router:n,siteData:o})=>{e.component("Term",Gs),e.component("Glossary",Vs)},({router:e,siteData:t})=>{const{routes:n=[]}=e.options,{redirectors:o=[]}=Ks;function i(e){return n.some(t=>t.path.toLowerCase()===e.toLowerCase())}function a(e){if(i(e))return e;if(!/\/$/.test(e)){const t=e+"/";if(i(t))return t}if(!/\.html$/.test(e)){const t=e.replace(/\/$/,"")+".html";if(i(t))return t}return null}if(Ks.locales&&t.locales){const e=t.locales,n=Object.keys(e),i=n.map(t=>({key:t.replace(/^\/|\/$/,""),lang:e[t].lang}));"object"!=typeof Ks.locales&&(Ks.locales={});const{fallback:a,storage:r=!0}=Ks.locales;a&&n.unshift(a),o.unshift({storage:r,base:"/",alternative(){if("undefined"!=typeof window&&window.navigator){const e=window.navigator.languages||[window.navigator.language],t=i.find(({lang:t})=>e.includes(t));if(t)return t.key}return n}})}const r=o.map(({base:e="/",storage:t=!1,alternative:n})=>{let o=!1;if(t)if("object"!=typeof t){const n="string"!=typeof t?"vuepress:redirect:"+e:t;o={get:()=>"undefined"==typeof localStorage?null:localStorage.getItem(n),set(e){"undefined"!=typeof localStorage&&localStorage.setItem(n,e)}}}else t.get&&t.set&&(o=t);return{base:e,storage:o,alternative:n}});e.beforeEach((e,t,n)=>{if(a(e.path))return n();let o;for(const t of r){const{base:n="/",storage:i=!1}=t;let{alternative:r}=t;if(!e.path.startsWith(n))continue;const s=e.path.slice(n.length)||"/";if(i){const e=i.get(t);if(e){const t=a(Object(Ys.join)(n,e,s));if(t){o=t;break}}}if("function"==typeof 
r&&(r=r(s)),r){"string"==typeof r&&(r=[r]);for(const e of r){const t=a(Object(Ys.join)(n,e,s));if(t){o=t;break}}if(o)break}}n(o)}),e.afterEach(e=>{if(i(e.path))for(const t of r){const{base:n,storage:o}=t;if(!o||!e.path.startsWith(n))continue;const i=e.path.slice(n.length).split("/")[0];i&&o.set(i,t)}})}],Qs=["BackToTop","ReadingProgress"];class Js extends class{constructor(){this.store=new Vn({data:{state:{}}})}$get(e){return this.store.state[e]}$set(e,t){Vn.set(this.store.state,e,t)}$emit(...e){this.store.$emit(...e)}$on(...e){this.store.$on(...e)}}{}Object.assign(Js.prototype,{getPageAsyncComponent:ss,getLayoutAsyncComponent:cs,getAsyncComponent:ls,getVueComponent:ds});var Zs={install(e){const t=new Js;e.$vuepress=t,e.prototype.$vuepress=t}};function ec(e,t){const n=t.toLowerCase();return e.options.routes.some(e=>e.path.toLowerCase()===n)}var tc={props:{pageKey:String,slotKey:{type:String,default:"default"}},render(e){const t=this.pageKey||this.$parent.$page.key;return hs("pageKey",t),Vn.component(t)||Vn.component(t,ss(t)),Vn.component(t)?e(t):e("")}},nc={functional:!0,props:{slotKey:String,required:!0},render:(e,{props:t,slots:n})=>e("div",{class:["content__"+t.slotKey]},n()[t.slotKey])},oc={computed:{openInNewWindowTitle(){return this.$themeLocaleConfig.openNewWindowText||"(opens new window)"}}},ic=(n(247),n(248),Object(Es.a)(oc,(function(){var e=this._self._c;return e("span",[e("svg",{staticClass:"icon outbound",attrs:{xmlns:"http://www.w3.org/2000/svg","aria-hidden":"true",focusable:"false",x:"0px",y:"0px",viewBox:"0 0 100 100",width:"15",height:"15"}},[e("path",{attrs:{fill:"currentColor",d:"M18.8,85.1h56l0,0c2.2,0,4-1.8,4-4v-32h-8v28h-48v-48h28v-8h-32l0,0c-2.2,0-4,1.8-4,4v56C14.8,83.3,16.6,85.1,18.8,85.1z"}}),this._v(" "),e("polygon",{attrs:{fill:"currentColor",points:"45.7,48.7 51.3,54.3 77.2,28.5 77.2,37.2 85.2,37.2 85.2,14.9 62.8,14.9 62.8,22.9 71.5,22.9"}})]),this._v(" "),e("span",{staticClass:"sr-only"},[this._v(this._s(this.openInNewWindowTitle))])])}),[],!1,null,null,null).exports),ac={functional:!0,render(e,{parent:t,children:n}){if(t._isMounted)return n;t.$once("hook:mounted",()=>{t.$forceUpdate()})}};Vn.config.productionTip=!1,Vn.use(Gr),Vn.use(Zs),Vn.mixin(function(e,t,n=Vn){!function(e){e.locales&&Object.keys(e.locales).forEach(t=>{e.locales[t].path=t});Object.freeze(e)}(t),n.$vuepress.$set("siteData",t);const o=new(e(n.$vuepress.$get("siteData"))),i=Object.getOwnPropertyDescriptors(Object.getPrototypeOf(o)),a={};return Object.keys(i).reduce((e,t)=>(t.startsWith("$")&&(e[t]=i[t].get),e),a),{computed:a}}(e=>class{setPage(e){this.__page=e}get $site(){return e}get $themeConfig(){return this.$site.themeConfig}get $frontmatter(){return this.$page.frontmatter}get $localeConfig(){const{locales:e={}}=this.$site;let t,n;for(const o in e)"/"===o?n=e[o]:0===this.$page.path.indexOf(o)&&(t=e[o]);return t||n||{}}get $siteTitle(){return this.$localeConfig.title||this.$site.title||""}get $canonicalUrl(){const{canonicalUrl:e}=this.$page.frontmatter;return"string"==typeof e&&e}get $title(){const e=this.$page,{metaTitle:t}=this.$page.frontmatter;if("string"==typeof t)return t;const n=this.$siteTitle,o=e.frontmatter.home?null:e.frontmatter.title||e.title;return n?o?o+" | "+n:n:o||"VuePress"}get $description(){const e=function(e){if(e){const t=e.filter(e=>"description"===e.name)[0];if(t)return t.content}}(this.$page.frontmatter.meta);return e||(this.$page.frontmatter.description||this.$localeConfig.description||this.$site.description||"")}get $lang(){return 
this.$page.frontmatter.lang||this.$localeConfig.lang||"en-US"}get $localePath(){return this.$localeConfig.path||"/"}get $themeLocaleConfig(){return(this.$site.themeConfig.locales||{})[this.$localePath]||{}}get $page(){return this.__page?this.__page:function(e,t){for(let n=0;nn||(e.hash?!Vn.$vuepress.$get("disableScrollBehavior")&&{selector:decodeURIComponent(e.hash)}:{x:0,y:0})});!function(e){e.beforeEach((t,n,o)=>{if(ec(e,t.path))o();else if(/(\/|\.html)$/.test(t.path))if(/\/$/.test(t.path)){const n=t.path.replace(/\/$/,"")+".html";ec(e,n)?o(n):o()}else o();else{const n=t.path+"/",i=t.path+".html";ec(e,i)?o(i):ec(e,n)?o(n):o()}})}(n);const o={};try{await Promise.all(Xs.filter(e=>"function"==typeof e).map(t=>t({Vue:Vn,options:o,router:n,siteData:js,isServer:e})))}catch(e){console.error(e)}return{app:new Vn(Object.assign(o,{router:n,render:e=>e("div",{attrs:{id:"app"}},[e("RouterView",{ref:"layout"}),e("div",{class:"global-ui"},Qs.map(t=>e(t)))])})),router:n}}(!1).then(({app:e,router:t})=>{t.onReady(()=>{e.$mount("#app")})})}]); \ No newline at end of file diff --git a/assets/js/app.8e3b53f9.js b/assets/js/app.8e3b53f9.js new file mode 100644 index 000000000..1d0f364b4 --- /dev/null +++ b/assets/js/app.8e3b53f9.js @@ -0,0 +1,16 @@ +(window.webpackJsonp=window.webpackJsonp||[]).push([[0],[]]);!function(e){function t(t){for(var o,r,s=t[0],c=t[1],l=t[2],u=0,h=[];u=t||n<0||w&&e-l>=a}function k(){var e=p();if(b(e))return x(e);s=setTimeout(k,function(e){var n=t-(e-c);return w?h(n,a-(e-l)):n}(e))}function x(e){return s=void 0,g&&o?y(e):(o=i=void 0,r)}function _(){var e=p(),n=b(e);if(o=arguments,i=this,c=e,n){if(void 0===s)return v(c);if(w)return s=setTimeout(k,t),y(c)}return void 0===s&&(s=setTimeout(k,t)),r}return t=f(t)||0,m(n)&&(d=!!n.leading,a=(w="maxWait"in n)?u(f(n.maxWait)||0,t):a,g="trailing"in n?!!n.trailing:g),_.cancel=function(){void 0!==s&&clearTimeout(s),l=0,o=c=i=s=void 0},_.flush=function(){return void 0===s?r:x(p())},_}},function(e,t,n){var o,i; +/* NProgress, (c) 2013, 2014 Rico Sta. Cruz - http://ricostacruz.com/nprogress + * @license MIT */void 0===(i="function"==typeof(o=function(){var e,t,n={version:"0.2.0"},o=n.settings={minimum:.08,easing:"ease",positionUsing:"",speed:200,trickle:!0,trickleRate:.02,trickleSpeed:800,showSpinner:!0,barSelector:'[role="bar"]',spinnerSelector:'[role="spinner"]',parent:"body",template:'
'};function i(e,t,n){return en?n:e}function a(e){return 100*(-1+e)}n.configure=function(e){var t,n;for(t in e)void 0!==(n=e[t])&&e.hasOwnProperty(t)&&(o[t]=n);return this},n.status=null,n.set=function(e){var t=n.isStarted();e=i(e,o.minimum,1),n.status=1===e?null:e;var c=n.render(!t),l=c.querySelector(o.barSelector),d=o.speed,u=o.easing;return c.offsetWidth,r((function(t){""===o.positionUsing&&(o.positionUsing=n.getPositioningCSS()),s(l,function(e,t,n){var i;return(i="translate3d"===o.positionUsing?{transform:"translate3d("+a(e)+"%,0,0)"}:"translate"===o.positionUsing?{transform:"translate("+a(e)+"%,0)"}:{"margin-left":a(e)+"%"}).transition="all "+t+"ms "+n,i}(e,d,u)),1===e?(s(c,{transition:"none",opacity:1}),c.offsetWidth,setTimeout((function(){s(c,{transition:"all "+d+"ms linear",opacity:0}),setTimeout((function(){n.remove(),t()}),d)}),d)):setTimeout(t,d)})),this},n.isStarted=function(){return"number"==typeof n.status},n.start=function(){n.status||n.set(0);var e=function(){setTimeout((function(){n.status&&(n.trickle(),e())}),o.trickleSpeed)};return o.trickle&&e(),this},n.done=function(e){return e||n.status?n.inc(.3+.5*Math.random()).set(1):this},n.inc=function(e){var t=n.status;return t?("number"!=typeof e&&(e=(1-t)*i(Math.random()*t,.1,.95)),t=i(t+e,0,.994),n.set(t)):n.start()},n.trickle=function(){return n.inc(Math.random()*o.trickleRate)},e=0,t=0,n.promise=function(o){return o&&"resolved"!==o.state()?(0===t&&n.start(),e++,t++,o.always((function(){0==--t?(e=0,n.done()):n.set((e-t)/e)})),this):this},n.render=function(e){if(n.isRendered())return document.getElementById("nprogress");l(document.documentElement,"nprogress-busy");var t=document.createElement("div");t.id="nprogress",t.innerHTML=o.template;var i,r=t.querySelector(o.barSelector),c=e?"-100":a(n.status||0),d=document.querySelector(o.parent);return s(r,{transition:"all 0 linear",transform:"translate3d("+c+"%,0,0)"}),o.showSpinner||(i=t.querySelector(o.spinnerSelector))&&h(i),d!=document.body&&l(d,"nprogress-custom-parent"),d.appendChild(t),t},n.remove=function(){d(document.documentElement,"nprogress-busy"),d(document.querySelector(o.parent),"nprogress-custom-parent");var e=document.getElementById("nprogress");e&&h(e)},n.isRendered=function(){return!!document.getElementById("nprogress")},n.getPositioningCSS=function(){var e=document.body.style,t="WebkitTransform"in e?"Webkit":"MozTransform"in e?"Moz":"msTransform"in e?"ms":"OTransform"in e?"O":"";return t+"Perspective"in e?"translate3d":t+"Transform"in e?"translate":"margin"};var r=function(){var e=[];function t(){var n=e.shift();n&&n(t)}return function(n){e.push(n),1==e.length&&t()}}(),s=function(){var e=["Webkit","O","Moz","ms"],t={};function n(n){return n=n.replace(/^-ms-/,"ms-").replace(/-([\da-z])/gi,(function(e,t){return t.toUpperCase()})),t[n]||(t[n]=function(t){var n=document.body.style;if(t in n)return t;for(var o,i=e.length,a=t.charAt(0).toUpperCase()+t.slice(1);i--;)if((o=e[i]+a)in n)return o;return t}(n))}function o(e,t,o){t=n(t),e.style[t]=o}return function(e,t){var n,i,a=arguments;if(2==a.length)for(n in t)void 0!==(i=t[n])&&t.hasOwnProperty(n)&&o(e,n,i);else o(e,a[1],a[2])}}();function c(e,t){return("string"==typeof e?e:u(e)).indexOf(" "+t+" ")>=0}function l(e,t){var n=u(e),o=n+t;c(n,t)||(e.className=o.substring(1))}function d(e,t){var n,o=u(e);c(e,t)&&(n=o.replace(" "+t+" "," "),e.className=n.substring(1,n.length-1))}function u(e){return(" "+(e.className||"")+" ").replace(/\s+/gi," ")}function h(e){e&&e.parentNode&&e.parentNode.removeChild(e)}return 
n})?o.call(t,n,t,e):o)||(e.exports=i)},function(e,t,n){"use strict";var o=n(8),i=String,a=TypeError;e.exports=function(e){if(o(e))return e;throw new a(i(e)+" is not an object")}},function(e,t,n){"use strict";var o=n(1),i=n(50).f,a=n(13),r=n(95),s=n(36),c=n(63),l=n(124);e.exports=function(e,t){var n,d,u,h,p,m=e.target,f=e.global,w=e.stat;if(n=f?o:w?o[m]||s(m,{}):o[m]&&o[m].prototype)for(d in t){if(h=t[d],u=e.dontCallGetSet?(p=i(n,d))&&p.value:n[d],!l(f?d:m+(w?".":"#")+d,e.forced)&&void 0!==u){if(typeof h==typeof u)continue;c(h,u)}(e.sham||u&&u.sham)&&a(h,"sham",!0),r(n,d,h,e)}}},function(e,t,n){"use strict";var o=n(4);e.exports=!o((function(){var e=function(){}.bind();return"function"!=typeof e||e.hasOwnProperty("prototype")}))},function(e,t,n){"use strict";var o=n(47),i=n(51);e.exports=function(e){return o(i(e))}},function(e,t,n){"use strict";var o=n(1),i=n(2),a=function(e){return i(e)?e:void 0};e.exports=function(e,t){return arguments.length<2?a(o[e]):o[e]&&o[e][t]}},function(e,t,n){"use strict";var o=n(2),i=n(111),a=TypeError;e.exports=function(e){if(o(e))return e;throw new a(i(e)+" is not a function")}},function(e,t,n){"use strict";var o=n(1),i=n(59),a=n(9),r=n(61),s=n(57),c=n(56),l=o.Symbol,d=i("wks"),u=c?l.for||l:l&&l.withoutSetter||r;e.exports=function(e){return a(d,e)||(d[e]=s&&a(l,e)?l[e]:u("Symbol."+e)),d[e]}},function(e,t,n){"use strict";var o=n(51),i=Object;e.exports=function(e){return i(o(e))}},function(e,t,n){"use strict";var o=n(122);e.exports=function(e){return o(e.length)}},function(e,t,n){"use strict";var o=n(26),i=Function.prototype.call;e.exports=o?i.bind(i):function(){return i.apply(i,arguments)}},function(e,t,n){"use strict";e.exports=function(e,t){return{enumerable:!(1&e),configurable:!(2&e),writable:!(4&e),value:t}}},function(e,t,n){"use strict";var o=n(60),i=n(1),a=n(36),r=e.exports=i["__core-js_shared__"]||a("__core-js_shared__",{});(r.versions||(r.versions=[])).push({version:"3.36.0",mode:o?"pure":"global",copyright:"© 2014-2024 Denis Pushkarev (zloirock.ru)",license:"https://github.com/zloirock/core-js/blob/v3.36.0/LICENSE",source:"https://github.com/zloirock/core-js"})},function(e,t,n){"use strict";var o=n(1),i=Object.defineProperty;e.exports=function(e,t){try{i(o,e,{value:t,configurable:!0,writable:!0})}catch(n){o[e]=t}return t}},function(e,t,n){var o=n(147),i=n(11),a=Object.prototype,r=a.hasOwnProperty,s=a.propertyIsEnumerable,c=o(function(){return arguments}())?o:function(e){return i(e)&&r.call(e,"callee")&&!s.call(e,"callee")};e.exports=c},function(e,t,n){var o=n(10)(n(7),"Map");e.exports=o},function(e,t){e.exports=function(e){var t=typeof e;return null!=e&&("object"==t||"function"==t)}},function(e,t,n){var o=n(167),i=n(174),a=n(176),r=n(177),s=n(178);function c(e){var t=-1,n=null==e?0:e.length;for(this.clear();++t-1&&e%1==0&&e<=9007199254740991}},function(e,t,n){var o=n(6),i=n(44),a=/\.|\[(?:[^[\]]*|(["'])(?:(?!\1)[^\\]|\\.)*?\1)\]/,r=/^\w*$/;e.exports=function(e,t){if(o(e))return!1;var n=typeof e;return!("number"!=n&&"symbol"!=n&&"boolean"!=n&&null!=e&&!i(e))||(r.test(e)||!a.test(e)||null!=t&&e in Object(t))}},function(e,t,n){var o=n(12),i=n(11);e.exports=function(e){return"symbol"==typeof e||i(e)&&"[object Symbol]"==o(e)}},function(e,t){e.exports=function(e){return e}},function(e,t){function n(e,t){for(var n=0,o=e.length-1;o>=0;o--){var i=e[o];"."===i?e.splice(o,1):".."===i?(e.splice(o,1),n++):n&&(e.splice(o,1),n--)}if(t)for(;n--;n)e.unshift("..");return e}function o(e,t){if(e.filter)return e.filter(t);for(var n=[],o=0;o=-1&&!t;i--){var 
a=i>=0?arguments[i]:process.cwd();if("string"!=typeof a)throw new TypeError("Arguments to path.resolve must be strings");a&&(e=a+"/"+e,t="/"===a.charAt(0))}return(t?"/":"")+(e=n(o(e.split("/"),(function(e){return!!e})),!t).join("/"))||"."},t.normalize=function(e){var a=t.isAbsolute(e),r="/"===i(e,-1);return(e=n(o(e.split("/"),(function(e){return!!e})),!a).join("/"))||a||(e="."),e&&r&&(e+="/"),(a?"/":"")+e},t.isAbsolute=function(e){return"/"===e.charAt(0)},t.join=function(){var e=Array.prototype.slice.call(arguments,0);return t.normalize(o(e,(function(e,t){if("string"!=typeof e)throw new TypeError("Arguments to path.join must be strings");return e})).join("/"))},t.relative=function(e,n){function o(e){for(var t=0;t=0&&""===e[n];n--);return t>n?[]:e.slice(t,n-t+1)}e=t.resolve(e).substr(1),n=t.resolve(n).substr(1);for(var i=o(e.split("/")),a=o(n.split("/")),r=Math.min(i.length,a.length),s=r,c=0;c=1;--a)if(47===(t=e.charCodeAt(a))){if(!i){o=a;break}}else i=!1;return-1===o?n?"/":".":n&&1===o?"/":e.slice(0,o)},t.basename=function(e,t){var n=function(e){"string"!=typeof e&&(e+="");var t,n=0,o=-1,i=!0;for(t=e.length-1;t>=0;--t)if(47===e.charCodeAt(t)){if(!i){n=t+1;break}}else-1===o&&(i=!1,o=t+1);return-1===o?"":e.slice(n,o)}(e);return t&&n.substr(-1*t.length)===t&&(n=n.substr(0,n.length-t.length)),n},t.extname=function(e){"string"!=typeof e&&(e+="");for(var t=-1,n=0,o=-1,i=!0,a=0,r=e.length-1;r>=0;--r){var s=e.charCodeAt(r);if(47!==s)-1===o&&(i=!1,o=r+1),46===s?-1===t?t=r:1!==a&&(a=1):-1!==t&&(a=-1);else if(!i){n=r+1;break}}return-1===t||-1===o||0===a||1===a&&t===o-1&&t===n+1?"":e.slice(t,o)};var i="b"==="ab".substr(-1)?function(e,t,n){return e.substr(t,n)}:function(e,t,n){return t<0&&(t=e.length+t),e.substr(t,n)}},function(e,t,n){"use strict";var o=n(3),i=n(4),a=n(16),r=Object,s=o("".split);e.exports=i((function(){return!r("z").propertyIsEnumerable(0)}))?function(e){return"String"===a(e)?s(e,""):r(e)}:r},function(e,t,n){"use strict";e.exports={}},function(e,t){e.exports=function(e){return e.webpackPolyfill||(e.deprecate=function(){},e.paths=[],e.children||(e.children=[]),Object.defineProperty(e,"loaded",{enumerable:!0,get:function(){return e.l}}),Object.defineProperty(e,"id",{enumerable:!0,get:function(){return e.i}}),e.webpackPolyfill=1),e}},function(e,t,n){"use strict";var o=n(5),i=n(33),a=n(107),r=n(34),s=n(27),c=n(53),l=n(9),d=n(62),u=Object.getOwnPropertyDescriptor;t.f=o?u:function(e,t){if(e=s(e),t=c(t),d)try{return u(e,t)}catch(e){}if(l(e,t))return r(!i(a.f,e,t),e[t])}},function(e,t,n){"use strict";var o=n(52),i=TypeError;e.exports=function(e){if(o(e))throw new i("Can't call method on "+e);return e}},function(e,t,n){"use strict";e.exports=function(e){return null==e}},function(e,t,n){"use strict";var o=n(108),i=n(54);e.exports=function(e){var t=o(e,"string");return i(t)?t:t+""}},function(e,t,n){"use strict";var o=n(28),i=n(2),a=n(55),r=n(56),s=Object;e.exports=r?function(e){return"symbol"==typeof e}:function(e){var t=o("Symbol");return i(t)&&a(t.prototype,s(e))}},function(e,t,n){"use strict";var o=n(3);e.exports=o({}.isPrototypeOf)},function(e,t,n){"use strict";var o=n(57);e.exports=o&&!Symbol.sham&&"symbol"==typeof Symbol.iterator},function(e,t,n){"use strict";var o=n(58),i=n(4),a=n(1).String;e.exports=!!Object.getOwnPropertySymbols&&!i((function(){var e=Symbol("symbol detection");return!a(e)||!(Object(e)instanceof Symbol)||!Symbol.sham&&o&&o<41}))},function(e,t,n){"use strict";var 
o,i,a=n(1),r=n(109),s=a.process,c=a.Deno,l=s&&s.versions||c&&c.version,d=l&&l.v8;d&&(i=(o=d.split("."))[0]>0&&o[0]<4?1:+(o[0]+o[1])),!i&&r&&(!(o=r.match(/Edge\/(\d+)/))||o[1]>=74)&&(o=r.match(/Chrome\/(\d+)/))&&(i=+o[1]),e.exports=i},function(e,t,n){"use strict";var o=n(35);e.exports=function(e,t){return o[e]||(o[e]=t||{})}},function(e,t,n){"use strict";e.exports=!1},function(e,t,n){"use strict";var o=n(3),i=0,a=Math.random(),r=o(1..toString);e.exports=function(e){return"Symbol("+(void 0===e?"":e)+")_"+r(++i+a,36)}},function(e,t,n){"use strict";var o=n(5),i=n(4),a=n(100);e.exports=!o&&!i((function(){return 7!==Object.defineProperty(a("div"),"a",{get:function(){return 7}}).a}))},function(e,t,n){"use strict";var o=n(9),i=n(117),a=n(50),r=n(15);e.exports=function(e,t,n){for(var s=i(t),c=r.f,l=a.f,d=0;dd))return!1;var h=c.get(e),p=c.get(t);if(h&&p)return h==t&&p==e;var m=-1,f=!0,w=2&n?new o:void 0;for(c.set(e,t),c.set(t,e);++m-1&&e%1==0&&e]/;e.exports=function(e){var t,n=""+e,i=o.exec(n);if(!i)return n;var a="",r=0,s=0;for(r=i.index;rl;)i(o,n=t[l++])&&(~r(d,n)||c(d,n));return d}},function(e,t,n){"use strict";var o=n(25),i=n(1),a=n(128),r=n(129),s=i.WebAssembly,c=7!==new Error("e",{cause:7}).cause,l=function(e,t){var n={};n[e]=r(e,t,c),o({global:!0,constructor:!0,arity:1,forced:c},n)},d=function(e,t){if(s&&s[e]){var n={};n[e]=r("WebAssembly."+e,t,c),o({target:"WebAssembly",stat:!0,constructor:!0,arity:1,forced:c},n)}};l("Error",(function(e){return function(t){return a(e,this,arguments)}})),l("EvalError",(function(e){return function(t){return a(e,this,arguments)}})),l("RangeError",(function(e){return function(t){return a(e,this,arguments)}})),l("ReferenceError",(function(e){return function(t){return a(e,this,arguments)}})),l("SyntaxError",(function(e){return function(t){return a(e,this,arguments)}})),l("TypeError",(function(e){return function(t){return a(e,this,arguments)}})),l("URIError",(function(e){return function(t){return a(e,this,arguments)}})),d("CompileError",(function(e){return function(t){return a(e,this,arguments)}})),d("LinkError",(function(e){return function(t){return a(e,this,arguments)}})),d("RuntimeError",(function(e){return function(t){return a(e,this,arguments)}}))},function(e,t,n){e.exports=n(249)},function(e,t,n){"use strict";var o=n(25),i=n(125).left,a=n(126),r=n(58);o({target:"Array",proto:!0,forced:!n(127)&&r>79&&r<83||!a("reduce")},{reduce:function(e){var t=arguments.length;return i(this,e,t,t>1?arguments[1]:void 0)}})},function(e,t,n){"use strict";var o={}.propertyIsEnumerable,i=Object.getOwnPropertyDescriptor,a=i&&!o.call({1:2},1);t.f=a?function(e){var t=i(this,e);return!!t&&t.enumerable}:o},function(e,t,n){"use strict";var o=n(33),i=n(8),a=n(54),r=n(110),s=n(112),c=n(30),l=TypeError,d=c("toPrimitive");e.exports=function(e,t){if(!i(e)||a(e))return e;var n,c=r(e,d);if(c){if(void 0===t&&(t="default"),n=o(c,e,t),!i(n)||a(n))return n;throw new l("Can't convert object to primitive value")}return void 0===t&&(t="number"),s(e,t)}},function(e,t,n){"use strict";e.exports="undefined"!=typeof navigator&&String(navigator.userAgent)||""},function(e,t,n){"use strict";var o=n(29),i=n(52);e.exports=function(e,t){var n=e[t];return i(n)?void 0:o(n)}},function(e,t,n){"use strict";var o=String;e.exports=function(e){try{return o(e)}catch(e){return"Object"}}},function(e,t,n){"use strict";var o=n(33),i=n(2),a=n(8),r=TypeError;e.exports=function(e,t){var n,s;if("string"===t&&i(n=e.toString)&&!a(s=o(n,e)))return s;if(i(n=e.valueOf)&&!a(s=o(n,e)))return 
s;if("string"!==t&&i(n=e.toString)&&!a(s=o(n,e)))return s;throw new r("Can't convert object to primitive value")}},function(e,t,n){"use strict";var o=n(5),i=n(9),a=Function.prototype,r=o&&Object.getOwnPropertyDescriptor,s=i(a,"name"),c=s&&"something"===function(){}.name,l=s&&(!o||o&&r(a,"name").configurable);e.exports={EXISTS:s,PROPER:c,CONFIGURABLE:l}},function(e,t,n){"use strict";var o=n(3),i=n(2),a=n(35),r=o(Function.toString);i(a.inspectSource)||(a.inspectSource=function(e){return r(e)}),e.exports=a.inspectSource},function(e,t,n){"use strict";var o,i,a,r=n(116),s=n(1),c=n(8),l=n(13),d=n(9),u=n(35),h=n(102),p=n(48),m=s.TypeError,f=s.WeakMap;if(r||u.state){var w=u.state||(u.state=new f);w.get=w.get,w.has=w.has,w.set=w.set,o=function(e,t){if(w.has(e))throw new m("Object already initialized");return t.facade=e,w.set(e,t),t},i=function(e){return w.get(e)||{}},a=function(e){return w.has(e)}}else{var g=h("state");p[g]=!0,o=function(e,t){if(d(e,g))throw new m("Object already initialized");return t.facade=e,l(e,g,t),t},i=function(e){return d(e,g)?e[g]:{}},a=function(e){return d(e,g)}}e.exports={set:o,get:i,has:a,enforce:function(e){return a(e)?i(e):o(e,{})},getterFor:function(e){return function(t){var n;if(!c(t)||(n=i(t)).type!==e)throw new m("Incompatible receiver, "+e+" required");return n}}}},function(e,t,n){"use strict";var o=n(1),i=n(2),a=o.WeakMap;e.exports=i(a)&&/native code/.test(String(a))},function(e,t,n){"use strict";var o=n(28),i=n(3),a=n(118),r=n(123),s=n(24),c=i([].concat);e.exports=o("Reflect","ownKeys")||function(e){var t=a.f(s(e)),n=r.f;return n?c(t,n(e)):t}},function(e,t,n){"use strict";var o=n(103),i=n(99).concat("length","prototype");t.f=Object.getOwnPropertyNames||function(e){return o(e,i)}},function(e,t,n){"use strict";var o=n(27),i=n(120),a=n(32),r=function(e){return function(t,n,r){var s=o(t),c=a(s);if(0===c)return!e&&-1;var l,d=i(r,c);if(e&&n!=n){for(;c>d;)if((l=s[d++])!=l)return!0}else for(;c>d;d++)if((e||d in s)&&s[d]===n)return e||d||0;return!e&&-1}};e.exports={includes:r(!0),indexOf:r(!1)}},function(e,t,n){"use strict";var o=n(64),i=Math.max,a=Math.min;e.exports=function(e,t){var n=o(e);return n<0?i(n+t,0):a(n,t)}},function(e,t,n){"use strict";var o=Math.ceil,i=Math.floor;e.exports=Math.trunc||function(e){var t=+e;return(t>0?i:o)(t)}},function(e,t,n){"use strict";var o=n(64),i=Math.min;e.exports=function(e){var t=o(e);return t>0?i(t,9007199254740991):0}},function(e,t,n){"use strict";t.f=Object.getOwnPropertySymbols},function(e,t,n){"use strict";var o=n(4),i=n(2),a=/#|\.prototype\./,r=function(e,t){var n=c[s(e)];return n===d||n!==l&&(i(t)?o(t):!!t)},s=r.normalize=function(e){return String(e).replace(a,".").toLowerCase()},c=r.data={},l=r.NATIVE="N",d=r.POLYFILL="P";e.exports=r},function(e,t,n){"use strict";var o=n(29),i=n(31),a=n(47),r=n(32),s=TypeError,c="Reduce of empty array with no initial value",l=function(e){return function(t,n,l,d){var u=i(t),h=a(u),p=r(u);if(o(n),0===p&&l<2)throw new s(c);var m=e?p-1:0,f=e?-1:1;if(l<2)for(;;){if(m in h){d=h[m],m+=f;break}if(m+=f,e?m<0:p<=m)throw new s(c)}for(;e?m>=0:p>m;m+=f)m in h&&(d=n(d,h[m],m,u));return d}};e.exports={left:l(!1),right:l(!0)}},function(e,t,n){"use strict";var o=n(4);e.exports=function(e,t){var n=[][e];return!!n&&o((function(){n.call(null,t||function(){return 1},1)}))}},function(e,t,n){"use strict";var o=n(1),i=n(16);e.exports="process"===i(o.process)},function(e,t,n){"use strict";var o=n(26),i=Function.prototype,a=i.apply,r=i.call;e.exports="object"==typeof 
Reflect&&Reflect.apply||(o?r.bind(a):function(){return r.apply(a,arguments)})},function(e,t,n){"use strict";var o=n(28),i=n(9),a=n(13),r=n(55),s=n(65),c=n(63),l=n(133),d=n(134),u=n(135),h=n(138),p=n(139),m=n(5),f=n(60);e.exports=function(e,t,n,w){var g=w?2:1,y=e.split("."),v=y[y.length-1],b=o.apply(null,y);if(b){var k=b.prototype;if(!f&&i(k,"cause")&&delete k.cause,!n)return b;var x=o("Error"),_=t((function(e,t){var n=u(w?t:e,void 0),o=w?new b(e):new b;return void 0!==n&&a(o,"message",n),p(o,_,o.stack,2),this&&r(k,this)&&d(o,this,_),arguments.length>g&&h(o,arguments[g]),o}));if(_.prototype=k,"Error"!==v?s?s(_,x):c(_,x,{name:!0}):m&&"stackTraceLimit"in b&&(l(_,b,"stackTraceLimit"),l(_,b,"prepareStackTrace")),c(_,b),!f)try{k.name!==v&&a(k,"name",v),k.constructor=_}catch(e){}return _}}},function(e,t,n){"use strict";var o=n(3),i=n(29);e.exports=function(e,t,n){try{return o(i(Object.getOwnPropertyDescriptor(e,t)[n]))}catch(e){}}},function(e,t,n){"use strict";var o=n(132),i=String,a=TypeError;e.exports=function(e){if(o(e))return e;throw new a("Can't set "+i(e)+" as a prototype")}},function(e,t,n){"use strict";var o=n(8);e.exports=function(e){return o(e)||null===e}},function(e,t,n){"use strict";var o=n(15).f;e.exports=function(e,t,n){n in e||o(e,n,{configurable:!0,get:function(){return t[n]},set:function(e){t[n]=e}})}},function(e,t,n){"use strict";var o=n(2),i=n(8),a=n(65);e.exports=function(e,t,n){var r,s;return a&&o(r=t.constructor)&&r!==n&&i(s=r.prototype)&&s!==n.prototype&&a(e,s),e}},function(e,t,n){"use strict";var o=n(96);e.exports=function(e,t){return void 0===e?arguments.length<2?"":t:o(e)}},function(e,t,n){"use strict";var o=n(137),i=n(2),a=n(16),r=n(30)("toStringTag"),s=Object,c="Arguments"===a(function(){return arguments}());e.exports=o?a:function(e){var t,n,o;return void 0===e?"Undefined":null===e?"Null":"string"==typeof(n=function(e,t){try{return e[t]}catch(e){}}(t=s(e),r))?n:c?a(t):"Object"===(o=a(t))&&i(t.callee)?"Arguments":o}},function(e,t,n){"use strict";var o={};o[n(30)("toStringTag")]="z",e.exports="[object z]"===String(o)},function(e,t,n){"use strict";var o=n(8),i=n(13);e.exports=function(e,t){o(t)&&"cause"in t&&i(e,"cause",t.cause)}},function(e,t,n){"use strict";var o=n(13),i=n(140),a=n(141),r=Error.captureStackTrace;e.exports=function(e,t,n,s){a&&(r?r(e,t):o(e,"stack",i(n,s)))}},function(e,t,n){"use strict";var o=n(3),i=Error,a=o("".replace),r=String(new i("zxcasd").stack),s=/\n\s*at [^:]*:[^\n]*/,c=s.test(r);e.exports=function(e,t){if(c&&"string"==typeof e&&!i.prepareStackTrace)for(;t--;)e=a(e,s,"");return e}},function(e,t,n){"use strict";var o=n(4),i=n(34);e.exports=!o((function(){var e=new Error("a");return!("stack"in e)||(Object.defineProperty(e,"stack",i(1,7)),7!==e.stack)}))},function(e,t,n){"use strict";var o=n(5),i=n(143),a=TypeError,r=Object.getOwnPropertyDescriptor,s=o&&!function(){if(void 0!==this)return!0;try{Object.defineProperty([],"length",{writable:!1}).length=1}catch(e){return e instanceof TypeError}}();e.exports=s?function(e,t){if(i(e)&&!r(e,"length").writable)throw new a("Cannot set read only .length");return e.length=t}:function(e,t){return e.length=t}},function(e,t,n){"use strict";var o=n(16);e.exports=Array.isArray||function(e){return"Array"===o(e)}},function(e,t,n){"use strict";var o=TypeError;e.exports=function(e){if(e>9007199254740991)throw o("Maximum allowed index exceeded");return e}},function(e,t,n){var o=n(66),i=n(146);e.exports=function e(t,n,a,r,s){var 
c=-1,l=t.length;for(a||(a=i),s||(s=[]);++c0&&a(d)?n>1?e(d,n-1,a,r,s):o(s,d):r||(s[s.length]=d)}return s}},function(e,t,n){var o=n(14),i=n(37),a=n(6),r=o?o.isConcatSpreadable:void 0;e.exports=function(e){return a(e)||i(e)||!!(r&&e&&e[r])}},function(e,t,n){var o=n(12),i=n(11);e.exports=function(e){return i(e)&&"[object Arguments]"==o(e)}},function(e,t,n){var o=n(14),i=Object.prototype,a=i.hasOwnProperty,r=i.toString,s=o?o.toStringTag:void 0;e.exports=function(e){var t=a.call(e,s),n=e[s];try{e[s]=void 0;var o=!0}catch(e){}var i=r.call(e);return o&&(t?e[s]=n:delete e[s]),i}},function(e,t){var n=Object.prototype.toString;e.exports=function(e){return n.call(e)}},function(e,t,n){var o=n(151),i=n(207),a=n(45),r=n(6),s=n(218);e.exports=function(e){return"function"==typeof e?e:null==e?a:"object"==typeof e?r(e)?i(e[0],e[1]):o(e):s(e)}},function(e,t,n){var o=n(152),i=n(206),a=n(83);e.exports=function(e){var t=i(e);return 1==t.length&&t[0][2]?a(t[0][0],t[0][1]):function(n){return n===e||o(n,e,t)}}},function(e,t,n){var o=n(68),i=n(72);e.exports=function(e,t,n,a){var r=n.length,s=r,c=!a;if(null==e)return!s;for(e=Object(e);r--;){var l=n[r];if(c&&l[2]?l[1]!==e[l[0]]:!(l[0]in e))return!1}for(;++r-1}},function(e,t,n){var o=n(18);e.exports=function(e,t){var n=this.__data__,i=o(n,e);return i<0?(++this.size,n.push([e,t])):n[i][1]=t,this}},function(e,t,n){var o=n(17);e.exports=function(){this.__data__=new o,this.size=0}},function(e,t){e.exports=function(e){var t=this.__data__,n=t.delete(e);return this.size=t.size,n}},function(e,t){e.exports=function(e){return this.__data__.get(e)}},function(e,t){e.exports=function(e){return this.__data__.has(e)}},function(e,t,n){var o=n(17),i=n(38),a=n(40);e.exports=function(e,t){var n=this.__data__;if(n instanceof o){var r=n.__data__;if(!i||r.length<199)return r.push([e,t]),this.size=++n.size,this;n=this.__data__=new a(r)}return n.set(e,t),this.size=n.size,this}},function(e,t,n){var o=n(70),i=n(164),a=n(39),r=n(71),s=/^\[object .+?Constructor\]$/,c=Function.prototype,l=Object.prototype,d=c.toString,u=l.hasOwnProperty,h=RegExp("^"+d.call(u).replace(/[\\^$.*+?()[\]{}|]/g,"\\$&").replace(/hasOwnProperty|(function).*?(?=\\\()| for .+?(?=\\\])/g,"$1.*?")+"$");e.exports=function(e){return!(!a(e)||i(e))&&(o(e)?h:s).test(r(e))}},function(e,t,n){var o,i=n(165),a=(o=/[^.]+$/.exec(i&&i.keys&&i.keys.IE_PROTO||""))?"Symbol(src)_1."+o:"";e.exports=function(e){return!!a&&a in e}},function(e,t,n){var o=n(7)["__core-js_shared__"];e.exports=o},function(e,t){e.exports=function(e,t){return null==e?void 0:e[t]}},function(e,t,n){var o=n(168),i=n(17),a=n(38);e.exports=function(){this.size=0,this.__data__={hash:new o,map:new(a||i),string:new o}}},function(e,t,n){var o=n(169),i=n(170),a=n(171),r=n(172),s=n(173);function c(e){var t=-1,n=null==e?0:e.length;for(this.clear();++t0){if(++t>=800)return arguments[0]}else t=0;return e.apply(void 0,arguments)}}},function(e,t,n){var o=n(74),i=n(230),a=n(235),r=n(75),s=n(236),c=n(41);e.exports=function(e,t,n){var l=-1,d=i,u=e.length,h=!0,p=[],m=p;if(n)h=!1,d=a;else if(u>=200){var f=t?null:s(e);if(f)return c(f);h=!1,d=r,m=new o}else m=t?[]:p;e:for(;++l-1}},function(e,t,n){var o=n(232),i=n(233),a=n(234);e.exports=function(e,t,n){return t==t?a(e,t,n):o(e,i,n)}},function(e,t){e.exports=function(e,t,n,o){for(var i=e.length,a=n+(o?1:-1);o?a--:++a=0&&Math.floor(t)===t&&isFinite(e)}function f(e){return r(e)&&"function"==typeof e.then&&"function"==typeof e.catch}function w(e){return null==e?"":Array.isArray(e)||h(e)&&e.toString===u?JSON.stringify(e,g,2):String(e)}function 
g(e,t){return t&&t.__v_isRef?t.value:t}function y(e){var t=parseFloat(e);return isNaN(t)?e:t}function v(e,t){for(var n=Object.create(null),o=e.split(","),i=0;i-1)return e.splice(o,1)}}var x=Object.prototype.hasOwnProperty;function _(e,t){return x.call(e,t)}function T(e){var t=Object.create(null);return function(n){return t[n]||(t[n]=e(n))}}var S=/-(\w)/g,C=T((function(e){return e.replace(S,(function(e,t){return t?t.toUpperCase():""}))})),I=T((function(e){return e.charAt(0).toUpperCase()+e.slice(1)})),A=/\B([A-Z])/g,E=T((function(e){return e.replace(A,"-$1").toLowerCase()}));var P=Function.prototype.bind?function(e,t){return e.bind(t)}:function(e,t){function n(n){var o=arguments.length;return o?o>1?e.apply(t,arguments):e.call(t,n):e.call(t)}return n._length=e.length,n};function W(e,t){t=t||0;for(var n=e.length-t,o=new Array(n);n--;)o[n]=e[n+t];return o}function q(e,t){for(var n in t)e[n]=t[n];return e}function D(e){for(var t={},n=0;n0,Z=Q&&Q.indexOf("edge/")>0;Q&&Q.indexOf("android");var ee=Q&&/iphone|ipad|ipod|ios/.test(Q);Q&&/chrome\/\d+/.test(Q),Q&&/phantomjs/.test(Q);var te,ne=Q&&Q.match(/firefox\/(\d+)/),oe={}.watch,ie=!1;if(K)try{var ae={};Object.defineProperty(ae,"passive",{get:function(){ie=!0}}),window.addEventListener("test-passive",null,ae)}catch(e){}var re=function(){return void 0===te&&(te=!K&&"undefined"!=typeof global&&(global.process&&"server"===global.process.env.VUE_ENV)),te},se=K&&window.__VUE_DEVTOOLS_GLOBAL_HOOK__;function ce(e){return"function"==typeof e&&/native code/.test(e.toString())}var le,de="undefined"!=typeof Symbol&&ce(Symbol)&&"undefined"!=typeof Reflect&&ce(Reflect.ownKeys);le="undefined"!=typeof Set&&ce(Set)?Set:function(){function e(){this.set=Object.create(null)}return e.prototype.has=function(e){return!0===this.set[e]},e.prototype.add=function(e){this.set[e]=!0},e.prototype.clear=function(){this.set=Object.create(null)},e}();var ue=null;function he(e){void 0===e&&(e=null),e||ue&&ue._scope.off(),ue=e,e&&e._scope.on()}var pe=function(){function e(e,t,n,o,i,a,r,s){this.tag=e,this.data=t,this.children=n,this.text=o,this.elm=i,this.ns=void 0,this.context=a,this.fnContext=void 0,this.fnOptions=void 0,this.fnScopeId=void 0,this.key=t&&t.key,this.componentOptions=r,this.componentInstance=void 0,this.parent=void 0,this.raw=!1,this.isStatic=!1,this.isRootInsert=!0,this.isComment=!1,this.isCloned=!1,this.isOnce=!1,this.asyncFactory=s,this.asyncMeta=void 0,this.isAsyncPlaceholder=!1}return Object.defineProperty(e.prototype,"child",{get:function(){return this.componentInstance},enumerable:!1,configurable:!0}),e}(),me=function(e){void 0===e&&(e="");var t=new pe;return t.text=e,t.isComment=!0,t};function fe(e){return new pe(void 0,void 0,void 0,String(e))}function we(e){var t=new pe(e.tag,e.data,e.children&&e.children.slice(),e.text,e.elm,e.context,e.componentOptions,e.asyncFactory);return t.ns=e.ns,t.isStatic=e.isStatic,t.key=e.key,t.isComment=e.isComment,t.fnContext=e.fnContext,t.fnOptions=e.fnOptions,t.fnScopeId=e.fnScopeId,t.asyncMeta=e.asyncMeta,t.isCloned=!0,t}"function"==typeof SuppressedError&&SuppressedError;var ge=0,ye=[],ve=function(){function e(){this._pending=!1,this.id=ge++,this.subs=[]}return e.prototype.addSub=function(e){this.subs.push(e)},e.prototype.removeSub=function(e){this.subs[this.subs.indexOf(e)]=null,this._pending||(this._pending=!0,ye.push(this))},e.prototype.depend=function(t){e.target&&e.target.addDep(this)},e.prototype.notify=function(e){var t=this.subs.filter((function(e){return e}));for(var 
n=0,o=t.length;n0&&(Qe((l=e(l,"".concat(n||"","_").concat(o)))[0])&&Qe(u)&&(h[d]=fe(u.text+l[0].text),l.shift()),h.push.apply(h,l)):c(l)?Qe(u)?h[d]=fe(u.text+l):""!==l&&h.push(fe(l)):Qe(l)&&Qe(u)?h[d]=fe(u.text+l.text):(s(t._isVList)&&r(l.tag)&&a(l.key)&&r(n)&&(l.key="__vlist".concat(n,"_").concat(o,"__")),h.push(l)));return h}(e):void 0}function Qe(e){return r(e)&&r(e.text)&&!1===e.isComment}function Xe(e,t){var n,o,a,s,c=null;if(i(e)||"string"==typeof e)for(c=new Array(e.length),n=0,o=e.length;n0,s=t?!!t.$stable:!r,c=t&&t.$key;if(t){if(t._normalized)return t._normalized;if(s&&i&&i!==o&&c===i.$key&&!r&&!i.$hasNormal)return i;for(var l in a={},t)t[l]&&"$"!==l[0]&&(a[l]=wt(e,n,l,t[l]))}else a={};for(var d in n)d in a||(a[d]=gt(n,d));return t&&Object.isExtensible(t)&&(t._normalized=a),B(a,"$stable",s),B(a,"$key",c),B(a,"$hasNormal",r),a}function wt(e,t,n,o){var a=function(){var t=ue;he(e);var n=arguments.length?o.apply(null,arguments):o({}),a=(n=n&&"object"==typeof n&&!i(n)?[n]:Ke(n))&&n[0];return he(t),n&&(!a||1===n.length&&a.isComment&&!mt(a))?void 0:n};return o.proxy&&Object.defineProperty(t,n,{get:a,enumerable:!0,configurable:!0}),a}function gt(e,t){return function(){return e[t]}}function yt(e){return{get attrs(){if(!e._attrsProxy){var t=e._attrsProxy={};B(t,"_v_attr_proxy",!0),vt(t,e.$attrs,o,e,"$attrs")}return e._attrsProxy},get listeners(){e._listenersProxy||vt(e._listenersProxy={},e.$listeners,o,e,"$listeners");return e._listenersProxy},get slots(){return function(e){e._slotsProxy||kt(e._slotsProxy={},e.$scopedSlots);return e._slotsProxy}(e)},emit:P(e.$emit,e),expose:function(t){t&&Object.keys(t).forEach((function(n){return Fe(e,t,n)}))}}}function vt(e,t,n,o,i){var a=!1;for(var r in t)r in e?t[r]!==n[r]&&(a=!0):(a=!0,bt(e,r,o,i));for(var r in e)r in t||(a=!0,delete e[r]);return a}function bt(e,t,n,o){Object.defineProperty(e,t,{enumerable:!0,configurable:!0,get:function(){return n[o][t]}})}function kt(e,t){for(var n in t)e[n]=t[n];for(var n in e)n in t||delete e[n]}var xt=null;function _t(e,t){return(e.__esModule||de&&"Module"===e[Symbol.toStringTag])&&(e=e.default),d(e)?t.extend(e):e}function Tt(e){if(i(e))for(var t=0;tdocument.createEvent("Event").timeStamp&&(ln=function(){return dn.now()})}var un=function(e,t){if(e.post){if(!t.post)return 1}else if(t.post)return-1;return e.id-t.id};function hn(){var e,t;for(cn=ln(),rn=!0,tn.sort(un),sn=0;snsn&&tn[n].id>e.id;)n--;tn.splice(n+1,0,e)}else tn.push(e);an||(an=!0,Lt(hn))}}function mn(e,t){if(e){for(var n=Object.create(null),o=de?Reflect.ownKeys(e):Object.keys(e),i=0;i-1)if(a&&!_(i,"default"))r=!1;else if(""===r||r===E(e)){var c=Nn(String,i.type);(c<0||s-1:"string"==typeof e?e.split(",").indexOf(t)>-1:!!p(e)&&e.test(t)}function Xn(e,t){var n=e.cache,o=e.keys,i=e._vnode,a=e.$vnode;for(var r in n){var s=n[r];if(s){var c=s.name;c&&!t(c)&&Jn(n,r,o,i)}}a.componentOptions.children=void 0}function Jn(e,t,n,o){var i=e[t];!i||o&&i.tag===o.tag||i.componentInstance.$destroy(),e[t]=null,k(n,t)}Vn.prototype._init=function(e){var t=this;t._uid=Un++,t._isVue=!0,t.__v_skip=!0,t._scope=new He(!0),t._scope.parent=void 0,t._scope._vm=!0,e&&e._isComponent?function(e,t){var n=e.$options=Object.create(e.constructor.options),o=t._parentVnode;n.parent=t.parent,n._parentVnode=o;var i=o.componentOptions;n.propsData=i.propsData,n._parentListeners=i.listeners,n._renderChildren=i.children,n._componentTag=i.tag,t.render&&(n.render=t.render,n.staticRenderFns=t.staticRenderFns)}(t,e):t.$options=Pn(Bn(t.constructor),e||{},t),t._renderProxy=t,t._self=t,function(e){var 
t=e.$options,n=t.parent;if(n&&!t.abstract){for(;n.$options.abstract&&n.$parent;)n=n.$parent;n.$children.push(e)}e.$parent=n,e.$root=n?n.$root:e,e.$children=[],e.$refs={},e._provided=n?n._provided:Object.create(null),e._watcher=null,e._inactive=null,e._directInactive=!1,e._isMounted=!1,e._isDestroyed=!1,e._isBeingDestroyed=!1}(t),function(e){e._events=Object.create(null),e._hasHookEvent=!1;var t=e.$options._parentListeners;t&&Kt(e,t)}(t),function(e){e._vnode=null,e._staticTrees=null;var t=e.$options,n=e.$vnode=t._parentVnode,i=n&&n.context;e.$slots=ht(t._renderChildren,i),e.$scopedSlots=n?ft(e.$parent,n.data.scopedSlots,e.$slots):o,e._c=function(t,n,o,i){return St(e,t,n,o,i,!1)},e.$createElement=function(t,n,o,i){return St(e,t,n,o,i,!0)};var a=n&&n.data;qe(e,"$attrs",a&&a.attrs||o,null,!0),qe(e,"$listeners",t._parentListeners||o,null,!0)}(t),en(t,"beforeCreate",void 0,!1),function(e){var t=mn(e.$options.inject,e);t&&(Ae(!1),Object.keys(t).forEach((function(n){qe(e,n,t[n])})),Ae(!0))}(t),Ln(t),function(e){var t=e.$options.provide;if(t){var n=l(t)?t.call(e):t;if(!d(n))return;for(var o=$e(e),i=de?Reflect.ownKeys(n):Object.keys(n),a=0;a1?W(n):n;for(var o=W(arguments,1),i='event handler for "'.concat(e,'"'),a=0,r=n.length;aparseInt(this.max)&&Jn(e,t[0],t,this._vnode),this.vnodeToCache=null}}},created:function(){this.cache=Object.create(null),this.keys=[]},destroyed:function(){for(var e in this.cache)Jn(this.cache,e,this.keys)},mounted:function(){var e=this;this.cacheVNode(),this.$watch("include",(function(t){Xn(e,(function(e){return Qn(t,e)}))})),this.$watch("exclude",(function(t){Xn(e,(function(e){return!Qn(t,e)}))}))},updated:function(){this.cacheVNode()},render:function(){var e=this.$slots.default,t=Tt(e),n=t&&t.componentOptions;if(n){var o=Kn(n),i=this.include,a=this.exclude;if(i&&(!o||!Qn(i,o))||a&&o&&Qn(a,o))return t;var r=this.cache,s=this.keys,c=null==t.key?n.Ctor.cid+(n.tag?"::".concat(n.tag):""):t.key;r[c]?(t.componentInstance=r[c].componentInstance,k(s,c),s.push(c)):(this.vnodeToCache=t,this.keyToCache=c),t.data.keepAlive=!0}return t||e&&e[0]}}};!function(e){var t={get:function(){return $}};Object.defineProperty(e,"config",t),e.util={warn:_n,extend:q,mergeOptions:Pn,defineReactive:qe},e.set=De,e.delete=Oe,e.nextTick=Lt,e.observable=function(e){return We(e),e},e.options=Object.create(null),M.forEach((function(t){e.options[t+"s"]=Object.create(null)})),e.options._base=e,q(e.options.components,eo),function(e){e.use=function(e){var t=this._installedPlugins||(this._installedPlugins=[]);if(t.indexOf(e)>-1)return this;var n=W(arguments,1);return n.unshift(this),l(e.install)?e.install.apply(e,n):l(e)&&e.apply(null,n),t.push(e),this}}(e),function(e){e.mixin=function(e){return this.options=Pn(this.options,e),this}}(e),Yn(e),function(e){M.forEach((function(t){e[t]=function(e,n){return n?("component"===t&&h(n)&&(n.name=n.name||e,n=this.options._base.extend(n)),"directive"===t&&l(n)&&(n={bind:n,update:n}),this.options[t+"s"][e]=n,n):this.options[t+"s"][e]}}))}(e)}(Vn),Object.defineProperty(Vn.prototype,"$isServer",{get:re}),Object.defineProperty(Vn.prototype,"$ssrContext",{get:function(){return this.$vnode&&this.$vnode.ssrContext}}),Object.defineProperty(Vn,"FunctionalRenderContext",{value:fn}),Vn.version="2.7.16";var 
to=v("style,class"),no=v("input,textarea,option,select,progress"),oo=v("contenteditable,draggable,spellcheck"),io=v("events,caret,typing,plaintext-only"),ao=v("allowfullscreen,async,autofocus,autoplay,checked,compact,controls,declare,default,defaultchecked,defaultmuted,defaultselected,defer,disabled,enabled,formnovalidate,hidden,indeterminate,inert,ismap,itemscope,loop,multiple,muted,nohref,noresize,noshade,novalidate,nowrap,open,pauseonexit,readonly,required,reversed,scoped,seamless,selected,sortable,truespeed,typemustmatch,visible"),ro="http://www.w3.org/1999/xlink",so=function(e){return":"===e.charAt(5)&&"xlink"===e.slice(0,5)},co=function(e){return so(e)?e.slice(6,e.length):""},lo=function(e){return null==e||!1===e};function uo(e){for(var t=e.data,n=e,o=e;r(o.componentInstance);)(o=o.componentInstance._vnode)&&o.data&&(t=ho(o.data,t));for(;r(n=n.parent);)n&&n.data&&(t=ho(t,n.data));return function(e,t){if(r(e)||r(t))return po(e,mo(t));return""}(t.staticClass,t.class)}function ho(e,t){return{staticClass:po(e.staticClass,t.staticClass),class:r(e.class)?[e.class,t.class]:t.class}}function po(e,t){return e?t?e+" "+t:e:t||""}function mo(e){return Array.isArray(e)?function(e){for(var t,n="",o=0,i=e.length;o-1?zo(e,t,n):ao(t)?lo(n)?e.removeAttribute(t):(n="allowfullscreen"===t&&"EMBED"===e.tagName?"true":t,e.setAttribute(t,n)):oo(t)?e.setAttribute(t,function(e,t){return lo(t)||"false"===t?"false":"contenteditable"===e&&io(t)?t:"true"}(t,n)):so(t)?lo(n)?e.removeAttributeNS(ro,co(t)):e.setAttributeNS(ro,t,n):zo(e,t,n)}function zo(e,t,n){if(lo(n))e.removeAttribute(t);else{if(X&&!J&&"TEXTAREA"===e.tagName&&"placeholder"===t&&""!==n&&!e.__ieph){var o=function(t){t.stopImmediatePropagation(),e.removeEventListener("input",o)};e.addEventListener("input",o),e.__ieph=!0}e.setAttribute(t,n)}}var Lo={create:No,update:No};function Fo(e,t){var n=t.elm,o=t.data,i=e.data;if(!(a(o.staticClass)&&a(o.class)&&(a(i)||a(i.staticClass)&&a(i.class)))){var s=uo(t),c=n._transitionClasses;r(c)&&(s=po(s,mo(c))),s!==n._prevClass&&(n.setAttribute("class",s),n._prevClass=s)}}var Mo,Ho={create:Fo,update:Fo};function $o(e,t,n){var o=Mo;return function i(){var a=t.apply(null,arguments);null!==a&&Bo(e,i,n,o)}}var Go=Wt&&!(ne&&Number(ne[1])<=53);function Uo(e,t,n,o){if(Go){var i=cn,a=t;t=a._wrapper=function(e){if(e.target===e.currentTarget||e.timeStamp>=i||e.timeStamp<=0||e.target.ownerDocument!==document)return a.apply(this,arguments)}}Mo.addEventListener(e,t,ie?{capture:n,passive:o}:n)}function Bo(e,t,n,o){(o||Mo).removeEventListener(e,t._wrapper||t,n)}function Vo(e,t){if(!a(e.data.on)||!a(t.data.on)){var n=t.data.on||{},o=e.data.on||{};Mo=t.elm||e.elm,function(e){if(r(e.__r)){var t=X?"change":"input";e[t]=[].concat(e.__r,e[t]||[]),delete e.__r}r(e.__c)&&(e.change=[].concat(e.__c,e.change||[]),delete e.__c)}(n),Be(n,o,Uo,Bo,$o,t.context),Mo=void 0}}var Yo,Ko={create:Vo,update:Vo,destroy:function(e){return Vo(e,So)}};function Qo(e,t){if(!a(e.data.domProps)||!a(t.data.domProps)){var n,o,i=t.elm,c=e.data.domProps||{},l=t.data.domProps||{};for(n in(r(l.__ob__)||s(l._v_attr_proxy))&&(l=t.data.domProps=q({},l)),c)n in l||(i[n]="");for(n in l){if(o=l[n],"textContent"===n||"innerHTML"===n){if(t.children&&(t.children.length=0),o===c[n])continue;1===i.childNodes.length&&i.removeChild(i.childNodes[0])}if("value"===n&&"PROGRESS"!==i.tagName){i._value=o;var d=a(o)?"":String(o);Xo(i,d)&&(i.value=d)}else if("innerHTML"===n&&go(i.tagName)&&a(i.innerHTML)){(Yo=Yo||document.createElement("div")).innerHTML="".concat(o,"");for(var 
u=Yo.firstChild;i.firstChild;)i.removeChild(i.firstChild);for(;u.firstChild;)i.appendChild(u.firstChild)}else if(o!==c[n])try{i[n]=o}catch(e){}}}}function Xo(e,t){return!e.composing&&("OPTION"===e.tagName||function(e,t){var n=!0;try{n=document.activeElement!==e}catch(e){}return n&&e.value!==t}(e,t)||function(e,t){var n=e.value,o=e._vModifiers;if(r(o)){if(o.number)return y(n)!==y(t);if(o.trim)return n.trim()!==t.trim()}return n!==t}(e,t))}var Jo={create:Qo,update:Qo},Zo=T((function(e){var t={},n=/:(.+)/;return e.split(/;(?![^(]*\))/g).forEach((function(e){if(e){var o=e.split(n);o.length>1&&(t[o[0].trim()]=o[1].trim())}})),t}));function ei(e){var t=ti(e.style);return e.staticStyle?q(e.staticStyle,t):t}function ti(e){return Array.isArray(e)?D(e):"string"==typeof e?Zo(e):e}var ni,oi=/^--/,ii=/\s*!important$/,ai=function(e,t,n){if(oi.test(t))e.style.setProperty(t,n);else if(ii.test(n))e.style.setProperty(E(t),n.replace(ii,""),"important");else{var o=si(t);if(Array.isArray(n))for(var i=0,a=n.length;i-1?t.split(di).forEach((function(t){return e.classList.add(t)})):e.classList.add(t);else{var n=" ".concat(e.getAttribute("class")||""," ");n.indexOf(" "+t+" ")<0&&e.setAttribute("class",(n+t).trim())}}function hi(e,t){if(t&&(t=t.trim()))if(e.classList)t.indexOf(" ")>-1?t.split(di).forEach((function(t){return e.classList.remove(t)})):e.classList.remove(t),e.classList.length||e.removeAttribute("class");else{for(var n=" ".concat(e.getAttribute("class")||""," "),o=" "+t+" ";n.indexOf(o)>=0;)n=n.replace(o," ");(n=n.trim())?e.setAttribute("class",n):e.removeAttribute("class")}}function pi(e){if(e){if("object"==typeof e){var t={};return!1!==e.css&&q(t,mi(e.name||"v")),q(t,e),t}return"string"==typeof e?mi(e):void 0}}var mi=T((function(e){return{enterClass:"".concat(e,"-enter"),enterToClass:"".concat(e,"-enter-to"),enterActiveClass:"".concat(e,"-enter-active"),leaveClass:"".concat(e,"-leave"),leaveToClass:"".concat(e,"-leave-to"),leaveActiveClass:"".concat(e,"-leave-active")}})),fi=K&&!J,wi="transition",gi="transitionend",yi="animation",vi="animationend";fi&&(void 0===window.ontransitionend&&void 0!==window.onwebkittransitionend&&(wi="WebkitTransition",gi="webkitTransitionEnd"),void 0===window.onanimationend&&void 0!==window.onwebkitanimationend&&(yi="WebkitAnimation",vi="webkitAnimationEnd"));var bi=K?window.requestAnimationFrame?window.requestAnimationFrame.bind(window):setTimeout:function(e){return e()};function ki(e){bi((function(){bi(e)}))}function xi(e,t){var n=e._transitionClasses||(e._transitionClasses=[]);n.indexOf(t)<0&&(n.push(t),ui(e,t))}function _i(e,t){e._transitionClasses&&k(e._transitionClasses,t),hi(e,t)}function Ti(e,t,n){var o=Ci(e,t),i=o.type,a=o.timeout,r=o.propCount;if(!i)return n();var s="transition"===i?gi:vi,c=0,l=function(){e.removeEventListener(s,d),n()},d=function(t){t.target===e&&++c>=r&&l()};setTimeout((function(){c0&&(n="transition",d=r,u=a.length):"animation"===t?l>0&&(n="animation",d=l,u=c.length):u=(n=(d=Math.max(r,l))>0?r>l?"transition":"animation":null)?"transition"===n?a.length:c.length:0,{type:n,timeout:d,propCount:u,hasTransform:"transition"===n&&Si.test(o[wi+"Property"])}}function Ii(e,t){for(;e.length1}function Di(e,t){!0!==t.data.show&&Ei(t)}var Oi=function(e){var 
t,n,o={},l=e.modules,d=e.nodeOps;for(t=0;tm?b(e,a(n[g+1])?null:n[g+1].elm,n,p,g,o):p>g&&x(t,u,m)}(u,f,g,n,l):r(g)?(r(e.text)&&d.setTextContent(u,""),b(u,null,g,0,g.length-1,n)):r(f)?x(f,0,f.length-1):r(e.text)&&d.setTextContent(u,""):e.text!==t.text&&d.setTextContent(u,t.text),r(m)&&r(p=m.hook)&&r(p=p.postpatch)&&p(e,t)}}}function C(e,t,n){if(s(n)&&r(e.parent))e.parent.data.pendingInsert=t;else for(var o=0;o-1,r.selected!==a&&(r.selected=a);else if(R(Li(r),o))return void(e.selectedIndex!==s&&(e.selectedIndex=s));i||(e.selectedIndex=-1)}}function zi(e,t){return t.every((function(t){return!R(t,e)}))}function Li(e){return"_value"in e?e._value:e.value}function Fi(e){e.target.composing=!0}function Mi(e){e.target.composing&&(e.target.composing=!1,Hi(e.target,"input"))}function Hi(e,t){var n=document.createEvent("HTMLEvents");n.initEvent(t,!0,!0),e.dispatchEvent(n)}function $i(e){return!e.componentInstance||e.data&&e.data.transition?e:$i(e.componentInstance._vnode)}var Gi={model:ji,show:{bind:function(e,t,n){var o=t.value,i=(n=$i(n)).data&&n.data.transition,a=e.__vOriginalDisplay="none"===e.style.display?"":e.style.display;o&&i?(n.data.show=!0,Ei(n,(function(){e.style.display=a}))):e.style.display=o?a:"none"},update:function(e,t,n){var o=t.value;!o!=!t.oldValue&&((n=$i(n)).data&&n.data.transition?(n.data.show=!0,o?Ei(n,(function(){e.style.display=e.__vOriginalDisplay})):Pi(n,(function(){e.style.display="none"}))):e.style.display=o?e.__vOriginalDisplay:"none")},unbind:function(e,t,n,o,i){i||(e.style.display=e.__vOriginalDisplay)}}},Ui={name:String,appear:Boolean,css:Boolean,mode:String,type:String,enterClass:String,leaveClass:String,enterToClass:String,leaveToClass:String,enterActiveClass:String,leaveActiveClass:String,appearClass:String,appearActiveClass:String,appearToClass:String,duration:[Number,String,Object]};function Bi(e){var t=e&&e.componentOptions;return t&&t.Ctor.options.abstract?Bi(Tt(t.children)):e}function Vi(e){var t={},n=e.$options;for(var o in n.propsData)t[o]=e[o];var i=n._parentListeners;for(var o in i)t[C(o)]=i[o];return t}function Yi(e,t){if(/\d-keep-alive$/.test(t.tag))return e("keep-alive",{props:t.componentOptions.propsData})}var Ki=function(e){return e.tag||mt(e)},Qi=function(e){return"show"===e.name},Xi={name:"transition",props:Ui,abstract:!0,render:function(e){var t=this,n=this.$slots.default;if(n&&(n=n.filter(Ki)).length){0;var o=this.mode;0;var i=n[0];if(function(e){for(;e=e.parent;)if(e.data.transition)return!0}(this.$vnode))return i;var a=Bi(i);if(!a)return i;if(this._leaving)return Yi(e,i);var r="__transition-".concat(this._uid,"-");a.key=null==a.key?a.isComment?r+"comment":r+a.tag:c(a.key)?0===String(a.key).indexOf(r)?a.key:r+a.key:a.key;var s=(a.data||(a.data={})).transition=Vi(this),l=this._vnode,d=Bi(l);if(a.data.directives&&a.data.directives.some(Qi)&&(a.data.show=!0),d&&d.data&&!function(e,t){return t.key===e.key&&t.tag===e.tag}(a,d)&&!mt(d)&&(!d.componentInstance||!d.componentInstance._vnode.isComment)){var u=d.data.transition=q({},s);if("out-in"===o)return this._leaving=!0,Ve(u,"afterLeave",(function(){t._leaving=!1,t.$forceUpdate()})),Yi(e,i);if("in-out"===o){if(mt(a))return l;var h,p=function(){h()};Ve(s,"afterEnter",p),Ve(s,"enterCancelled",p),Ve(u,"delayLeave",(function(e){h=e}))}}return i}}},Ji=q({tag:String,moveClass:String},Ui);function Zi(e){e.elm._moveCb&&e.elm._moveCb(),e.elm._enterCb&&e.elm._enterCb()}function ea(e){e.data.newPos=e.elm.getBoundingClientRect()}function ta(e){var 
t=e.data.pos,n=e.data.newPos,o=t.left-n.left,i=t.top-n.top;if(o||i){e.data.moved=!0;var a=e.elm.style;a.transform=a.WebkitTransform="translate(".concat(o,"px,").concat(i,"px)"),a.transitionDuration="0s"}}delete Ji.mode;var na={Transition:Xi,TransitionGroup:{props:Ji,beforeMount:function(){var e=this,t=this._update;this._update=function(n,o){var i=Xt(e);e.__patch__(e._vnode,e.kept,!1,!0),e._vnode=e.kept,i(),t.call(e,n,o)}},render:function(e){for(var t=this.tag||this.$vnode.data.tag||"span",n=Object.create(null),o=this.prevChildren=this.children,i=this.$slots.default||[],a=this.children=[],r=Vi(this),s=0;s-1?vo[e]=t.constructor===window.HTMLUnknownElement||t.constructor===window.HTMLElement:vo[e]=/HTMLUnknownElement/.test(t.toString())},q(Vn.options.directives,Gi),q(Vn.options.components,na),Vn.prototype.__patch__=K?Oi:O,Vn.prototype.$mount=function(e,t){return function(e,t,n){var o;e.$el=t,e.$options.render||(e.$options.render=me),en(e,"beforeMount"),o=function(){e._update(e._render(),n)},new Ut(e,o,O,{before:function(){e._isMounted&&!e._isDestroyed&&en(e,"beforeUpdate")}},!0),n=!1;var i=e._preWatchers;if(i)for(var a=0;a=0&&(t=e.slice(o),e=e.slice(0,o));var i=e.indexOf("?");return i>=0&&(n=e.slice(i+1),e=e.slice(0,i)),{path:e,query:n,hash:t}}(i.path||""),l=t&&t.path||"/",d=c.path?_a(c.path,l,n||i.append):l,u=function(e,t,n){void 0===t&&(t={});var o,i=n||da;try{o=i(e||"")}catch(e){o={}}for(var a in t){var r=t[a];o[a]=Array.isArray(r)?r.map(la):la(r)}return o}(c.query,i.query,o&&o.options.parseQuery),h=i.hash||c.hash;return h&&"#"!==h.charAt(0)&&(h="#"+h),{_normalized:!0,path:d,query:u,hash:h}}var Ga,Ua=function(){},Ba={name:"RouterLink",props:{to:{type:[String,Object],required:!0},tag:{type:String,default:"a"},custom:Boolean,exact:Boolean,exactPath:Boolean,append:Boolean,replace:Boolean,activeClass:String,exactActiveClass:String,ariaCurrentValue:{type:String,default:"page"},event:{type:[String,Array],default:"click"}},render:function(e){var t=this,n=this.$router,o=this.$route,i=n.resolve(this.to,o,this.append),a=i.location,r=i.route,s=i.href,c={},l=n.options.linkActiveClass,d=n.options.linkExactActiveClass,u=null==l?"router-link-active":l,h=null==d?"router-link-exact-active":d,p=null==this.activeClass?u:this.activeClass,m=null==this.exactActiveClass?h:this.exactActiveClass,f=r.redirectedFrom?pa(null,$a(r.redirectedFrom),null,n):r;c[m]=ya(o,f,this.exactPath),c[p]=this.exact||this.exactPath?c[m]:function(e,t){return 0===e.path.replace(ha,"/").indexOf(t.path.replace(ha,"/"))&&(!t.hash||e.hash===t.hash)&&function(e,t){for(var n in t)if(!(n in e))return!1;return!0}(e.query,t.query)}(o,f);var w=c[m]?this.ariaCurrentValue:null,g=function(e){Va(e)&&(t.replace?n.replace(a,Ua):n.push(a,Ua))},y={click:Va};Array.isArray(this.event)?this.event.forEach((function(e){y[e]=g})):y[this.event]=g;var v={class:c},b=!this.$scopedSlots.$hasNormal&&this.$scopedSlots.default&&this.$scopedSlots.default({href:s,route:r,navigate:g,isActive:c[p],isExactActive:c[m]});if(b){if(1===b.length)return b[0];if(b.length>1||!b.length)return 0===b.length?e():e("span",{},b)}if("a"===this.tag)v.on=y,v.attrs={href:s,"aria-current":w};else{var k=function e(t){var n;if(t)for(var o=0;o-1&&(s.params[h]=n.params[h]);return s.path=Ha(d.path,s.params),c(d,s,r)}if(s.path){s.params={};for(var p=0;p-1}function Tr(e,t){return _r(e)&&e._isRouter&&(null==t||e.type===t)}function Sr(e,t,n){var o=function(i){i>=e.length?n():e[i]?t(e[i],(function(){o(i+1)})):o(i+1)};o(0)}function Cr(e){return function(t,n,o){var 
i=!1,a=0,r=null;Ir(e,(function(e,t,n,s){if("function"==typeof e&&void 0===e.cid){i=!0,a++;var c,l=Pr((function(t){var i;((i=t).__esModule||Er&&"Module"===i[Symbol.toStringTag])&&(t=t.default),e.resolved="function"==typeof t?t:Ga.extend(t),n.components[s]=t,--a<=0&&o()})),d=Pr((function(e){var t="Failed to resolve async component "+s+": "+e;r||(r=_r(e)?e:new Error(t),o(r))}));try{c=e(l,d)}catch(e){d(e)}if(c)if("function"==typeof c.then)c.then(l,d);else{var u=c.component;u&&"function"==typeof u.then&&u.then(l,d)}}})),i||o()}}function Ir(e,t){return Ar(e.map((function(e){return Object.keys(e.components).map((function(n){return t(e.components[n],e.instances[n],e,n)}))})))}function Ar(e){return Array.prototype.concat.apply([],e)}var Er="function"==typeof Symbol&&"symbol"==typeof Symbol.toStringTag;function Pr(e){var t=!1;return function(){for(var n=[],o=arguments.length;o--;)n[o]=arguments[o];if(!t)return t=!0,e.apply(this,n)}}var Wr=function(e,t){this.router=e,this.base=function(e){if(!e)if(Ya){var t=document.querySelector("base");e=(e=t&&t.getAttribute("href")||"/").replace(/^https?:\/\/[^\/]+/,"")}else e="/";"/"!==e.charAt(0)&&(e="/"+e);return e.replace(/\/$/,"")}(t),this.current=fa,this.pending=null,this.ready=!1,this.readyCbs=[],this.readyErrorCbs=[],this.errorCbs=[],this.listeners=[]};function qr(e,t,n,o){var i=Ir(e,(function(e,o,i,a){var r=function(e,t){"function"!=typeof e&&(e=Ga.extend(e));return e.options[t]}(e,t);if(r)return Array.isArray(r)?r.map((function(e){return n(e,o,i,a)})):n(r,o,i,a)}));return Ar(o?i.reverse():i)}function Dr(e,t){if(t)return function(){return e.apply(t,arguments)}}Wr.prototype.listen=function(e){this.cb=e},Wr.prototype.onReady=function(e,t){this.ready?e():(this.readyCbs.push(e),t&&this.readyErrorCbs.push(t))},Wr.prototype.onError=function(e){this.errorCbs.push(e)},Wr.prototype.transitionTo=function(e,t,n){var o,i=this;try{o=this.router.match(e,this.current)}catch(e){throw this.errorCbs.forEach((function(t){t(e)})),e}var a=this.current;this.confirmTransition(o,(function(){i.updateRoute(o),t&&t(o),i.ensureURL(),i.router.afterHooks.forEach((function(e){e&&e(o,a)})),i.ready||(i.ready=!0,i.readyCbs.forEach((function(e){e(o)})))}),(function(e){n&&n(e),e&&!i.ready&&(Tr(e,yr.redirected)&&a===fa||(i.ready=!0,i.readyErrorCbs.forEach((function(t){t(e)}))))}))},Wr.prototype.confirmTransition=function(e,t,n){var o=this,i=this.current;this.pending=e;var a,r,s=function(e){!Tr(e)&&_r(e)&&(o.errorCbs.length?o.errorCbs.forEach((function(t){t(e)})):console.error(e)),n&&n(e)},c=e.matched.length-1,l=i.matched.length-1;if(ya(e,i)&&c===l&&e.matched[c]===i.matched[l])return this.ensureURL(),e.hash&&rr(this.router,i,e,!1),s(((r=kr(a=i,e,yr.duplicated,'Avoided redundant navigation to current location: "'+a.fullPath+'".')).name="NavigationDuplicated",r));var d=function(e,t){var n,o=Math.max(e.length,t.length);for(n=0;n0)){var t=this.router,n=t.options.scrollBehavior,o=fr&&n;o&&this.listeners.push(ar());var i=function(){var n=e.current,i=jr(e.base);e.current===fa&&i===e._startLocation||e.transitionTo(i,(function(e){o&&rr(t,e,n,!0)}))};window.addEventListener("popstate",i),this.listeners.push((function(){window.removeEventListener("popstate",i)}))}},t.prototype.go=function(e){window.history.go(e)},t.prototype.push=function(e,t,n){var o=this,i=this.current;this.transitionTo(e,(function(e){wr(Ta(o.base+e.fullPath)),rr(o.router,e,i,!1),t&&t(e)}),n)},t.prototype.replace=function(e,t,n){var 
o=this,i=this.current;this.transitionTo(e,(function(e){gr(Ta(o.base+e.fullPath)),rr(o.router,e,i,!1),t&&t(e)}),n)},t.prototype.ensureURL=function(e){if(jr(this.base)!==this.current.fullPath){var t=Ta(this.base+this.current.fullPath);e?wr(t):gr(t)}},t.prototype.getCurrentLocation=function(){return jr(this.base)},t}(Wr);function jr(e){var t=window.location.pathname,n=t.toLowerCase(),o=e.toLowerCase();return!e||n!==o&&0!==n.indexOf(Ta(o+"/"))||(t=t.slice(e.length)),(t||"/")+window.location.search+window.location.hash}var Nr=function(e){function t(t,n,o){e.call(this,t,n),o&&function(e){var t=jr(e);if(!/^\/#/.test(t))return window.location.replace(Ta(e+"/#"+t)),!0}(this.base)||Rr()}return e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t,t.prototype.setupListeners=function(){var e=this;if(!(this.listeners.length>0)){var t=this.router.options.scrollBehavior,n=fr&&t;n&&this.listeners.push(ar());var o=function(){var t=e.current;Rr()&&e.transitionTo(zr(),(function(o){n&&rr(e.router,o,t,!0),fr||Mr(o.fullPath)}))},i=fr?"popstate":"hashchange";window.addEventListener(i,o),this.listeners.push((function(){window.removeEventListener(i,o)}))}},t.prototype.push=function(e,t,n){var o=this,i=this.current;this.transitionTo(e,(function(e){Fr(e.fullPath),rr(o.router,e,i,!1),t&&t(e)}),n)},t.prototype.replace=function(e,t,n){var o=this,i=this.current;this.transitionTo(e,(function(e){Mr(e.fullPath),rr(o.router,e,i,!1),t&&t(e)}),n)},t.prototype.go=function(e){window.history.go(e)},t.prototype.ensureURL=function(e){var t=this.current.fullPath;zr()!==t&&(e?Fr(t):Mr(t))},t.prototype.getCurrentLocation=function(){return zr()},t}(Wr);function Rr(){var e=zr();return"/"===e.charAt(0)||(Mr("/"+e),!1)}function zr(){var e=window.location.href,t=e.indexOf("#");return t<0?"":e=e.slice(t+1)}function Lr(e){var t=window.location.href,n=t.indexOf("#");return(n>=0?t.slice(0,n):t)+"#"+e}function Fr(e){fr?wr(Lr(e)):window.location.hash=e}function Mr(e){fr?gr(Lr(e)):window.location.replace(Lr(e))}var Hr=function(e){function t(t,n){e.call(this,t,n),this.stack=[],this.index=-1}return e&&(t.__proto__=e),t.prototype=Object.create(e&&e.prototype),t.prototype.constructor=t,t.prototype.push=function(e,t,n){var o=this;this.transitionTo(e,(function(e){o.stack=o.stack.slice(0,o.index+1).concat(e),o.index++,t&&t(e)}),n)},t.prototype.replace=function(e,t,n){var o=this;this.transitionTo(e,(function(e){o.stack=o.stack.slice(0,o.index).concat(e),t&&t(e)}),n)},t.prototype.go=function(e){var t=this,n=this.index+e;if(!(n<0||n>=this.stack.length)){var o=this.stack[n];this.confirmTransition(o,(function(){var e=t.current;t.index=n,t.updateRoute(o),t.router.afterHooks.forEach((function(t){t&&t(o,e)}))}),(function(e){Tr(e,yr.duplicated)&&(t.index=n)}))}},t.prototype.getCurrentLocation=function(){var e=this.stack[this.stack.length-1];return e?e.fullPath:"/"},t.prototype.ensureURL=function(){},t}(Wr),$r=function(e){void 0===e&&(e={}),this.app=null,this.apps=[],this.options=e,this.beforeHooks=[],this.resolveHooks=[],this.afterHooks=[],this.matcher=Xa(e.routes||[],this);var t=e.mode||"hash";switch(this.fallback="history"===t&&!fr&&!1!==e.fallback,this.fallback&&(t="hash"),Ya||(t="abstract"),this.mode=t,t){case"history":this.history=new Or(this,e.base);break;case"hash":this.history=new Nr(this,e.base,this.fallback);break;case"abstract":this.history=new Hr(this,e.base);break;default:0}},Gr={currentRoute:{configurable:!0}};$r.prototype.match=function(e,t,n){return this.matcher.match(e,t,n)},Gr.currentRoute.get=function(){return 
this.history&&this.history.current},$r.prototype.init=function(e){var t=this;if(this.apps.push(e),e.$once("hook:destroyed",(function(){var n=t.apps.indexOf(e);n>-1&&t.apps.splice(n,1),t.app===e&&(t.app=t.apps[0]||null),t.app||t.history.teardown()})),!this.app){this.app=e;var n=this.history;if(n instanceof Or||n instanceof Nr){var o=function(e){n.setupListeners(),function(e){var o=n.current,i=t.options.scrollBehavior;fr&&i&&"fullPath"in e&&rr(t,e,o,!1)}(e)};n.transitionTo(n.getCurrentLocation(),o,o)}n.listen((function(e){t.apps.forEach((function(t){t._route=e}))}))}},$r.prototype.beforeEach=function(e){return Br(this.beforeHooks,e)},$r.prototype.beforeResolve=function(e){return Br(this.resolveHooks,e)},$r.prototype.afterEach=function(e){return Br(this.afterHooks,e)},$r.prototype.onReady=function(e,t){this.history.onReady(e,t)},$r.prototype.onError=function(e){this.history.onError(e)},$r.prototype.push=function(e,t,n){var o=this;if(!t&&!n&&"undefined"!=typeof Promise)return new Promise((function(t,n){o.history.push(e,t,n)}));this.history.push(e,t,n)},$r.prototype.replace=function(e,t,n){var o=this;if(!t&&!n&&"undefined"!=typeof Promise)return new Promise((function(t,n){o.history.replace(e,t,n)}));this.history.replace(e,t,n)},$r.prototype.go=function(e){this.history.go(e)},$r.prototype.back=function(){this.go(-1)},$r.prototype.forward=function(){this.go(1)},$r.prototype.getMatchedComponents=function(e){var t=e?e.matched?e:this.resolve(e).route:this.currentRoute;return t?[].concat.apply([],t.matched.map((function(e){return Object.keys(e.components).map((function(t){return e.components[t]}))}))):[]},$r.prototype.resolve=function(e,t,n){var o=$a(e,t=t||this.history.current,n,this),i=this.match(o,t),a=i.redirectedFrom||i.fullPath;return{location:o,route:i,href:function(e,t,n){var o="hash"===n?"#"+t:t;return e?Ta(e+"/"+o):o}(this.history.base,a,this.mode),normalizedTo:o,resolved:i}},$r.prototype.getRoutes=function(){return this.matcher.getRoutes()},$r.prototype.addRoute=function(e,t){this.matcher.addRoute(e,t),this.history.current!==fa&&this.history.transitionTo(this.history.getCurrentLocation())},$r.prototype.addRoutes=function(e){this.matcher.addRoutes(e),this.history.current!==fa&&this.history.transitionTo(this.history.getCurrentLocation())},Object.defineProperties($r.prototype,Gr);var Ur=$r;function Br(e,t){return e.push(t),function(){var n=e.indexOf(t);n>-1&&e.splice(n,1)}}$r.install=function e(t){if(!e.installed||Ga!==t){e.installed=!0,Ga=t;var n=function(e){return void 0!==e},o=function(e,t){var o=e.$options._parentVnode;n(o)&&n(o=o.data)&&n(o=o.registerRouteInstance)&&o(e,t)};t.mixin({beforeCreate:function(){n(this.$options.router)?(this._routerRoot=this,this._router=this.$options.router,this._router.init(this),t.util.defineReactive(this,"_route",this._router.history.current)):this._routerRoot=this.$parent&&this.$parent._routerRoot||this,o(this,this)},destroyed:function(){o(this)}}),Object.defineProperty(t.prototype,"$router",{get:function(){return this._routerRoot._router}}),Object.defineProperty(t.prototype,"$route",{get:function(){return this._routerRoot._route}}),t.component("RouterView",ka),t.component("RouterLink",Ba);var i=t.config.optionMergeStrategies;i.beforeRouteEnter=i.beforeRouteLeave=i.beforeRouteUpdate=i.created}},$r.version="3.6.5",$r.isNavigationFailure=Tr,$r.NavigationFailureType=yr,$r.START_LOCATION=fa,Ya&&window.Vue&&window.Vue.use($r);n(106);n(104),n(94);var 
Vr={"components/AlgoliaSearchBox":()=>Promise.all([n.e(0),n.e(13)]).then(n.bind(null,323)),"components/DropdownLink":()=>Promise.all([n.e(0),n.e(14)]).then(n.bind(null,265)),"components/DropdownTransition":()=>Promise.all([n.e(0),n.e(19)]).then(n.bind(null,253)),"components/Home":()=>Promise.all([n.e(0),n.e(16)]).then(n.bind(null,295)),"components/NavLink":()=>n.e(22).then(n.bind(null,252)),"components/NavLinks":()=>Promise.all([n.e(0),n.e(12)]).then(n.bind(null,277)),"components/Navbar":()=>Promise.all([n.e(0),n.e(1)]).then(n.bind(null,320)),"components/Page":()=>Promise.all([n.e(0),n.e(11)]).then(n.bind(null,296)),"components/PageEdit":()=>Promise.all([n.e(0),n.e(17)]).then(n.bind(null,279)),"components/PageNav":()=>Promise.all([n.e(0),n.e(15)]).then(n.bind(null,280)),"components/Sidebar":()=>Promise.all([n.e(0),n.e(10)]).then(n.bind(null,297)),"components/SidebarButton":()=>Promise.all([n.e(0),n.e(20)]).then(n.bind(null,298)),"components/SidebarGroup":()=>Promise.all([n.e(0),n.e(3)]).then(n.bind(null,278)),"components/SidebarLink":()=>Promise.all([n.e(0),n.e(18)]).then(n.bind(null,266)),"components/SidebarLinks":()=>Promise.all([n.e(0),n.e(3)]).then(n.bind(null,264)),"global-components/Badge":()=>Promise.all([n.e(0),n.e(4)]).then(n.bind(null,330)),"global-components/CodeBlock":()=>Promise.all([n.e(0),n.e(5)]).then(n.bind(null,324)),"global-components/CodeGroup":()=>Promise.all([n.e(0),n.e(6)]).then(n.bind(null,325)),"layouts/404":()=>n.e(7).then(n.bind(null,326)),"layouts/Layout":()=>Promise.all([n.e(0),n.e(1),n.e(2)]).then(n.bind(null,327)),NotFound:()=>n.e(7).then(n.bind(null,326)),Layout:()=>Promise.all([n.e(0),n.e(1),n.e(2)]).then(n.bind(null,327))},Yr={"v-9cd9f09c":()=>n.e(26).then(n.bind(null,331)),"v-40447742":()=>n.e(28).then(n.bind(null,332)),"v-5261e03c":()=>n.e(21).then(n.bind(null,333)),"v-4bb753c4":()=>n.e(27).then(n.bind(null,334)),"v-696d6f80":()=>n.e(29).then(n.bind(null,335)),"v-5ab4294a":()=>n.e(30).then(n.bind(null,336)),"v-c2f362bc":()=>n.e(31).then(n.bind(null,337)),"v-d5dcd2a0":()=>n.e(32).then(n.bind(null,338)),"v-7a5c92a2":()=>n.e(34).then(n.bind(null,339)),"v-88def7ac":()=>n.e(33).then(n.bind(null,340)),"v-1bc7fd02":()=>n.e(35).then(n.bind(null,341)),"v-a14b6054":()=>n.e(36).then(n.bind(null,342)),"v-c99e5abc":()=>n.e(38).then(n.bind(null,343)),"v-28bf3ec2":()=>n.e(37).then(n.bind(null,344)),"v-36ed9422":()=>n.e(39).then(n.bind(null,345)),"v-6b66fa18":()=>n.e(40).then(n.bind(null,346)),"v-611b8c3c":()=>n.e(41).then(n.bind(null,347)),"v-13d0c1ca":()=>n.e(43).then(n.bind(null,348)),"v-163bae3c":()=>n.e(42).then(n.bind(null,349)),"v-8d905b7c":()=>n.e(44).then(n.bind(null,350)),"v-e240404c":()=>n.e(45).then(n.bind(null,351)),"v-2d8e6278":()=>n.e(46).then(n.bind(null,352)),"v-7b43cf3c":()=>n.e(47).then(n.bind(null,353)),"v-eec246bc":()=>n.e(50).then(n.bind(null,354)),"v-1c104a48":()=>n.e(48).then(n.bind(null,355)),"v-5d616cea":()=>n.e(51).then(n.bind(null,356)),"v-3c665d38":()=>n.e(52).then(n.bind(null,357)),"v-78a9ec22":()=>n.e(49).then(n.bind(null,358)),"v-c2670478":()=>n.e(53).then(n.bind(null,359)),"v-2f3b4398":()=>n.e(55).then(n.bind(null,360)),"v-347319df":()=>n.e(54).then(n.bind(null,361)),"v-44a96002":()=>n.e(56).then(n.bind(null,362)),"v-73f5d8c2":()=>n.e(57).then(n.bind(null,363)),"v-7106a8e2":()=>n.e(58).then(n.bind(null,364)),"v-4af1f23c":()=>n.e(59).then(n.bind(null,365)),"v-b64a802c":()=>n.e(60).then(n.bind(null,366)),"v-423a333c":()=>n.e(62).then(n.bind(null,367)),"v-65cef250":()=>n.e(64).then(n.bind(null,368)),"v-3c541bc2":()=>n.e(61).then(n.bind(null,
369)),"v-2ef7ad44":()=>n.e(66).then(n.bind(null,370)),"v-47e211a0":()=>n.e(65).then(n.bind(null,371)),"v-272408a2":()=>n.e(67).then(n.bind(null,372)),"v-d965e2bc":()=>n.e(68).then(n.bind(null,373)),"v-47638d30":()=>n.e(63).then(n.bind(null,374)),"v-68ae0de4":()=>n.e(69).then(n.bind(null,375)),"v-7a33750a":()=>n.e(72).then(n.bind(null,376)),"v-56629f80":()=>n.e(71).then(n.bind(null,377)),"v-53d65f58":()=>n.e(70).then(n.bind(null,378)),"v-c1687e0a":()=>n.e(73).then(n.bind(null,379)),"v-861efabc":()=>n.e(75).then(n.bind(null,380)),"v-e5936714":()=>n.e(74).then(n.bind(null,381)),"v-43760982":()=>n.e(77).then(n.bind(null,382)),"v-caeda73c":()=>n.e(78).then(n.bind(null,383)),"v-76c4aa02":()=>n.e(76).then(n.bind(null,384)),"v-0327ca12":()=>n.e(79).then(n.bind(null,385)),"v-5fac5e6c":()=>n.e(80).then(n.bind(null,386)),"v-595589a2":()=>n.e(81).then(n.bind(null,387)),"v-7732347a":()=>n.e(83).then(n.bind(null,388)),"v-67f3ae7c":()=>n.e(82).then(n.bind(null,389)),"v-d0383dd4":()=>n.e(84).then(n.bind(null,390)),"v-a1460e54":()=>n.e(85).then(n.bind(null,391)),"v-0a1dd2ec":()=>n.e(86).then(n.bind(null,392)),"v-c8a8f07c":()=>n.e(87).then(n.bind(null,393)),"v-0b9844ac":()=>n.e(88).then(n.bind(null,394)),"v-edf882bc":()=>n.e(89).then(n.bind(null,395)),"v-35913a62":()=>n.e(90).then(n.bind(null,396)),"v-9d2716dc":()=>n.e(91).then(n.bind(null,397)),"v-d043b980":()=>n.e(92).then(n.bind(null,398)),"v-5df8103c":()=>n.e(24).then(n.bind(null,399)),"v-740be4db":()=>n.e(93).then(n.bind(null,400)),"v-6be5daf6":()=>n.e(95).then(n.bind(null,401)),"v-6fa6d57b":()=>n.e(94).then(n.bind(null,402)),"v-1a836dbc":()=>n.e(97).then(n.bind(null,403)),"v-c3677d3c":()=>n.e(96).then(n.bind(null,404)),"v-6f38e6b6":()=>n.e(98).then(n.bind(null,405)),"v-3569388c":()=>n.e(99).then(n.bind(null,406)),"v-fc381aca":()=>n.e(100).then(n.bind(null,407)),"v-3f3e4754":()=>n.e(101).then(n.bind(null,408)),"v-46aa6bb2":()=>n.e(102).then(n.bind(null,409)),"v-e574b140":()=>n.e(103).then(n.bind(null,410)),"v-7256933b":()=>n.e(105).then(n.bind(null,411)),"v-00de750a":()=>n.e(104).then(n.bind(null,412))};function Kr(e){const t=Object.create(null);return function(n){return t[n]||(t[n]=e(n))}}const Qr=/-(\w)/g,Xr=Kr(e=>e.replace(Qr,(e,t)=>t?t.toUpperCase():"")),Jr=/\B([A-Z])/g,Zr=Kr(e=>e.replace(Jr,"-$1").toLowerCase()),es=Kr(e=>e.charAt(0).toUpperCase()+e.slice(1));function ts(e,t){if(!t)return;if(e(t))return e(t);return t.includes("-")?e(es(Xr(t))):e(es(t))||e(Zr(t))}const ns=Object.assign({},Vr,Yr),os=e=>ns[e],is=e=>Yr[e],as=e=>Vr[e],rs=e=>Vn.component(e);function ss(e){return ts(is,e)}function cs(e){return ts(as,e)}function ls(e){return ts(os,e)}function ds(e){return ts(rs,e)}function us(...e){return Promise.all(e.filter(e=>e).map(async e=>{if(!ds(e)&&ls(e)){const t=await ls(e)();Vn.component(e,t.default)}}))}function hs(e,t){"undefined"!=typeof window&&window.__VUEPRESS__&&(window.__VUEPRESS__[e]=t)}var ps=n(92),ms=n.n(ps),fs=n(93),ws=n.n(fs),gs={created(){if(this.siteMeta=this.$site.headTags.filter(([e])=>"meta"===e).map(([e,t])=>t),this.$ssrContext){const t=this.getMergedMetaTags();this.$ssrContext.title=this.$title,this.$ssrContext.lang=this.$lang,this.$ssrContext.pageMeta=(e=t)?e.map(e=>{let t="{t+=` ${n}="${ws()(e[n])}"`}),t+">"}).join("\n "):"",this.$ssrContext.canonicalLink=vs(this.$canonicalUrl)}var e},mounted(){this.currentMetaTags=[...document.querySelectorAll("meta")],this.updateMeta(),this.updateCanonicalLink()},methods:{updateMeta(){document.title=this.$title,document.documentElement.lang=this.$lang;const 
e=this.getMergedMetaTags();this.currentMetaTags=bs(e,this.currentMetaTags)},getMergedMetaTags(){const e=this.$page.frontmatter.meta||[];return ms()([{name:"description",content:this.$description}],e,this.siteMeta,ks)},updateCanonicalLink(){ys(),this.$canonicalUrl&&document.head.insertAdjacentHTML("beforeend",vs(this.$canonicalUrl))}},watch:{$page(){this.updateMeta(),this.updateCanonicalLink()}},beforeDestroy(){bs(null,this.currentMetaTags),ys()}};function ys(){const e=document.querySelector("link[rel='canonical']");e&&e.remove()}function vs(e=""){return e?``:""}function bs(e,t){if(t&&[...t].filter(e=>e.parentNode===document.head).forEach(e=>document.head.removeChild(e)),e)return e.map(e=>{const t=document.createElement("meta");return Object.keys(e).forEach(n=>{t.setAttribute(n,e[n])}),document.head.appendChild(t),t})}function ks(e){for(const t of["name","property","itemprop"])if(e.hasOwnProperty(t))return e[t]+t;return JSON.stringify(e)}var xs=n(22),_s=n.n(xs),Ts={mounted(){window.addEventListener("scroll",this.onScroll)},methods:{onScroll:_s()((function(){this.setActiveHash()}),300),setActiveHash(){const e=[].slice.call(document.querySelectorAll(".sidebar-link")),t=[].slice.call(document.querySelectorAll(".header-anchor")).filter(t=>e.some(e=>e.hash===t.hash)),n=Math.max(window.pageYOffset,document.documentElement.scrollTop,document.body.scrollTop),o=Math.max(document.documentElement.scrollHeight,document.body.scrollHeight),i=window.innerHeight+n;for(let e=0;e=a.parentElement.offsetTop+10&&(!r||n{this.$nextTick(()=>{this.$vuepress.$set("disableScrollBehavior",!1)})})}}}},beforeDestroy(){window.removeEventListener("scroll",this.onScroll)}},Ss=n(23),Cs=n.n(Ss),Is={mounted(){Cs.a.configure({showSpinner:!1}),this.$router.beforeEach((e,t,n)=>{e.path===t.path||Vn.component(e.name)||Cs.a.start(),n()}),this.$router.afterEach(()=>{Cs.a.done(),this.isSidebarOpen=!1})}},As={props:{parent:Object,code:String,options:{align:String,color:String,backgroundTransition:Boolean,backgroundColor:String,successText:String,staticIcon:Boolean}},data:()=>({success:!1,originalBackground:null,originalTransition:null}),computed:{alignStyle(){let e={};return e[this.options.align]="7.5px",e},iconClass(){return this.options.staticIcon?"":"hover"}},mounted(){this.originalTransition=this.parent.style.transition,this.originalBackground=this.parent.style.background},beforeDestroy(){this.parent.style.transition=this.originalTransition,this.parent.style.background=this.originalBackground},methods:{hexToRgb(e){let t=/^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(e);return t?{r:parseInt(t[1],16),g:parseInt(t[2],16),b:parseInt(t[3],16)}:null},copyToClipboard(e){if(navigator.clipboard)navigator.clipboard.writeText(this.code).then(()=>{this.setSuccessTransitions()},()=>{});else{let e=document.createElement("textarea");document.body.appendChild(e),e.value=this.code,e.select(),document.execCommand("Copy"),e.remove(),this.setSuccessTransitions()}},setSuccessTransitions(){if(clearTimeout(this.successTimeout),this.options.backgroundTransition){this.parent.style.transition="background 350ms";let e=this.hexToRgb(this.options.backgroundColor);this.parent.style.background=`rgba(${e.r}, ${e.g}, ${e.b}, 0.1)`}this.success=!0,this.successTimeout=setTimeout(()=>{this.options.backgroundTransition&&(this.parent.style.background=this.originalBackground,this.parent.style.transition=this.originalTransition),this.success=!1},500)}}},Es=(n(239),n(0)),Ps=Object(Es.a)(As,(function(){var e=this,t=e._self._c;return 
t("div",{staticClass:"code-copy"},[t("svg",{class:e.iconClass,style:e.alignStyle,attrs:{xmlns:"http://www.w3.org/2000/svg",width:"24",height:"24",viewBox:"0 0 24 24"},on:{click:e.copyToClipboard}},[t("path",{attrs:{fill:"none",d:"M0 0h24v24H0z"}}),e._v(" "),t("path",{attrs:{fill:e.options.color,d:"M16 1H4c-1.1 0-2 .9-2 2v14h2V3h12V1zm-1 4l6 6v10c0 1.1-.9 2-2 2H7.99C6.89 23 6 22.1 6 21l.01-14c0-1.1.89-2 1.99-2h7zm-1 7h5.5L14 6.5V12z"}})]),e._v(" "),t("span",{class:e.success?"success":"",style:e.alignStyle},[e._v("\n "+e._s(e.options.successText)+"\n ")])])}),[],!1,null,"49140617",null).exports,Ws=(n(240),[gs,Ts,Is,{updated(){this.update()},methods:{update(){setTimeout(()=>{document.querySelectorAll('div[class*="language-"] pre').forEach(e=>{if(e.classList.contains("code-copy-added"))return;let t=new(Vn.extend(Ps));t.options={align:"bottom",color:"#27b1ff",backgroundTransition:!0,backgroundColor:"#0075b8",successText:"Copied!",staticIcon:!1},t.code=e.innerText,t.parent=e,t.$mount(),e.classList.add("code-copy-added"),e.appendChild(t.$el)})},100)}}}]),qs={name:"GlobalLayout",computed:{layout(){const e=this.getLayout();return hs("layout",e),Vn.component(e)}},methods:{getLayout(){if(this.$page.path){const e=this.$page.frontmatter.layout;return e&&(this.$vuepress.getLayoutAsyncComponent(e)||this.$vuepress.getVueComponent(e))?e:"Layout"}return"NotFound"}}},Ds=Object(Es.a)(qs,(function(){return(0,this._self._c)(this.layout,{tag:"component"})}),[],!1,null,null,null).exports;!function(e,t,n){switch(t){case"components":e[t]||(e[t]={}),Object.assign(e[t],n);break;case"mixins":e[t]||(e[t]=[]),e[t].push(...n);break;default:throw new Error("Unknown option name.")}}(Ds,"mixins",Ws);const Os=[{name:"v-9cd9f09c",path:"/GLOSSARY.html",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-9cd9f09c").then(n)}},{name:"v-40447742",path:"/docs/get-started/java-hello-world/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-40447742").then(n)}},{path:"/docs/get-started/java-hello-world/index.html",redirect:"/docs/get-started/java-hello-world/"},{path:"/docs/01-get-started/02-java-hello-world.html",redirect:"/docs/get-started/java-hello-world/"},{name:"v-5261e03c",path:"/docs/get-started/golang-hello-world/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-5261e03c").then(n)}},{path:"/docs/get-started/golang-hello-world/index.html",redirect:"/docs/get-started/golang-hello-world/"},{path:"/docs/01-get-started/03-golang-hello-world.html",redirect:"/docs/get-started/golang-hello-world/"},{name:"v-4bb753c4",path:"/docs/get-started/installation/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-4bb753c4").then(n)}},{path:"/docs/get-started/installation/index.html",redirect:"/docs/get-started/installation/"},{path:"/docs/01-get-started/01-server-installation.html",redirect:"/docs/get-started/installation/"},{name:"v-696d6f80",path:"/docs/get-started/video-tutorials/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-696d6f80").then(n)}},{path:"/docs/get-started/video-tutorials/index.html",redirect:"/docs/get-started/video-tutorials/"},{path:"/docs/01-get-started/04-video-tutorials.html",redirect:"/docs/get-started/video-tutorials/"},{name:"v-5ab4294a",path:"/docs/get-started/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-5ab4294a").then(n)}},{path:"/docs/get-started/index.html",redirect:"/docs/get-started/"},{path:"/docs/01-get-started/",redirect:"/docs/get-started/"},{name:"v-c2f362bc",path:"/docs/use-cases/periodic-execution/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-c2f362bc").then(n)}},{path:"/docs/u
se-cases/periodic-execution/index.html",redirect:"/docs/use-cases/periodic-execution/"},{path:"/docs/02-use-cases/01-periodic-execution.html",redirect:"/docs/use-cases/periodic-execution/"},{name:"v-d5dcd2a0",path:"/docs/use-cases/orchestration/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-d5dcd2a0").then(n)}},{path:"/docs/use-cases/orchestration/index.html",redirect:"/docs/use-cases/orchestration/"},{path:"/docs/02-use-cases/02-orchestration.html",redirect:"/docs/use-cases/orchestration/"},{name:"v-7a5c92a2",path:"/docs/use-cases/event-driven/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-7a5c92a2").then(n)}},{path:"/docs/use-cases/event-driven/index.html",redirect:"/docs/use-cases/event-driven/"},{path:"/docs/02-use-cases/04-event-driven.html",redirect:"/docs/use-cases/event-driven/"},{name:"v-88def7ac",path:"/docs/use-cases/polling/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-88def7ac").then(n)}},{path:"/docs/use-cases/polling/index.html",redirect:"/docs/use-cases/polling/"},{path:"/docs/02-use-cases/03-polling.html",redirect:"/docs/use-cases/polling/"},{name:"v-1bc7fd02",path:"/docs/use-cases/partitioned-scan/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-1bc7fd02").then(n)}},{path:"/docs/use-cases/partitioned-scan/index.html",redirect:"/docs/use-cases/partitioned-scan/"},{path:"/docs/02-use-cases/05-partitioned-scan.html",redirect:"/docs/use-cases/partitioned-scan/"},{name:"v-a14b6054",path:"/docs/use-cases/batch-job/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-a14b6054").then(n)}},{path:"/docs/use-cases/batch-job/index.html",redirect:"/docs/use-cases/batch-job/"},{path:"/docs/02-use-cases/06-batch-job.html",redirect:"/docs/use-cases/batch-job/"},{name:"v-c99e5abc",path:"/docs/use-cases/deployment/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-c99e5abc").then(n)}},{path:"/docs/use-cases/deployment/index.html",redirect:"/docs/use-cases/deployment/"},{path:"/docs/02-use-cases/08-deployment.html",redirect:"/docs/use-cases/deployment/"},{name:"v-28bf3ec2",path:"/docs/use-cases/provisioning/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-28bf3ec2").then(n)}},{path:"/docs/use-cases/provisioning/index.html",redirect:"/docs/use-cases/provisioning/"},{path:"/docs/02-use-cases/07-provisioning.html",redirect:"/docs/use-cases/provisioning/"},{name:"v-36ed9422",path:"/docs/use-cases/operational-management/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-36ed9422").then(n)}},{path:"/docs/use-cases/operational-management/index.html",redirect:"/docs/use-cases/operational-management/"},{path:"/docs/02-use-cases/09-operational-management.html",redirect:"/docs/use-cases/operational-management/"},{name:"v-6b66fa18",path:"/docs/use-cases/interactive/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-6b66fa18").then(n)}},{path:"/docs/use-cases/interactive/index.html",redirect:"/docs/use-cases/interactive/"},{path:"/docs/02-use-cases/10-interactive.html",redirect:"/docs/use-cases/interactive/"},{name:"v-611b8c3c",path:"/docs/use-cases/dsl/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-611b8c3c").then(n)}},{path:"/docs/use-cases/dsl/index.html",redirect:"/docs/use-cases/dsl/"},{path:"/docs/02-use-cases/11-dsl.html",redirect:"/docs/use-cases/dsl/"},{name:"v-13d0c1ca",path:"/docs/use-cases/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-13d0c1ca").then(n)}},{path:"/docs/use-cases/index.html",redirect:"/docs/use-cases/"},{path:"/docs/02-use-cases/",redirect:"/docs/use-cases/"},{name:"v-163bae3c",path:"/docs/use-cases/big-ml/",component:Ds,beforeEnter
:(e,t,n)=>{us("default","v-163bae3c").then(n)}},{path:"/docs/use-cases/big-ml/index.html",redirect:"/docs/use-cases/big-ml/"},{path:"/docs/02-use-cases/12-big-ml.html",redirect:"/docs/use-cases/big-ml/"},{name:"v-8d905b7c",path:"/docs/concepts/workflows/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-8d905b7c").then(n)}},{path:"/docs/concepts/workflows/index.html",redirect:"/docs/concepts/workflows/"},{path:"/docs/03-concepts/01-workflows.html",redirect:"/docs/concepts/workflows/"},{name:"v-e240404c",path:"/docs/concepts/activities/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-e240404c").then(n)}},{path:"/docs/concepts/activities/index.html",redirect:"/docs/concepts/activities/"},{path:"/docs/03-concepts/02-activities.html",redirect:"/docs/concepts/activities/"},{name:"v-2d8e6278",path:"/docs/concepts/events/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-2d8e6278").then(n)}},{path:"/docs/concepts/events/index.html",redirect:"/docs/concepts/events/"},{path:"/docs/03-concepts/03-events.html",redirect:"/docs/concepts/events/"},{name:"v-7b43cf3c",path:"/docs/concepts/queries/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-7b43cf3c").then(n)}},{path:"/docs/concepts/queries/index.html",redirect:"/docs/concepts/queries/"},{path:"/docs/03-concepts/04-queries.html",redirect:"/docs/concepts/queries/"},{name:"v-eec246bc",path:"/docs/concepts/archival/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-eec246bc").then(n)}},{path:"/docs/concepts/archival/index.html",redirect:"/docs/concepts/archival/"},{path:"/docs/03-concepts/07-archival.html",redirect:"/docs/concepts/archival/"},{name:"v-1c104a48",path:"/docs/concepts/topology/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-1c104a48").then(n)}},{path:"/docs/concepts/topology/index.html",redirect:"/docs/concepts/topology/"},{path:"/docs/03-concepts/05-topology.html",redirect:"/docs/concepts/topology/"},{name:"v-5d616cea",path:"/docs/concepts/cross-dc-replication/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-5d616cea").then(n)}},{path:"/docs/concepts/cross-dc-replication/index.html",redirect:"/docs/concepts/cross-dc-replication/"},{path:"/docs/03-concepts/08-cross-dc-replication.html",redirect:"/docs/concepts/cross-dc-replication/"},{name:"v-3c665d38",path:"/docs/concepts/search-workflows/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-3c665d38").then(n)}},{path:"/docs/concepts/search-workflows/index.html",redirect:"/docs/concepts/search-workflows/"},{path:"/docs/03-concepts/09-search-workflows.html",redirect:"/docs/concepts/search-workflows/"},{name:"v-78a9ec22",path:"/docs/concepts/task-lists/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-78a9ec22").then(n)}},{path:"/docs/concepts/task-lists/index.html",redirect:"/docs/concepts/task-lists/"},{path:"/docs/03-concepts/06-task-lists.html",redirect:"/docs/concepts/task-lists/"},{name:"v-c2670478",path:"/docs/concepts/http-api/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-c2670478").then(n)}},{path:"/docs/concepts/http-api/index.html",redirect:"/docs/concepts/http-api/"},{path:"/docs/03-concepts/10-http-api.html",redirect:"/docs/concepts/http-api/"},{name:"v-2f3b4398",path:"/docs/java-client/client-overview/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-2f3b4398").then(n)}},{path:"/docs/java-client/client-overview/index.html",redirect:"/docs/java-client/client-overview/"},{path:"/docs/04-java-client/01-client-overview.html",redirect:"/docs/java-client/client-overview/"},{name:"v-347319df",path:"/docs/concepts/",component:Ds,beforeEnter:(e,t,n)=>{u
s("default","v-347319df").then(n)}},{path:"/docs/concepts/index.html",redirect:"/docs/concepts/"},{path:"/docs/03-concepts/",redirect:"/docs/concepts/"},{name:"v-44a96002",path:"/docs/java-client/workflow-interface/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-44a96002").then(n)}},{path:"/docs/java-client/workflow-interface/index.html",redirect:"/docs/java-client/workflow-interface/"},{path:"/docs/04-java-client/02-workflow-interface.html",redirect:"/docs/java-client/workflow-interface/"},{name:"v-73f5d8c2",path:"/docs/java-client/implementing-workflows/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-73f5d8c2").then(n)}},{path:"/docs/java-client/implementing-workflows/index.html",redirect:"/docs/java-client/implementing-workflows/"},{path:"/docs/04-java-client/03-implementing-workflows.html",redirect:"/docs/java-client/implementing-workflows/"},{name:"v-7106a8e2",path:"/docs/java-client/starting-workflow-executions/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-7106a8e2").then(n)}},{path:"/docs/java-client/starting-workflow-executions/index.html",redirect:"/docs/java-client/starting-workflow-executions/"},{path:"/docs/04-java-client/04-starting-workflow-executions.html",redirect:"/docs/java-client/starting-workflow-executions/"},{name:"v-4af1f23c",path:"/docs/java-client/activity-interface/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-4af1f23c").then(n)}},{path:"/docs/java-client/activity-interface/index.html",redirect:"/docs/java-client/activity-interface/"},{path:"/docs/04-java-client/05-activity-interface.html",redirect:"/docs/java-client/activity-interface/"},{name:"v-b64a802c",path:"/docs/java-client/implementing-activities/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-b64a802c").then(n)}},{path:"/docs/java-client/implementing-activities/index.html",redirect:"/docs/java-client/implementing-activities/"},{path:"/docs/04-java-client/06-implementing-activities.html",redirect:"/docs/java-client/implementing-activities/"},{name:"v-423a333c",path:"/docs/java-client/distributed-cron/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-423a333c").then(n)}},{path:"/docs/java-client/distributed-cron/index.html",redirect:"/docs/java-client/distributed-cron/"},{path:"/docs/04-java-client/08-distributed-cron.html",redirect:"/docs/java-client/distributed-cron/"},{name:"v-65cef250",path:"/docs/java-client/signals/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-65cef250").then(n)}},{path:"/docs/java-client/signals/index.html",redirect:"/docs/java-client/signals/"},{path:"/docs/04-java-client/10-signals.html",redirect:"/docs/java-client/signals/"},{name:"v-3c541bc2",path:"/docs/java-client/versioning/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-3c541bc2").then(n)}},{path:"/docs/java-client/versioning/index.html",redirect:"/docs/java-client/versioning/"},{path:"/docs/04-java-client/07-versioning.html",redirect:"/docs/java-client/versioning/"},{name:"v-2ef7ad44",path:"/docs/java-client/retries/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-2ef7ad44").then(n)}},{path:"/docs/java-client/retries/index.html",redirect:"/docs/java-client/retries/"},{path:"/docs/04-java-client/12-retries.html",redirect:"/docs/java-client/retries/"},{name:"v-47e211a0",path:"/docs/java-client/queries/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-47e211a0").then(n)}},{path:"/docs/java-client/queries/index.html",redirect:"/docs/java-client/queries/"},{path:"/docs/04-java-client/11-queries.html",redirect:"/docs/java-client/queries/"},{name:"v-272408a2",path:"/docs/java-client
/child-workflows/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-272408a2").then(n)}},{path:"/docs/java-client/child-workflows/index.html",redirect:"/docs/java-client/child-workflows/"},{path:"/docs/04-java-client/13-child-workflows.html",redirect:"/docs/java-client/child-workflows/"},{name:"v-d965e2bc",path:"/docs/java-client/exception-handling/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-d965e2bc").then(n)}},{path:"/docs/java-client/exception-handling/index.html",redirect:"/docs/java-client/exception-handling/"},{path:"/docs/04-java-client/14-exception-handling.html",redirect:"/docs/java-client/exception-handling/"},{name:"v-47638d30",path:"/docs/java-client/workers/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-47638d30").then(n)}},{path:"/docs/java-client/workers/index.html",redirect:"/docs/java-client/workers/"},{path:"/docs/04-java-client/09-workers.html",redirect:"/docs/java-client/workers/"},{name:"v-68ae0de4",path:"/docs/java-client/continue-as-new/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-68ae0de4").then(n)}},{path:"/docs/java-client/continue-as-new/index.html",redirect:"/docs/java-client/continue-as-new/"},{path:"/docs/04-java-client/15-continue-as-new.html",redirect:"/docs/java-client/continue-as-new/"},{name:"v-7a33750a",path:"/docs/java-client/workflow-replay-shadowing/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-7a33750a").then(n)}},{path:"/docs/java-client/workflow-replay-shadowing/index.html",redirect:"/docs/java-client/workflow-replay-shadowing/"},{path:"/docs/04-java-client/18-workflow-replay-shadowing.html",redirect:"/docs/java-client/workflow-replay-shadowing/"},{name:"v-56629f80",path:"/docs/java-client/testing/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-56629f80").then(n)}},{path:"/docs/java-client/testing/index.html",redirect:"/docs/java-client/testing/"},{path:"/docs/04-java-client/17-testing.html",redirect:"/docs/java-client/testing/"},{name:"v-53d65f58",path:"/docs/java-client/side-effect/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-53d65f58").then(n)}},{path:"/docs/java-client/side-effect/index.html",redirect:"/docs/java-client/side-effect/"},{path:"/docs/04-java-client/16-side-effect.html",redirect:"/docs/java-client/side-effect/"},{name:"v-c1687e0a",path:"/docs/java-client/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-c1687e0a").then(n)}},{path:"/docs/java-client/index.html",redirect:"/docs/java-client/"},{path:"/docs/04-java-client/",redirect:"/docs/java-client/"},{name:"v-861efabc",path:"/docs/go-client/create-workflows/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-861efabc").then(n)}},{path:"/docs/go-client/create-workflows/index.html",redirect:"/docs/go-client/create-workflows/"},{path:"/docs/05-go-client/02-create-workflows.html",redirect:"/docs/go-client/create-workflows/"},{name:"v-e5936714",path:"/docs/go-client/workers/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-e5936714").then(n)}},{path:"/docs/go-client/workers/index.html",redirect:"/docs/go-client/workers/"},{path:"/docs/05-go-client/01-workers.html",redirect:"/docs/go-client/workers/"},{name:"v-43760982",path:"/docs/go-client/activities/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-43760982").then(n)}},{path:"/docs/go-client/activities/index.html",redirect:"/docs/go-client/activities/"},{path:"/docs/05-go-client/03-activities.html",redirect:"/docs/go-client/activities/"},{name:"v-caeda73c",path:"/docs/go-client/execute-activity/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-caeda73c").then(n)}},{path:"/docs/g
o-client/execute-activity/index.html",redirect:"/docs/go-client/execute-activity/"},{path:"/docs/05-go-client/04-execute-activity.html",redirect:"/docs/go-client/execute-activity/"},{name:"v-76c4aa02",path:"/docs/go-client/start-workflows/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-76c4aa02").then(n)}},{path:"/docs/go-client/start-workflows/index.html",redirect:"/docs/go-client/start-workflows/"},{path:"/docs/05-go-client/02.5-starting-workflows.html",redirect:"/docs/go-client/start-workflows/"},{name:"v-0327ca12",path:"/docs/go-client/child-workflows/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-0327ca12").then(n)}},{path:"/docs/go-client/child-workflows/index.html",redirect:"/docs/go-client/child-workflows/"},{path:"/docs/05-go-client/05-child-workflows.html",redirect:"/docs/go-client/child-workflows/"},{name:"v-5fac5e6c",path:"/docs/go-client/retries/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-5fac5e6c").then(n)}},{path:"/docs/go-client/retries/index.html",redirect:"/docs/go-client/retries/"},{path:"/docs/05-go-client/06-retries.html",redirect:"/docs/go-client/retries/"},{name:"v-595589a2",path:"/docs/go-client/error-handling/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-595589a2").then(n)}},{path:"/docs/go-client/error-handling/index.html",redirect:"/docs/go-client/error-handling/"},{path:"/docs/05-go-client/07-error-handling.html",redirect:"/docs/go-client/error-handling/"},{name:"v-7732347a",path:"/docs/go-client/continue-as-new/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-7732347a").then(n)}},{path:"/docs/go-client/continue-as-new/index.html",redirect:"/docs/go-client/continue-as-new/"},{path:"/docs/05-go-client/09-continue-as-new.html",redirect:"/docs/go-client/continue-as-new/"},{name:"v-67f3ae7c",path:"/docs/go-client/signals/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-67f3ae7c").then(n)}},{path:"/docs/go-client/signals/index.html",redirect:"/docs/go-client/signals/"},{path:"/docs/05-go-client/08-signals.html",redirect:"/docs/go-client/signals/"},{name:"v-d0383dd4",path:"/docs/go-client/side-effect/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-d0383dd4").then(n)}},{path:"/docs/go-client/side-effect/index.html",redirect:"/docs/go-client/side-effect/"},{path:"/docs/05-go-client/10-side-effect.html",redirect:"/docs/go-client/side-effect/"},{name:"v-a1460e54",path:"/docs/go-client/queries/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-a1460e54").then(n)}},{path:"/docs/go-client/queries/index.html",redirect:"/docs/go-client/queries/"},{path:"/docs/05-go-client/11-queries.html",redirect:"/docs/go-client/queries/"},{name:"v-0a1dd2ec",path:"/docs/go-client/activity-async-completion/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-0a1dd2ec").then(n)}},{path:"/docs/go-client/activity-async-completion/index.html",redirect:"/docs/go-client/activity-async-completion/"},{path:"/docs/05-go-client/12-activity-async-completion.html",redirect:"/docs/go-client/activity-async-completion/"},{name:"v-c8a8f07c",path:"/docs/go-client/workflow-testing/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-c8a8f07c").then(n)}},{path:"/docs/go-client/workflow-testing/index.html",redirect:"/docs/go-client/workflow-testing/"},{path:"/docs/05-go-client/13-workflow-testing.html",redirect:"/docs/go-client/workflow-testing/"},{name:"v-0b9844ac",path:"/docs/go-client/workflow-versioning/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-0b9844ac").then(n)}},{path:"/docs/go-client/workflow-versioning/index.html",redirect:"/docs/go-client/workflow-ver
sioning/"},{path:"/docs/05-go-client/14-workflow-versioning.html",redirect:"/docs/go-client/workflow-versioning/"},{name:"v-edf882bc",path:"/docs/go-client/sessions/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-edf882bc").then(n)}},{path:"/docs/go-client/sessions/index.html",redirect:"/docs/go-client/sessions/"},{path:"/docs/05-go-client/15-sessions.html",redirect:"/docs/go-client/sessions/"},{name:"v-35913a62",path:"/docs/go-client/distributed-cron/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-35913a62").then(n)}},{path:"/docs/go-client/distributed-cron/index.html",redirect:"/docs/go-client/distributed-cron/"},{path:"/docs/05-go-client/16-distributed-cron.html",redirect:"/docs/go-client/distributed-cron/"},{name:"v-9d2716dc",path:"/docs/go-client/tracing/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-9d2716dc").then(n)}},{path:"/docs/go-client/tracing/index.html",redirect:"/docs/go-client/tracing/"},{path:"/docs/05-go-client/17-tracing.html",redirect:"/docs/go-client/tracing/"},{name:"v-d043b980",path:"/docs/go-client/workflow-replay-shadowing/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-d043b980").then(n)}},{path:"/docs/go-client/workflow-replay-shadowing/index.html",redirect:"/docs/go-client/workflow-replay-shadowing/"},{path:"/docs/05-go-client/18-workflow-replay-shadowing.html",redirect:"/docs/go-client/workflow-replay-shadowing/"},{name:"v-5df8103c",path:"/docs/go-client/workflow-non-deterministic-errors/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-5df8103c").then(n)}},{path:"/docs/go-client/workflow-non-deterministic-errors/index.html",redirect:"/docs/go-client/workflow-non-deterministic-errors/"},{path:"/docs/05-go-client/19-workflow-non-deterministic-error.html",redirect:"/docs/go-client/workflow-non-deterministic-errors/"},{name:"v-740be4db",path:"/docs/go-client/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-740be4db").then(n)}},{path:"/docs/go-client/index.html",redirect:"/docs/go-client/"},{path:"/docs/05-go-client/",redirect:"/docs/go-client/"},{name:"v-6be5daf6",path:"/docs/operation-guide/setup/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-6be5daf6").then(n)}},{path:"/docs/operation-guide/setup/index.html",redirect:"/docs/operation-guide/setup/"},{path:"/docs/07-operation-guide/01-setup.html",redirect:"/docs/operation-guide/setup/"},{name:"v-6fa6d57b",path:"/docs/cli/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-6fa6d57b").then(n)}},{path:"/docs/cli/index.html",redirect:"/docs/cli/"},{path:"/docs/06-cli/",redirect:"/docs/cli/"},{name:"v-1a836dbc",path:"/docs/operation-guide/monitor/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-1a836dbc").then(n)}},{path:"/docs/operation-guide/monitor/index.html",redirect:"/docs/operation-guide/monitor/"},{path:"/docs/07-operation-guide/03-monitoring.html",redirect:"/docs/operation-guide/monitor/"},{name:"v-c3677d3c",path:"/docs/operation-guide/maintain/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-c3677d3c").then(n)}},{path:"/docs/operation-guide/maintain/index.html",redirect:"/docs/operation-guide/maintain/"},{path:"/docs/07-operation-guide/02-maintain.html",redirect:"/docs/operation-guide/maintain/"},{name:"v-6f38e6b6",path:"/docs/operation-guide/troubleshooting/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-6f38e6b6").then(n)}},{path:"/docs/operation-guide/troubleshooting/index.html",redirect:"/docs/operation-guide/troubleshooting/"},{path:"/docs/07-operation-guide/04-troubleshooting.html",redirect:"/docs/operation-guide/troubleshooting/"},{name:"v-3569388c",path:
"/docs/operation-guide/migration/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-3569388c").then(n)}},{path:"/docs/operation-guide/migration/index.html",redirect:"/docs/operation-guide/migration/"},{path:"/docs/07-operation-guide/05-migration.html",redirect:"/docs/operation-guide/migration/"},{name:"v-fc381aca",path:"/docs/operation-guide/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-fc381aca").then(n)}},{path:"/docs/operation-guide/index.html",redirect:"/docs/operation-guide/"},{path:"/docs/07-operation-guide/",redirect:"/docs/operation-guide/"},{name:"v-3f3e4754",path:"/docs/workflow-troubleshooting/timeouts/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-3f3e4754").then(n)}},{path:"/docs/workflow-troubleshooting/timeouts/index.html",redirect:"/docs/workflow-troubleshooting/timeouts/"},{path:"/docs/08-workflow-troubleshooting/01-timeouts.html",redirect:"/docs/workflow-troubleshooting/timeouts/"},{name:"v-46aa6bb2",path:"/docs/workflow-troubleshooting/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-46aa6bb2").then(n)}},{path:"/docs/workflow-troubleshooting/index.html",redirect:"/docs/workflow-troubleshooting/"},{path:"/docs/08-workflow-troubleshooting/",redirect:"/docs/workflow-troubleshooting/"},{name:"v-e574b140",path:"/docs/about/license/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-e574b140").then(n)}},{path:"/docs/about/license/index.html",redirect:"/docs/about/license/"},{path:"/docs/09-about/01-license.html",redirect:"/docs/about/license/"},{name:"v-7256933b",path:"/",component:Ds,beforeEnter:(e,t,n)=>{us("Layout","v-7256933b").then(n)}},{path:"/index.html",redirect:"/"},{name:"v-00de750a",path:"/docs/about/",component:Ds,beforeEnter:(e,t,n)=>{us("default","v-00de750a").then(n)}},{path:"/docs/about/index.html",redirect:"/docs/about/"},{path:"/docs/09-about/",redirect:"/docs/about/"},{path:"*",component:Ds}],js={title:"Cadence",description:"",base:"/",headTags:[["link",{rel:"icon",href:"/img/favicon.ico"}],["script",{async:!0,src:"https://www.googletagmanager.com/gtag/js?id=G-W63QD8QE6E"}],["script",{},"window.dataLayer = window.dataLayer || []; function gtag(){dataLayer.push(arguments);} gtag('js', new Date()); gtag('config', 'G-W63QD8QE6E');"]],pages:[{title:"Glossary",frontmatter:{layout:"default",title:"Glossary",terms:{activity:"A business-level function that implements your application logic such as calling a service or transcoding a media file. An activity usually implements a single well-defined action; it can be short or long running. An activity can be implemented as a synchronous method or fully asynchronously involving multiple processes. An activity can be retried indefinitely according to the provided exponential retry policy. If for any reason an activity is not completed within the specified timeout, an error is reported to the workflow and the workflow decides how to handle it. There is no limit on potential activity duration.","activity task":"A task that contains an activity invocation information that is delivered to an activity worker through and an activity task list. An activity worker upon receiving activity task executes a correponding activity","activity task list":"Task list that is used to deliver activity task to activity worker","activity worker":"An object that is executed in the client application and receives activity task from an activity task list it is subscribed to. 
Once task is received it invokes a correspondent activity.",archival:"Archival is a feature that automatically moves event history from persistence to a blobstore after the workflow retention period. The purpose of archival is to be able to keep histories as long as needed while not overwhelming the persistence store. There are two reasons you may want to keep the histories after the retention period has passed: 1. Compliance: For legal reasons, histories may need to be stored for a long period of time. 2. Debugging: Old histories can still be accessed for debugging.",CLI:"Cadence command-line interface.","client stub":"A client-side proxy used to make remote invocations to an entity that it represents. For example, to start a workflow, a stub object that represents this workflow is created through a special API. Then this stub is used to start, query, or signal the corresponding workflow.\nThe Go client doesn't use this.",decision:"Any action taken by the workflow durable function is called a decision. For example: scheduling an activity, canceling a child workflow, or starting a timer. A decision task contains an optional list of decisions. Every decision is recorded in the event history as an event. See also [1] for more explanation","decision task":"Every time a new external event that might affect a workflow state is recorded, a decision task that contains it is added to a decision task list and then picked up by a workflow worker. After the new event is handled, the decision task is completed with a list of decision. Note that handling of a decision task is usually very fast and is not related to duration of operations that the workflow invokes. See also [1] for more explanation","decision task list":"Task list that is used to deliver decision task to workflow worker. From user's point of view, it can be viewed as a worker pool. It defines a pool of worker executing workflow or activity tasks.",domain:"Cadence is backed by a multitenant service. The unit of isolation is called a domain. Each domain acts as a namespace for task list names as well as workflow IDs. For example, when a workflow is started, it is started in a specific domain. Cadence guarantees a unique workflow ID within a domain, and supports running workflow executions to use the same workflow ID if they are in different domains. Various configuration options like retention period or archival destination are configured per domain as well through a special CRUD API or through the Cadence CLI. In the multi-cluster deployment, domain is a unit of fail-over. Each domain can only be active on a single Cadence cluster at a time. However, different domains can be active in different clusters and can fail-over independently.",event:"An indivisible operation performed by your application. For example, activity_task_started, task_failed, or timer_canceled. Events are recorded in the event history.","event history":"An append log of events for your application. History is durably persisted by the Cadence service, enabling seamless recovery of your application state from crashes or failures. It also serves as an audit log for debugging.","local activity":"A local activity is an activity that is invoked directly in the same process by a workflow code. It consumes much less resources than a normal activity, but imposes a lot of limitations like low duration and lack of rate limiting.",query:"A synchronous (from the caller's point of view) operation that is used to report a workflow state. 
Note that a query is inherently read only and cannot affect a workflow state.","run ID":"A UUID that a Cadence service assigns to each workflow run. If allowed by a configured policy, you might be able to re-execute a workflow, after it has closed or failed, with the same workflow id. Each such re-execution is called a run. run id is used to uniquely identify a run even if it shares a workflow id with others.",signal:"An external asynchronous request to a workflow. It can be used to deliver notifications or updates to a running workflow at any point in its existence.",task:"The context needed to execute a specific activity or workflow state transition. There are two types of tasks: an activity task and a decision task (aka workflow task). Note that a single activity execution corresponds to a single activity task, while a workflow execution employs multiple decision tasks.","task list":"Common name for activity task list and decision task list","task token":"A unique correlation ID for a Cadence activity. Activity completion calls take either task token or DomainName, WorkflowID, ActivityID arguments.",worker:"Also known as a worker service. A service that hosts the workflow and activity implementations. The worker polls the Cadence service for tasks, performs those tasks, and communicates task execution results back to the Cadence service. Worker services are developed, deployed, and operated by Cadence customers.",workflow:"A fault-oblivious stateful function that orchestrates activities. A workflow has full control over which activities are executed, and in which order. A workflow must not affect the external world directly, only through activities. What makes workflow code a workflow is that its state is preserved by Cadence. Therefore any failure of a worker process that hosts the workflow code does not affect the workflow execution. The workflow continues as if these failures did not happen. At the same time, activities can fail any moment for any reason. Because workflow code is fully fault-oblivious, it is guaranteed to get notifications about activity failures or timeouts and act accordingly. There is no limit on potential workflow duration.","workflow execution":"An instance of a workflow. The instance can be in the process of executing or it could have already completed execution.","workflow ID":"A unique identifier for a workflow execution. Cadence guarantees the uniqueness of an ID within a domain. An attempt to start a workflow with a duplicate ID results in an already started error.","workflow task":"Synonym of the decision task.","workflow worker":"An object that is executed in the client application and receives decision task from an decision task list it is subscribed to. 
Once task is received it is handled by a correponding workflow."},readingShow:"top"},regularPath:"/GLOSSARY.html",relativePath:"GLOSSARY.md",key:"v-9cd9f09c",path:"/GLOSSARY.html",codeSwitcherOptions:{},headersStr:null,content:"# Glossary\n\n1 What exactly is a Cadence decision task?",normalizedContent:"# glossary\n\n1 what exactly is a cadence decision task?",charsets:{}},{title:"Java hello world",frontmatter:{layout:"default",title:"Java hello world",permalink:"/docs/get-started/java-hello-world",readingShow:"top"},regularPath:"/docs/01-get-started/02-java-hello-world.html",relativePath:"docs/01-get-started/02-java-hello-world.md",key:"v-40447742",path:"/docs/get-started/java-hello-world/",headers:[{level:2,title:"Include Cadence Java Client Dependency",slug:"include-cadence-java-client-dependency",normalizedTitle:"include cadence java client dependency",charIndex:295},{level:2,title:"Implement Hello World Workflow",slug:"implement-hello-world-workflow",normalizedTitle:"implement hello world workflow",charIndex:1932},{level:2,title:"Execute Hello World Workflow using the CLI",slug:"execute-hello-world-workflow-using-the-cli",normalizedTitle:"execute hello world workflow using the cli",charIndex:3650},{level:2,title:"List Workflows and Workflow History",slug:"list-workflows-and-workflow-history",normalizedTitle:"list workflows and workflow history",charIndex:7725},{level:2,title:"What is Next",slug:"what-is-next",normalizedTitle:"what is next",charIndex:10214}],codeSwitcherOptions:{},headersStr:"Include Cadence Java Client Dependency Implement Hello World Workflow Execute Hello World Workflow using the CLI List Workflows and Workflow History What is Next",content:'# Java Hello World\n\nThis section provides step by step instructions on how to write and run a HelloWorld with Java.\n\nFor complete, ready to build samples covering all the key Cadence concepts go to Cadence-Java-Samples.\n\nYou can also review Java-Client and java-docs for more documentation.\n\n\n# Include Cadence Java Client Dependency\n\nGo to the Maven Repository Uber Cadence Java Client Page and find the latest version of the library. Include it as a dependency into your Java project. For example if you are using Gradle the dependency looks like:\n\ncompile group: \'com.uber.cadence\', name: \'cadence-client\', version: \'\'\n\n\nAlso add the following dependencies that cadence-client relies on:\n\ncompile group: \'commons-configuration\', name: \'commons-configuration\', version: \'1.9\'\ncompile group: \'ch.qos.logback\', name: \'logback-classic\', version: \'1.2.3\'\n\n\nMake sure that the following code compiles:\n\nimport com.uber.cadence.workflow.Workflow;\nimport com.uber.cadence.workflow.WorkflowMethod;\nimport org.slf4j.Logger;\n\npublic class GettingStarted {\n\n private static Logger logger = Workflow.getLogger(GettingStarted.class);\n\n public interface HelloWorld {\n @WorkflowMethod\n void sayHello(String name);\n }\n\n}\n\n\nIf you are having problems setting up the build files use the Cadence Java Samples GitHub repository as a reference.\n\nAlso add the following logback config file somewhere in your classpath:\n\n\n \n \x3c!-- encoders are assigned the type\n ch.qos.logback.classic.encoder.PatternLayoutEncoder by default --\x3e\n \n %d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n\n \n \n \n \n \n \n\n\n\n\n# Implement Hello World Workflow\n\nLet\'s add HelloWorldImpl with the sayHello method that just logs the "Hello ..." 
and returns.\n\nimport com.uber.cadence.worker.Worker;\nimport com.uber.cadence.workflow.Workflow;\nimport com.uber.cadence.workflow.WorkflowMethod;\nimport org.slf4j.Logger;\n\npublic class GettingStarted {\n\n private static Logger logger = Workflow.getLogger(GettingStarted.class);\n\n public interface HelloWorld {\n @WorkflowMethod\n void sayHello(String name);\n }\n\n public static class HelloWorldImpl implements HelloWorld {\n\n @Override\n public void sayHello(String name) {\n logger.info("Hello " + name + "!");\n }\n }\n}\n\n\nTo link the implementation to the Cadence framework, it should be registered with a that connects to a Cadence Service. By default the connects to the locally running Cadence service.\n\npublic static void main(String[] args) {\n WorkflowClient workflowClient =\n WorkflowClient.newInstance(\n new WorkflowServiceTChannel(ClientOptions.defaultInstance()),\n WorkflowClientOptions.newBuilder().setDomain(DOMAIN).build());\n // Get worker to poll the task list.\n WorkerFactory factory = WorkerFactory.newInstance(workflowClient);\n Worker worker = factory.newWorker(TASK_LIST);\n worker.registerWorkflowImplementationTypes(HelloWorldImpl.class);\n factory.start();\n}\n\n\nThe code is slightly different if you are using client version prior to 3.0.0:\n\npublic static void main(String[] args) {\n Worker.Factory factory = new Worker.Factory("test-domain");\n Worker worker = factory.newWorker("HelloWorldTaskList");\n worker.registerWorkflowImplementationTypes(HelloWorldImpl.class);\n factory.start();\n}\n\n\n\n# Execute Hello World Workflow using the CLI\n\nNow run the program. Following is an example log:\n\n13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0\n13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'}, identity=45937@maxim-C02XD0AAJGH6}\n13:35:02.673 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'null\'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}\n\n\nNo Hello printed. This is expected because a is just a code host. The has to be started to execute. 
Let\'s use Cadence to start the workflow:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \\"World\\"\nStarted Workflow Id: bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7, run Id: e7c40431-8e23-485b-9649-e8f161219efe\n\n\nThe output of the program should change to:\n\n13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0\n13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'}, identity=45937@maxim-C02XD0AAJGH6}\n13:35:02.673 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'null\'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}\n13:40:28.308 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello World!\n\n\nLet\'s start another\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \\"Cadence\\"\nStarted Workflow Id: d2083532-9c68-49ab-90e1-d960175377a7, run Id: 331bfa04-834b-45a7-861e-bcb9f6ddae3e\n\n\nAnd the output changed to:\n\n13:35:02.575 [main] INFO c.u.c.s.WorkflowServiceTChannel - Initialized TChannel for service cadence-frontend, LibraryVersion: 2.2.0, FeatureVersion: 1.0.0\n13:35:02.671 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'Workflow Poller taskList="HelloWorldTaskList", domain="test-domain", type="workflow"\'}, identity=45937@maxim-C02XD0AAJGH6}\n13:35:02.673 [main] INFO c.u.cadence.internal.worker.Poller - start(): Poller{options=PollerOptions{maximumPollRateIntervalMilliseconds=1000, maximumPollRatePerSecond=0.0, pollBackoffCoefficient=2.0, pollBackoffInitialInterval=PT0.2S, pollBackoffMaximumInterval=PT20S, pollThreadCount=1, pollThreadNamePrefix=\'null\'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}\n13:40:28.308 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello World!\n13:42:34.994 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - Hello Cadence!\n\n\n\n# List Workflows and Workflow History\n\nLet\'s list our in the\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow list\n WORKFLOW TYPE | WORKFLOW ID | RUN ID | START TIME | EXECUTION TIME | END TIME\n HelloWorld::sayHello | d2083532-9c68-49ab-90e1-d960175377a7 | 331bfa04-834b-45a7-861e-bcb9f6ddae3e | 20:42:34 | 20:42:34 | 20:42:35\n HelloWorld::sayHello | bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7 | e7c40431-8e23-485b-9649-e8f161219efe | 20:40:28 | 20:40:28 | 20:40:29\n\n\nNow let\'s look at the history:\n\n> docker run --network=host --rm ubercadence/cli:master 
--do test-domain workflow showid 1965109f-607f-4b14-a5f2-24399a7b8fa7\n 1 WorkflowExecutionStarted {WorkflowType:{Name:HelloWorld::sayHello},\n TaskList:{Name:HelloWorldTaskList},\n Input:["World"],\n ExecutionStartToCloseTimeoutSeconds:3600,\n TaskStartToCloseTimeoutSeconds:10,\n ContinuedFailureDetails:[],\n LastCompletionResult:[],\n Identity:cadence-cli@linuxkit-025000000001,\n Attempt:0,\n FirstDecisionTaskBackoffSeconds:0}\n 2 DecisionTaskScheduled {TaskList:{Name:HelloWorldTaskList},\n StartToCloseTimeoutSeconds:10,\n Attempt:0}\n 3 DecisionTaskStarted {ScheduledEventId:2,\n Identity:45937@maxim-C02XD0AAJGH6,\n RequestId:481a14e5-67a4-436e-9a23-7f7fb7f87ef3}\n 4 DecisionTaskCompleted {ExecutionContext:[],\n ScheduledEventId:2,\n StartedEventId:3,\n Identity:45937@maxim-C02XD0AAJGH6}\n 5 WorkflowExecutionCompleted {Result:[],\n DecisionTaskCompletedEventId:4}\n\n\nEven for such a trivial , the history gives a lot of useful information. For complex this is a really useful tool for production and development troubleshooting. History can be automatically archived to a long-term blob store (for example Amazon S3) upon completion for compliance, analytical, and troubleshooting purposes.\n\n\n# What is Next\n\nNow you have completed the tutorials. You can continue to explore the key concepts in Cadence, and also how to use them with Java Client',normalizedContent:'# java hello world\n\nthis section provides step by step instructions on how to write and run a helloworld with java.\n\nfor complete, ready to build samples covering all the key cadence concepts go to cadence-java-samples.\n\nyou can also review java-client and java-docs for more documentation.\n\n\n# include cadence java client dependency\n\ngo to the maven repository uber cadence java client page and find the latest version of the library. include it as a dependency into your java project. for example if you are using gradle the dependency looks like:\n\ncompile group: \'com.uber.cadence\', name: \'cadence-client\', version: \'\'\n\n\nalso add the following dependencies that cadence-client relies on:\n\ncompile group: \'commons-configuration\', name: \'commons-configuration\', version: \'1.9\'\ncompile group: \'ch.qos.logback\', name: \'logback-classic\', version: \'1.2.3\'\n\n\nmake sure that the following code compiles:\n\nimport com.uber.cadence.workflow.workflow;\nimport com.uber.cadence.workflow.workflowmethod;\nimport org.slf4j.logger;\n\npublic class gettingstarted {\n\n private static logger logger = workflow.getlogger(gettingstarted.class);\n\n public interface helloworld {\n @workflowmethod\n void sayhello(string name);\n }\n\n}\n\n\nif you are having problems setting up the build files use the cadence java samples github repository as a reference.\n\nalso add the following logback config file somewhere in your classpath:\n\n\n \n \x3c!-- encoders are assigned the type\n ch.qos.logback.classic.encoder.patternlayoutencoder by default --\x3e\n \n %d{hh:mm:ss.sss} [%thread] %-5level %logger{36} - %msg%n\n \n \n \n \n \n \n\n\n\n\n# implement hello world workflow\n\nlet\'s add helloworldimpl with the sayhello method that just logs the "hello ..." 
and returns.\n\nimport com.uber.cadence.worker.worker;\nimport com.uber.cadence.workflow.workflow;\nimport com.uber.cadence.workflow.workflowmethod;\nimport org.slf4j.logger;\n\npublic class gettingstarted {\n\n private static logger logger = workflow.getlogger(gettingstarted.class);\n\n public interface helloworld {\n @workflowmethod\n void sayhello(string name);\n }\n\n public static class helloworldimpl implements helloworld {\n\n @override\n public void sayhello(string name) {\n logger.info("hello " + name + "!");\n }\n }\n}\n\n\nto link the implementation to the cadence framework, it should be registered with a that connects to a cadence service. by default the connects to the locally running cadence service.\n\npublic static void main(string[] args) {\n workflowclient workflowclient =\n workflowclient.newinstance(\n new workflowservicetchannel(clientoptions.defaultinstance()),\n workflowclientoptions.newbuilder().setdomain(domain).build());\n // get worker to poll the task list.\n workerfactory factory = workerfactory.newinstance(workflowclient);\n worker worker = factory.newworker(task_list);\n worker.registerworkflowimplementationtypes(helloworldimpl.class);\n factory.start();\n}\n\n\nthe code is slightly different if you are using client version prior to 3.0.0:\n\npublic static void main(string[] args) {\n worker.factory factory = new worker.factory("test-domain");\n worker worker = factory.newworker("helloworldtasklist");\n worker.registerworkflowimplementationtypes(helloworldimpl.class);\n factory.start();\n}\n\n\n\n# execute hello world workflow using the cli\n\nnow run the program. following is an example log:\n\n13:35:02.575 [main] info c.u.c.s.workflowservicetchannel - initialized tchannel for service cadence-frontend, libraryversion: 2.2.0, featureversion: 1.0.0\n13:35:02.671 [main] info c.u.cadence.internal.worker.poller - start(): poller{options=polleroptions{maximumpollrateintervalmilliseconds=1000, maximumpollratepersecond=0.0, pollbackoffcoefficient=2.0, pollbackoffinitialinterval=pt0.2s, pollbackoffmaximuminterval=pt20s, pollthreadcount=1, pollthreadnameprefix=\'workflow poller tasklist="helloworldtasklist", domain="test-domain", type="workflow"\'}, identity=45937@maxim-c02xd0aajgh6}\n13:35:02.673 [main] info c.u.cadence.internal.worker.poller - start(): poller{options=polleroptions{maximumpollrateintervalmilliseconds=1000, maximumpollratepersecond=0.0, pollbackoffcoefficient=2.0, pollbackoffinitialinterval=pt0.2s, pollbackoffmaximuminterval=pt20s, pollthreadcount=1, pollthreadnameprefix=\'null\'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}\n\n\nno hello printed. this is expected because a is just a code host. the has to be started to execute. 
let\'s use cadence to start the workflow:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --tasklist helloworldtasklist --workflow_type helloworld::sayhello --execution_timeout 3600 --input \\"world\\"\nstarted workflow id: bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7, run id: e7c40431-8e23-485b-9649-e8f161219efe\n\n\nthe output of the program should change to:\n\n13:35:02.575 [main] info c.u.c.s.workflowservicetchannel - initialized tchannel for service cadence-frontend, libraryversion: 2.2.0, featureversion: 1.0.0\n13:35:02.671 [main] info c.u.cadence.internal.worker.poller - start(): poller{options=polleroptions{maximumpollrateintervalmilliseconds=1000, maximumpollratepersecond=0.0, pollbackoffcoefficient=2.0, pollbackoffinitialinterval=pt0.2s, pollbackoffmaximuminterval=pt20s, pollthreadcount=1, pollthreadnameprefix=\'workflow poller tasklist="helloworldtasklist", domain="test-domain", type="workflow"\'}, identity=45937@maxim-c02xd0aajgh6}\n13:35:02.673 [main] info c.u.cadence.internal.worker.poller - start(): poller{options=polleroptions{maximumpollrateintervalmilliseconds=1000, maximumpollratepersecond=0.0, pollbackoffcoefficient=2.0, pollbackoffinitialinterval=pt0.2s, pollbackoffmaximuminterval=pt20s, pollthreadcount=1, pollthreadnameprefix=\'null\'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}\n13:40:28.308 [workflow-root] info c.u.c.samples.hello.gettingstarted - hello world!\n\n\nlet\'s start another\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --tasklist helloworldtasklist --workflow_type helloworld::sayhello --execution_timeout 3600 --input \\"cadence\\"\nstarted workflow id: d2083532-9c68-49ab-90e1-d960175377a7, run id: 331bfa04-834b-45a7-861e-bcb9f6ddae3e\n\n\nand the output changed to:\n\n13:35:02.575 [main] info c.u.c.s.workflowservicetchannel - initialized tchannel for service cadence-frontend, libraryversion: 2.2.0, featureversion: 1.0.0\n13:35:02.671 [main] info c.u.cadence.internal.worker.poller - start(): poller{options=polleroptions{maximumpollrateintervalmilliseconds=1000, maximumpollratepersecond=0.0, pollbackoffcoefficient=2.0, pollbackoffinitialinterval=pt0.2s, pollbackoffmaximuminterval=pt20s, pollthreadcount=1, pollthreadnameprefix=\'workflow poller tasklist="helloworldtasklist", domain="test-domain", type="workflow"\'}, identity=45937@maxim-c02xd0aajgh6}\n13:35:02.673 [main] info c.u.cadence.internal.worker.poller - start(): poller{options=polleroptions{maximumpollrateintervalmilliseconds=1000, maximumpollratepersecond=0.0, pollbackoffcoefficient=2.0, pollbackoffinitialinterval=pt0.2s, pollbackoffmaximuminterval=pt20s, pollthreadcount=1, pollthreadnameprefix=\'null\'}, identity=81b8d0ac-ff89-47e8-b842-3dd26337feea}\n13:40:28.308 [workflow-root] info c.u.c.samples.hello.gettingstarted - hello world!\n13:42:34.994 [workflow-root] info c.u.c.samples.hello.gettingstarted - hello cadence!\n\n\n\n# list workflows and workflow history\n\nlet\'s list our in the\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow list\n workflow type | workflow id | run id | start time | execution time | end time\n helloworld::sayhello | d2083532-9c68-49ab-90e1-d960175377a7 | 331bfa04-834b-45a7-861e-bcb9f6ddae3e | 20:42:34 | 20:42:34 | 20:42:35\n helloworld::sayhello | bcacfabd-9f9a-46ac-9b25-83bcea5d7fd7 | e7c40431-8e23-485b-9649-e8f161219efe | 20:40:28 | 20:40:28 | 20:40:29\n\n\nnow let\'s look at the history:\n\n> docker run --network=host --rm ubercadence/cli:master 
--do test-domain workflow showid 1965109f-607f-4b14-a5f2-24399a7b8fa7\n 1 workflowexecutionstarted {workflowtype:{name:helloworld::sayhello},\n tasklist:{name:helloworldtasklist},\n input:["world"],\n executionstarttoclosetimeoutseconds:3600,\n taskstarttoclosetimeoutseconds:10,\n continuedfailuredetails:[],\n lastcompletionresult:[],\n identity:cadence-cli@linuxkit-025000000001,\n attempt:0,\n firstdecisiontaskbackoffseconds:0}\n 2 decisiontaskscheduled {tasklist:{name:helloworldtasklist},\n starttoclosetimeoutseconds:10,\n attempt:0}\n 3 decisiontaskstarted {scheduledeventid:2,\n identity:45937@maxim-c02xd0aajgh6,\n requestid:481a14e5-67a4-436e-9a23-7f7fb7f87ef3}\n 4 decisiontaskcompleted {executioncontext:[],\n scheduledeventid:2,\n startedeventid:3,\n identity:45937@maxim-c02xd0aajgh6}\n 5 workflowexecutioncompleted {result:[],\n decisiontaskcompletedeventid:4}\n\n\neven for such a trivial , the history gives a lot of useful information. for complex this is a really useful tool for production and development troubleshooting. history can be automatically archived to a long-term blob store (for example amazon s3) upon completion for compliance, analytical, and troubleshooting purposes.\n\n\n# what is next\n\nnow you have completed the tutorials. you can continue to explore the key concepts in cadence, and also how to use them with java client',charsets:{cjk:!0}},{title:"Golang hello world",frontmatter:{layout:"default",title:"Golang hello world",permalink:"/docs/get-started/golang-hello-world",readingShow:"top"},regularPath:"/docs/01-get-started/03-golang-hello-world.html",relativePath:"docs/01-get-started/03-golang-hello-world.md",key:"v-5261e03c",path:"/docs/get-started/golang-hello-world/",headers:[{level:2,title:"Prerequisite",slug:"prerequisite",normalizedTitle:"prerequisite",charIndex:388},{level:2,title:"Step 1. Implement A Cadence Worker Service",slug:"step-1-implement-a-cadence-worker-service",normalizedTitle:"step 1. implement a cadence worker service",charIndex:922},{level:2,title:"Step 2. Write a simple Cadence hello world activity and workflow",slug:"step-2-write-a-simple-cadence-hello-world-activity-and-workflow",normalizedTitle:"step 2. write a simple cadence hello world activity and workflow",charIndex:4615},{level:2,title:"Step 3. Run the workflow with Cadence CLI",slug:"step-3-run-the-workflow-with-cadence-cli",normalizedTitle:"step 3. run the workflow with cadence cli",charIndex:5904},{level:2,title:"(Optional) Step 4. Monitor Cadence workflow with Cadence web UI",slug:"optional-step-4-monitor-cadence-workflow-with-cadence-web-ui",normalizedTitle:"(optional) step 4. monitor cadence workflow with cadence web ui",charIndex:6701},{level:2,title:"What is Next",slug:"what-is-next",normalizedTitle:"what is next",charIndex:7153}],codeSwitcherOptions:{},headersStr:"Prerequisite Step 1. Implement A Cadence Worker Service Step 2. Write a simple Cadence hello world activity and workflow Step 3. Run the workflow with Cadence CLI (Optional) Step 4. Monitor Cadence workflow with Cadence web UI What is Next",content:'# Golang Hello World\n\nThis section provides step-by-step instructions on how to write and run a HelloWorld workflow in Cadence with Golang. You will learn two critical building blocks of Cadence: activities and workflows. First, you will write an activity function that prints a "Hello World!" message in the log. 
\n\n\n# what is next\n\nnow you have completed the tutorials. you can continue to explore the key concepts in cadence, and also learn how to use them with the java client.',charsets:{cjk:!0}},{title:"Golang hello world",frontmatter:{layout:"default",title:"Golang hello world",permalink:"/docs/get-started/golang-hello-world",readingShow:"top"},regularPath:"/docs/01-get-started/03-golang-hello-world.html",relativePath:"docs/01-get-started/03-golang-hello-world.md",key:"v-5261e03c",path:"/docs/get-started/golang-hello-world/",headers:[{level:2,title:"Prerequisite",slug:"prerequisite",normalizedTitle:"prerequisite",charIndex:388},{level:2,title:"Step 1. Implement A Cadence Worker Service",slug:"step-1-implement-a-cadence-worker-service",normalizedTitle:"step 1. implement a cadence worker service",charIndex:922},{level:2,title:"Step 2. Write a simple Cadence hello world activity and workflow",slug:"step-2-write-a-simple-cadence-hello-world-activity-and-workflow",normalizedTitle:"step 2. write a simple cadence hello world activity and workflow",charIndex:4615},{level:2,title:"Step 3. Run the workflow with Cadence CLI",slug:"step-3-run-the-workflow-with-cadence-cli",normalizedTitle:"step 3. run the workflow with cadence cli",charIndex:5904},{level:2,title:"(Optional) Step 4. Monitor Cadence workflow with Cadence web UI",slug:"optional-step-4-monitor-cadence-workflow-with-cadence-web-ui",normalizedTitle:"(optional) step 4. monitor cadence workflow with cadence web ui",charIndex:6701},{level:2,title:"What is Next",slug:"what-is-next",normalizedTitle:"what is next",charIndex:7153}],codeSwitcherOptions:{},headersStr:"Prerequisite Step 1. Implement A Cadence Worker Service Step 2. Write a simple Cadence hello world activity and workflow Step 3. Run the workflow with Cadence CLI (Optional) Step 4. Monitor Cadence workflow with Cadence web UI What is Next",content:'# Golang Hello World\n\nThis section provides step-by-step instructions on how to write and run a HelloWorld workflow in Cadence with Golang. You will learn two critical building blocks of Cadence: activities and workflows. First, you will write an activity function that prints a "Hello World!" message in the log. Then, you will write a workflow function that executes this activity.\n\n\n# Prerequisite\n\nTo successfully run this hello world sample, follow this checklist for setting up the Cadence environment:\n\n 1. Your worker is running properly and you have registered the hello world activity and workflow to the worker\n 2. Your Cadence server is running (check your background docker container process)\n 3. You have successfully registered a domain for this workflow\n\nYou must finish parts 2 and 3 by following the first section before proceeding to the next steps. We are using a domain called test-domain for this tutorial project.\n\n\n# Step 1. Implement A Cadence Worker Service\n\nCreate a new main.go file in your local directory and paste in the basic worker service layout below.\n\npackage main\n\nimport (\n "net/http"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n)\n\nvar HostPort = "127.0.0.1:7833"\nvar Domain = "test-domain"\nvar TaskListName = "test-worker"\nvar ClientName = "test-worker"\nvar CadenceService = "cadence-frontend"\n\nfunc main() {\n startWorker(buildLogger(), buildCadenceClient())\n err := http.ListenAndServe(":8080", nil)\n if err != nil {\n panic(err)\n }\n}\n\nfunc buildLogger() *zap.Logger {\n config := zap.NewDevelopmentConfig()\n config.Level.SetLevel(zapcore.InfoLevel)\n\n var err error\n logger, err := config.Build()\n if err != nil {\n panic("Failed to setup logger")\n }\n\n return logger\n}\n\nfunc buildCadenceClient() workflowserviceclient.Interface {\n dispatcher := yarpc.NewDispatcher(yarpc.Config{\n\t\tName: ClientName,\n\t\tOutbounds: yarpc.Outbounds{\n\t\t CadenceService: {Unary: grpc.NewTransport().NewSingleOutbound(HostPort)},\n\t\t},\n\t })\n\t if err := dispatcher.Start(); err != nil {\n\t\tpanic("Failed to start dispatcher")\n\t }\n \n\t clientConfig := dispatcher.ClientConfig(CadenceService)\n \n\t return compatibility.NewThrift2ProtoAdapter(\n\t\tapiv1.NewDomainAPIYARPCClient(clientConfig),\n\t\tapiv1.NewWorkflowAPIYARPCClient(clientConfig),\n\t\tapiv1.NewWorkerAPIYARPCClient(clientConfig),\n\t\tapiv1.NewVisibilityAPIYARPCClient(clientConfig),\n\t )\n}\n\nfunc startWorker(logger *zap.Logger, service workflowserviceclient.Interface) {\n // TaskListName identifies set of client workflows, activities, and workers.\n // It could be your group or client or application name.\n workerOptions := worker.Options{\n Logger: logger,\n MetricsScope: tally.NewTestScope(TaskListName, map[string]string{}),\n }\n\n worker := worker.New(\n service,\n Domain,\n TaskListName,\n workerOptions)\n err := worker.Start()\n if err != nil {\n panic("Failed to start worker")\n }\n\n logger.Info("Started Worker.", zap.String("worker", TaskListName))\n}\n\n\nIn this worker service, we start an HTTP server and create a new Cadence client that runs continuously in the background. When you start the server on your local machine, you should see logs like:\n\n2023-07-03T11:46:46.266-0700 INFO internal/internal_worker.go:826 Worker has no workflows registered, so workflow worker will not be started. {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03T11:46:46.267-0700 INFO internal/internal_worker.go:834 Started Workflow Worker {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03T11:46:46.267-0700 INFO internal/internal_worker.go:838 Worker has no activities registered, so activity worker will not be started. {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03T11:46:46.267-0700 INFO cadence-worker/main.go:75 Started Worker. {"worker": "test-worker"}\n\n\nYou see these messages because no activities or workflows are registered to the worker yet. Let\'s proceed to the next steps and write a hello world activity and workflow.\n\n\n# Step 2. Write a simple Cadence hello world activity and workflow\n\nLet\'s write a hello world activity, which takes a single input called name and greets us after the workflow finishes.\n\nfunc helloWorldWorkflow(ctx workflow.Context, name string) error {\n\tao := workflow.ActivityOptions{\n\t\tScheduleToStartTimeout: time.Minute,\n\t\tStartToCloseTimeout: time.Minute,\n\t\tHeartbeatTimeout: time.Second * 20,\n\t}\n\tctx = workflow.WithActivityOptions(ctx, ao)\n\n\tlogger := workflow.GetLogger(ctx)\n\tlogger.Info("helloworld workflow started")\n\tvar helloworldResult string\n\terr := workflow.ExecuteActivity(ctx, helloWorldActivity, name).Get(ctx, &helloworldResult)\n\tif err != nil {\n\t\tlogger.Error("Activity failed.", zap.Error(err))\n\t\treturn err\n\t}\n\n\tlogger.Info("Workflow completed.", zap.String("Result", helloworldResult))\n\n\treturn nil\n}\n\nfunc helloWorldActivity(ctx context.Context, name string) (string, error) {\n\tlogger := activity.GetLogger(ctx)\n\tlogger.Info("helloworld activity started")\n\treturn "Hello " + name + "!", nil\n}\n\n\nDon\'t forget to register the workflow and activity to the worker.\n\nfunc init() {\n workflow.Register(helloWorldWorkflow)\n activity.Register(helloWorldActivity)\n}\n\n\nImport the context module if it was not automatically added.\n\nimport (\n "context"\n)
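One registration detail worth knowing: `workflow.Register` registers the function under its fully qualified name, which is why Step 3 below starts it as `main.helloWorldWorkflow`. Both packages also offer a `RegisterWithOptions` variant that lets you pick a stable alias; a small sketch (the alias strings are just examples, and the CLI `--workflow_type` must then match):

```go
func init() {
	// The workflow becomes startable as "helloWorld" instead of
	// "main.helloWorldWorkflow".
	workflow.RegisterWithOptions(helloWorldWorkflow, workflow.RegisterOptions{
		Name: "helloWorld",
	})
	activity.RegisterWithOptions(helloWorldActivity, activity.RegisterOptions{
		Name: "helloWorldActivity",
	})
}
```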
{"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03T11:46:46.267-0700 INFO internal/internal_worker.go:834 Started Workflow Worker {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03T11:46:46.267-0700 INFO internal/internal_worker.go:838 Worker has no activities registered, so activity worker will not be started. {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03T11:46:46.267-0700 INFO cadence-worker/main.go:75 Started Worker. {"worker": "test-worker"}\n\n\nYou may see this because there are no activities and workflows registered to the worker. Let\'s proceed to next steps to write a hello world activity and workflow.\n\n\n# Step 2. Write a simple Cadence hello world activity and workflow\n\nLet\'s write a hello world activity, which take a single input called name and greet us after the workflow is finished.\n\nfunc helloWorldWorkflow(ctx workflow.Context, name string) error {\n\tao := workflow.ActivityOptions{\n\t\tScheduleToStartTimeout: time.Minute,\n\t\tStartToCloseTimeout: time.Minute,\n\t\tHeartbeatTimeout: time.Second * 20,\n\t}\n\tctx = workflow.WithActivityOptions(ctx, ao)\n\n\tlogger := workflow.GetLogger(ctx)\n\tlogger.Info("helloworld workflow started")\n\tvar helloworldResult string\n\terr := workflow.ExecuteActivity(ctx, helloWorldActivity, name).Get(ctx, &helloworldResult)\n\tif err != nil {\n\t\tlogger.Error("Activity failed.", zap.Error(err))\n\t\treturn err\n\t}\n\n\tlogger.Info("Workflow completed.", zap.String("Result", helloworldResult))\n\n\treturn nil\n}\n\nfunc helloWorldActivity(ctx context.Context, name string) (string, error) {\n\tlogger := activity.GetLogger(ctx)\n\tlogger.Info("helloworld activity started")\n\treturn "Hello " + name + "!", nil\n}\n\n\nDon\'t forget to register the workflow and activity to the worker.\n\nfunc init() {\n workflow.Register(helloWorldWorkflow)\n activity.Register(helloWorldActivity)\n}\n\n\nImport the context module if it was not automatically added.\n\nimport (\n "context"\n)\n\n\n\n# Step 3. Run the workflow with Cadence CLI\n\nRestart your worker and run the following command to interact with your workflow.\n\ncadence --domain test-domain workflow start --et 60 --tl test-worker --workflow_type main.helloWorldWorkflow --input \'"World"\'\n\n\nYou should see logs in your worker terminal like\n\n2023-07-16T11:30:02.717-0700 INFO cadence-worker/code.go:104 Workflow completed. {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "11294@uber-C02F18EQMD6R@test-worker@5829c68e-ace0-472f-b5f3-6ccfc7903dd5", "WorkflowType": "main.helloWorldWorkflow", "WorkflowID": "8acbda3c-d240-4f27-8388-97c866b8bfb5", "RunID": "4b91341f-056f-4f0b-ab64-83bcc3a53e5a", "Result": "Hello World!"}\n\n\nCongratulations! You just launched your very first Cadence workflow from scratch\n\n\n# (Optional) Step 4. Monitor Cadence workflow with Cadence web UI\n\nWhen you start the Cadence backend server, it also automatically starts a front end portal for your workflow. Open you browser and go to\n\nhttp://localhost:8088\n\nYou may see a dashboard below\n\nType the domain you used for the tutorial, in this case, we type test-domain and hit enter. 
\n\n\n# (Optional) Step 4. Monitor Cadence workflow with Cadence web UI\n\nWhen you start the Cadence backend server, it also automatically starts a front-end portal for your workflows. Open your browser and go to\n\nhttp://localhost:8088\n\nYou should see a dashboard like the one below.\n\nType the domain you used for the tutorial, in this case test-domain, and hit enter. Then you can see a complete history of the workflows you have triggered that are associated with this domain.\n\n\n# What is Next\n\nNow you have completed the tutorials. You can continue to explore the key concepts in Cadence, and also learn how to use them with the Go client.\n\nFor complete, ready-to-build samples covering all the key Cadence concepts, go to Cadence-Samples for more examples.\n\nYou can also review Cadence-Client and go-docs for more documentation.',normalizedContent:'# golang hello world\n\nthis section provides step-by-step instructions on how to write and run a helloworld workflow in cadence with golang. you will learn two critical building blocks of cadence: activities and workflows. first, you will write an activity function that prints a "hello world!" message in the log. then, you will write a workflow function that executes this activity.\n\n\n# prerequisite\n\nto successfully run this hello world sample, follow this checklist of setting up cadence environment\n\n 1. your worker is running properly and you have registered the hello world activity and workflow to the worker\n 2. your cadence server is running (check your background docker container process)\n 3. you have successfully registered a domain for this workflow\n\nyou must finish part 2 and 3 by following the first section to proceed the next steps. we are using domain called test-domain for this tutorial project.\n\n\n# step 1. implement a cadence worker service\n\ncreate a new main.go file in your local directory and paste the basic worker service layout.\n\npackage main\n\nimport (\n "net/http"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n)\n\nvar hostport = "127.0.0.1:7833"\nvar domain = "test-domain"\nvar tasklistname = "test-worker"\nvar clientname = "test-worker"\nvar cadenceservice = "cadence-frontend"\n\nfunc main() {\n startworker(buildlogger(), buildcadenceclient())\n err := http.listenandserve(":8080", nil)\n if err != nil {\n panic(err)\n }\n}\n\nfunc buildlogger() *zap.logger {\n config := zap.newdevelopmentconfig()\n config.level.setlevel(zapcore.infolevel)\n\n var err error\n logger, err := config.build()\n if err != nil {\n panic("failed to setup logger")\n }\n\n return logger\n}\n\nfunc buildcadenceclient() workflowserviceclient.interface {\n dispatcher := yarpc.newdispatcher(yarpc.config{\n\t\tname: clientname,\n\t\toutbounds: yarpc.outbounds{\n\t\t cadenceservice: {unary: grpc.newtransport().newsingleoutbound(hostport)},\n\t\t},\n\t })\n\t if err := dispatcher.start(); err != nil {\n\t\tpanic("failed to start dispatcher")\n\t }\n \n\t clientconfig := dispatcher.clientconfig(cadenceservice)\n \n\t return compatibility.newthrift2protoadapter(\n\t\tapiv1.newdomainapiyarpcclient(clientconfig),\n\t\tapiv1.newworkflowapiyarpcclient(clientconfig),\n\t\tapiv1.newworkerapiyarpcclient(clientconfig),\n\t\tapiv1.newvisibilityapiyarpcclient(clientconfig),\n\t )\n}\n\nfunc startworker(logger *zap.logger, service workflowserviceclient.interface) {\n // tasklistname identifies set of client workflows, activities, and workers.\n // it could be your group or client or application name.\n workeroptions := worker.options{\n logger: logger,\n metricsscope: tally.newtestscope(tasklistname, map[string]string{}),\n }\n\n worker := worker.new(\n service,\n 
domain,\n tasklistname,\n workeroptions)\n err := worker.start()\n if err != nil {\n panic("failed to start worker")\n }\n\n logger.info("started worker.", zap.string("worker", tasklistname))\n}\n\n\nin this worker service, we start a http server and create a new cadence client running continuously at the background. then start the server on your local, you may see logs such like\n\n2023-07-03t11:46:46.266-0700 info internal/internal_worker.go:826 worker has no workflows registered, so workflow worker will not be started. {"domain": "test-domain", "tasklist": "test-worker", "workerid": "35987@uber-c02f18eqmd6r@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03t11:46:46.267-0700 info internal/internal_worker.go:834 started workflow worker {"domain": "test-domain", "tasklist": "test-worker", "workerid": "35987@uber-c02f18eqmd6r@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03t11:46:46.267-0700 info internal/internal_worker.go:838 worker has no activities registered, so activity worker will not be started. {"domain": "test-domain", "tasklist": "test-worker", "workerid": "35987@uber-c02f18eqmd6r@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"}\n2023-07-03t11:46:46.267-0700 info cadence-worker/main.go:75 started worker. {"worker": "test-worker"}\n\n\nyou may see this because there are no activities and workflows registered to the worker. let\'s proceed to next steps to write a hello world activity and workflow.\n\n\n# step 2. write a simple cadence hello world activity and workflow\n\nlet\'s write a hello world activity, which take a single input called name and greet us after the workflow is finished.\n\nfunc helloworldworkflow(ctx workflow.context, name string) error {\n\tao := workflow.activityoptions{\n\t\tscheduletostarttimeout: time.minute,\n\t\tstarttoclosetimeout: time.minute,\n\t\theartbeattimeout: time.second * 20,\n\t}\n\tctx = workflow.withactivityoptions(ctx, ao)\n\n\tlogger := workflow.getlogger(ctx)\n\tlogger.info("helloworld workflow started")\n\tvar helloworldresult string\n\terr := workflow.executeactivity(ctx, helloworldactivity, name).get(ctx, &helloworldresult)\n\tif err != nil {\n\t\tlogger.error("activity failed.", zap.error(err))\n\t\treturn err\n\t}\n\n\tlogger.info("workflow completed.", zap.string("result", helloworldresult))\n\n\treturn nil\n}\n\nfunc helloworldactivity(ctx context.context, name string) (string, error) {\n\tlogger := activity.getlogger(ctx)\n\tlogger.info("helloworld activity started")\n\treturn "hello " + name + "!", nil\n}\n\n\ndon\'t forget to register the workflow and activity to the worker.\n\nfunc init() {\n workflow.register(helloworldworkflow)\n activity.register(helloworldactivity)\n}\n\n\nimport the context module if it was not automatically added.\n\nimport (\n "context"\n)\n\n\n\n# step 3. run the workflow with cadence cli\n\nrestart your worker and run the following command to interact with your workflow.\n\ncadence --domain test-domain workflow start --et 60 --tl test-worker --workflow_type main.helloworldworkflow --input \'"world"\'\n\n\nyou should see logs in your worker terminal like\n\n2023-07-16t11:30:02.717-0700 info cadence-worker/code.go:104 workflow completed. {"domain": "test-domain", "tasklist": "test-worker", "workerid": "11294@uber-c02f18eqmd6r@test-worker@5829c68e-ace0-472f-b5f3-6ccfc7903dd5", "workflowtype": "main.helloworldworkflow", "workflowid": "8acbda3c-d240-4f27-8388-97c866b8bfb5", "runid": "4b91341f-056f-4f0b-ab64-83bcc3a53e5a", "result": "hello world!"}\n\n\ncongratulations! 
you just launched your very first cadence workflow from scratch\n\n\n# (optional) step 4. monitor cadence workflow with cadence web ui\n\nwhen you start the cadence backend server, it also automatically starts a front end portal for your workflow. open you browser and go to\n\nhttp://localhost:8088\n\nyou may see a dashboard below\n\ntype the domain you used for the tutorial, in this case, we type test-domain and hit enter. then you can see a complete history of the workflows you have triggered associated to this domain.\n\n\n# what is next\n\nnow you have completed the tutorials. you can continue to explore the key concepts in cadence, and also how to use them with go client\n\nfor complete, ready to build samples covering all the key cadence concepts go to cadence-samples for more examples.\n\nyou can also review cadence-client and go-docs for more documentation.',charsets:{}},{title:"Server Installation",frontmatter:{layout:"default",title:"Server Installation",permalink:"/docs/get-started/installation",readingShow:"top"},regularPath:"/docs/01-get-started/01-server-installation.html",relativePath:"docs/01-get-started/01-server-installation.md",key:"v-4bb753c4",path:"/docs/get-started/installation/",headers:[{level:2,title:"0. Prerequisite - Install docker",slug:"_0-prerequisite-install-docker",normalizedTitle:"0. prerequisite - install docker",charIndex:322},{level:2,title:"1. Run Cadence Server Using Docker Compose",slug:"_1-run-cadence-server-using-docker-compose",normalizedTitle:"1. run cadence server using docker compose",charIndex:461},{level:2,title:"2. Register a Domain Using the CLI",slug:"_2-register-a-domain-using-the-cli",normalizedTitle:"2. register a domain using the cli",charIndex:849},{level:2,title:"What's Next",slug:"what-s-next",normalizedTitle:"what's next",charIndex:1771},{level:2,title:"Troubleshooting",slug:"troubleshooting",normalizedTitle:"troubleshooting",charIndex:2055}],codeSwitcherOptions:{},headersStr:"0. Prerequisite - Install docker 1. Run Cadence Server Using Docker Compose 2. Register a Domain Using the CLI What's Next Troubleshooting",content:"# Install Cadence Service Locally\n\nTo get started with Cadence, you need to set up three components successfully.\n\n * A Cadence server hosting dependencies that Cadence relies on, such as Cassandra, Elasticsearch, etc.\n * A Cadence domain for your workflow application\n * A Cadence worker service hosting your workflows\n\n\n# 0. Prerequisite - Install docker\n\nFollow the Docker installation instructions found here: https://docs.docker.com/engine/installation/\n\n\n# 1. Run Cadence Server Using Docker Compose\n\nDownload the Cadence docker-compose file:\n\n\ncurl -O https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose.yml && curl -O https://raw.githubusercontent.com/uber/cadence/master/docker/prometheus/prometheus.yml\n\n\nThen start the Cadence service by running:\n\ndocker-compose up\n\n\nPlease keep this process running in the background.\n\n\n# 2. 
Register a Domain Using the CLI\n\nIn a new terminal, create a new domain called test-domain (or choose whatever name you like) by running:\n\ndocker run --network=host --rm ubercadence/cli:master --do test-domain domain register -rd 1\n\n\nCheck that the domain is indeed registered:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain domain describe\nName: test-domain\nDescription:\nOwnerEmail:\nDomainData: map[]\nStatus: REGISTERED\nRetentionInDays: 1\nEmitMetrics: false\nActiveClusterName: active\nClusters: active\nArchivalStatus: DISABLED\nBad binaries to reset:\n+-----------------+----------+------------+--------+\n| BINARY CHECKSUM | OPERATOR | START TIME | REASON |\n+-----------------+----------+------------+--------+\n+-----------------+----------+------------+--------+\n>\n\n\nPlease remember the domains you created because they will be used in your worker implementation and Cadence CLI commands.\n\n\n# What's Next\n\nSo far you've successfully finished two prerequisites to your Cadence application. The next steps are to implement a simple worker service that hosts your workflows and to run your very first hello world Cadence workflow.\n\nGo to Java HelloWorld or Golang HelloWorld.\n\n\n# Troubleshooting\n\nThere can be various reasons that docker-compose up cannot succeed:\n\n * In case of the image being too old, update the docker image by docker pull ubercadence/server:master-auto-setup and retry\n * In case of the local docker env is messed up: docker system prune --all and retry (see details about it )\n * See logs of different container:\n * If Cassandra is not able to get up: docker logs -f docker_cassandra_1\n * If Cadence is not able to get up: docker logs -f docker_cadence_1\n * If Cadence Web is not able to get up: docker logs -f docker_cadence-web_1\n\nIf the above is still not working, open an issue in Server(main) repo.",normalizedContent:"# install cadence service locally\n\nto get started with cadence, you need to set up three components successfully.\n\n * a cadence server hosting dependencies that cadence relies on such as cassandra, elastic search, etc\n * a cadence domain for you workflow application\n * a cadence worker service hosting your workflows\n\n\n# 0. prerequisite - install docker\n\nfollow the docker installation instructions found here: https://docs.docker.com/engine/installation/\n\n\n# 1. run cadence server using docker compose\n\ndownload the cadence docker-compose file:\n\n\ncurl -o https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose.yml && curl -o https://raw.githubusercontent.com/uber/cadence/master/docker/prometheus/prometheus.yml\n\n\nthen start cadence service by running:\n\ndocker-compose up\n\n\nplease keep this process running at background.\n\n\n# 2. 
register a domain using the cli\n\nin a new terminal, create a new domain called test-domain (or choose whatever name you like) by running:\n\ndocker run --network=host --rm ubercadence/cli:master --do test-domain domain register -rd 1\n\n\ncheck that the domain is indeed registered:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain domain describe\nname: test-domain\ndescription:\nowneremail:\ndomaindata: map[]\nstatus: registered\nretentionindays: 1\nemitmetrics: false\nactiveclustername: active\nclusters: active\narchivalstatus: disabled\nbad binaries to reset:\n+-----------------+----------+------------+--------+\n| binary checksum | operator | start time | reason |\n+-----------------+----------+------------+--------+\n+-----------------+----------+------------+--------+\n>\n\n\nplease remember the domains you created because they will be used in your worker implementation and cadence cli commands.\n\n\n# what's next\n\nso far you've successfully finished two prerequisites to your cadence application. the next steps are to implement a simple worker service that hosts your workflows and to run your very first hello world cadence workflow.\n\ngo to java helloworld or golang helloworld.\n\n\n# troubleshooting\n\nthere can be various reasons that docker-compose up cannot succeed:\n\n * in case of the image being too old, update the docker image by docker pull ubercadence/server:master-auto-setup and retry\n * in case of the local docker env is messed up: docker system prune --all and retry (see details about it )\n * see logs of different container:\n * if cassandra is not able to get up: docker logs -f docker_cassandra_1\n * if cadence is not able to get up: docker logs -f docker_cadence_1\n * if cadence web is not able to get up: docker logs -f docker_cadence-web_1\n\nif the above is still not working, open an issue in server(main) repo.",charsets:{cjk:!0}},{title:"Video Tutorials",frontmatter:{layout:"default",title:"Video Tutorials",permalink:"/docs/get-started/video-tutorials",readingShow:"top"},regularPath:"/docs/01-get-started/04-video-tutorials.html",relativePath:"docs/01-get-started/04-video-tutorials.md",key:"v-696d6f80",path:"/docs/get-started/video-tutorials/",headers:[{level:2,title:"HelloWorld",slug:"helloworld",normalizedTitle:"helloworld",charIndex:88}],codeSwitcherOptions:{},headersStr:"HelloWorld",content:"# Overview\n\nAn Introduction to the Cadence programming model and value proposition.\n\n\n# HelloWorld\n\nA step-by-step video tutorial about how to install and run HellowWorld(Java).\n\n",normalizedContent:"# overview\n\nan introduction to the cadence programming model and value proposition.\n\n\n# helloworld\n\na step-by-step video tutorial about how to install and run hellowworld(java).\n\n",charsets:{}},{title:"Overview",frontmatter:{layout:"default",title:"Overview",description:"A large number of use cases span beyond a single request-reply, require tracking of a complex state, respond to asynchronous events, and communicate to external unreliable dependencies.",permalink:"/docs/get-started/",readingShow:"top"},regularPath:"/docs/01-get-started/",relativePath:"docs/01-get-started/index.md",key:"v-5ab4294a",path:"/docs/get-started/",headers:[{level:2,title:"What's Next",slug:"what-s-next",normalizedTitle:"what's next",charIndex:2059}],codeSwitcherOptions:{},headersStr:"What's Next",content:"# Overview\n\nA large number of use cases span beyond a single request-reply, require tracking of a complex state, respond to asynchronous , and 
communicate to external unreliable dependencies. The usual approach to building such applications is a hodgepodge of stateless services, databases, cron jobs, and queuing systems. This negatively impacts the developer productivity as most of the code is dedicated to plumbing, obscuring the actual business logic behind a myriad of low-level details. Such systems frequently have availability problems as it is hard to keep all the components healthy.\n\nThe Cadence solution is a fault-oblivious stateful programming model that obscures most of the complexities of building scalable distributed applications. In essence, Cadence provides a durable virtual memory that is not linked to a specific process, and preserves the full application state, including function stacks, with local variables across all sorts of host and software failures. This allows you to write code using the full power of a programming language while Cadence takes care of durability, availability, and scalability of the application.\n\nCadence consists of a programming framework (or client library) and a managed service (or backend). The framework enables developers to author and coordinate in familiar languages (Go and Java are supported officially, and Python and Ruby by the community).\n\nYou can also use iWF as a DSL framework on top of Cadence.\n\nThe Cadence backend service is stateless and relies on a persistent store. Currently, Cassandra and MySQL/Postgres storages are supported. An adapter to any other database that provides multi-row single shard transactions can be added. There are different service deployment models. At Uber, our team operates multitenant clusters that are shared by hundreds of applications. See service topology to understand the overall architecture. The GitHub repo for the Cadence server is uber/cadence. The docker image for the Cadence server is available on Docker Hub at ubercadence/server.\n\n\n# What's Next\n\nLet's try with some sample workflows. To start with, go to server installation to install cadence locally, and run a HelloWorld sample with Java or Golang.\n\nWhen you have any trouble with the instructions, you can watch the video tutorials, and reach out to us on Slack Channel, or raise any question on StackOverflow or open an Github issue.",normalizedContent:"# overview\n\na large number of use cases span beyond a single request-reply, require tracking of a complex state, respond to asynchronous , and communicate to external unreliable dependencies. the usual approach to building such applications is a hodgepodge of stateless services, databases, cron jobs, and queuing systems. this negatively impacts the developer productivity as most of the code is dedicated to plumbing, obscuring the actual business logic behind a myriad of low-level details. such systems frequently have availability problems as it is hard to keep all the components healthy.\n\nthe cadence solution is a fault-oblivious stateful programming model that obscures most of the complexities of building scalable distributed applications. in essence, cadence provides a durable virtual memory that is not linked to a specific process, and preserves the full application state, including function stacks, with local variables across all sorts of host and software failures. 
this allows you to write code using the full power of a programming language while cadence takes care of durability, availability, and scalability of the application.\n\ncadence consists of a programming framework (or client library) and a managed service (or backend). the framework enables developers to author and coordinate in familiar languages (go and java are supported officially, and python and ruby by the community).\n\nyou can also use iwf as a dsl framework on top of cadence.\n\nthe cadence backend service is stateless and relies on a persistent store. currently, cassandra and mysql/postgres storages are supported. an adapter to any other database that provides multi-row single shard transactions can be added. there are different service deployment models. at uber, our team operates multitenant clusters that are shared by hundreds of applications. see service topology to understand the overall architecture. the github repo for the cadence server is uber/cadence. the docker image for the cadence server is available on docker hub at ubercadence/server.\n\n\n# what's next\n\nlet's try with some sample workflows. to start with, go to server installation to install cadence locally, and run a helloworld sample with java or golang.\n\nwhen you have any trouble with the instructions, you can watch the video tutorials, and reach out to us on slack channel, or raise any question on stackoverflow or open an github issue.",charsets:{}},{title:"Periodic execution",frontmatter:{layout:"default",title:"Periodic execution",permalink:"/docs/use-cases/periodic-execution",readingShow:"top"},regularPath:"/docs/02-use-cases/01-periodic-execution.html",relativePath:"docs/02-use-cases/01-periodic-execution.md",key:"v-c2f362bc",path:"/docs/use-cases/periodic-execution/",codeSwitcherOptions:{},headersStr:null,content:"# Periodic execution (aka Distributed Cron)\n\nPeriodic execution, frequently referred to as distributed cron, is when you execute business logic periodically. The advantage of Cadence for these scenarios is that it guarantees execution, sophisticated error handling, retry policies, and visibility into execution history.\n\nAnother important dimension is scale. Some use cases require periodic execution for a large number of entities. At Uber, there are applications that create periodic per customer. Imagine 100+ million parallel cron jobs that don't require a separate batch processing framework.\n\nPeriodic execution is often part of other use cases. For example, once a month report generation is a periodic service orchestration. Or an event-driven that accumulates loyalty points for a customer and applies those points once a month.\n\nThere are many real-world examples of Cadence periodic executions. Such as the following:\n\n * An Uber backend service that recalculates various statistics for each hex in each city once a minute.\n * Monthly Uber for Business report generation.",normalizedContent:"# periodic execution (aka distributed cron)\n\nperiodic execution, frequently referred to as distributed cron, is when you execute business logic periodically. the advantage of cadence for these scenarios is that it guarantees execution, sophisticated error handling, retry policies, and visibility into execution history.\n\nanother important dimension is scale. some use cases require periodic execution for a large number of entities. at uber, there are applications that create periodic per customer. 
imagine 100+ million parallel cron jobs that don't require a separate batch processing framework.\n\nperiodic execution is often part of other use cases. for example, once a month report generation is a periodic service orchestration. or an event-driven that accumulates loyalty points for a customer and applies those points once a month.\n\nthere are many real-world examples of cadence periodic executions. such as the following:\n\n * an uber backend service that recalculates various statistics for each hex in each city once a minute.\n * monthly uber for business report generation.",charsets:{}},{title:"Orchestration",frontmatter:{layout:"default",title:"Orchestration",permalink:"/docs/use-cases/orchestration",readingShow:"top"},regularPath:"/docs/02-use-cases/02-orchestration.html",relativePath:"docs/02-use-cases/02-orchestration.md",key:"v-d5dcd2a0",path:"/docs/use-cases/orchestration/",codeSwitcherOptions:{},headersStr:null,content:"# Microservice Orchestration and Saga\n\nIt is common that some business processes are implemented as multiple microservice calls. And the implementation must guarantee that all of the calls must eventually succeed even with the occurrence of prolonged downstream service failures. In some cases, instead of trying to complete the process by retrying for a long time, compensation rollback logic should be executed. Saga Pattern is one way to standardize on compensation APIs.\n\nCadence is a perfect fit for such scenarios. It guarantees that code eventually completes, has built-in support for unlimited exponential retries and simplifies coding of the compensation logic. It also gives full visibility into the state of each , in contrast to an orchestration based on queues where getting a current status of each individual request is practically impossible.\n\nFollowing are some real-world examples of Cadence-based service orchestration scenarios:\n\n * Using Cadence workflows to spin up Kubernetes (Banzai Cloud Fork)\n * Improving the User Experience with Uber’s Customer Obsession Ticket Routing Workflow and Orchestration Engine\n * Enabling Faster Financial Partnership Integrations Using Cadence",normalizedContent:"# microservice orchestration and saga\n\nit is common that some business processes are implemented as multiple microservice calls. and the implementation must guarantee that all of the calls must eventually succeed even with the occurrence of prolonged downstream service failures. in some cases, instead of trying to complete the process by retrying for a long time, compensation rollback logic should be executed. saga pattern is one way to standardize on compensation apis.\n\ncadence is a perfect fit for such scenarios. it guarantees that code eventually completes, has built-in support for unlimited exponential retries and simplifies coding of the compensation logic. 
it also gives full visibility into the state of each , in contrast to an orchestration based on queues where getting a current status of each individual request is practically impossible.\n\nfollowing are some real-world examples of cadence-based service orchestration scenarios:\n\n * using cadence workflows to spin up kubernetes (banzai cloud fork)\n * improving the user experience with uber’s customer obsession ticket routing workflow and orchestration engine\n * enabling faster financial partnership integrations using cadence",charsets:{}},{title:"Event driven application",frontmatter:{layout:"default",title:"Event driven application",permalink:"/docs/use-cases/event-driven",readingShow:"top"},regularPath:"/docs/02-use-cases/04-event-driven.html",relativePath:"docs/02-use-cases/04-event-driven.md",key:"v-7a5c92a2",path:"/docs/use-cases/event-driven/",codeSwitcherOptions:{},headersStr:null,content:"# Event driven application\n\nMany applications listen to multiple sources, update the state of correspondent business entities, and have to execute actions if some state is reached. Cadence is a good fit for many of these. It has direct support for asynchronous (aka ), has a simple programming model that obscures a lot of complexity around state persistence, and ensures external action execution through built-in retries.\n\nReal-world examples:\n\n * Fraud detection where reacts to generated by consumer behavior\n * Customer loyalty program where the accumulates reward points and applies them when requested",normalizedContent:"# event driven application\n\nmany applications listen to multiple sources, update the state of correspondent business entities, and have to execute actions if some state is reached. cadence is a good fit for many of these. it has direct support for asynchronous (aka ), has a simple programming model that obscures a lot of complexity around state persistence, and ensures external action execution through built-in retries.\n\nreal-world examples:\n\n * fraud detection where reacts to generated by consumer behavior\n * customer loyalty program where the accumulates reward points and applies them when requested",charsets:{}},{title:"Polling",frontmatter:{layout:"default",title:"Polling",permalink:"/docs/use-cases/polling",readingShow:"top"},regularPath:"/docs/02-use-cases/03-polling.html",relativePath:"docs/02-use-cases/03-polling.md",key:"v-88def7ac",path:"/docs/use-cases/polling/",codeSwitcherOptions:{},headersStr:null,content:"# Polling\n\nPolling is executing a periodic action checking for a state change. Examples are pinging a host, calling a REST API, or listing an Amazon S3 bucket for newly uploaded files.\n\nCadence support for long running and unlimited retries makes it a good fit.\n\nSome real-world use cases:\n\n * Network, host and service monitoring\n * Processing files uploaded to FTP or S3\n * Cadence Polling Cookbook by Instaclustr: Polling an external API for a specific resource to become available:",normalizedContent:"# polling\n\npolling is executing a periodic action checking for a state change. 
examples are pinging a host, calling a rest api, or listing an amazon s3 bucket for newly uploaded files.\n\ncadence support for long running and unlimited retries makes it a good fit.\n\nsome real-world use cases:\n\n * network, host and service monitoring\n * processing files uploaded to ftp or s3\n * cadence polling cookbook by instaclustr: polling an external api for a specific resource to become available:",charsets:{}},{title:"Storage scan",frontmatter:{layout:"default",title:"Storage scan",permalink:"/docs/use-cases/partitioned-scan",readingShow:"top"},regularPath:"/docs/02-use-cases/05-partitioned-scan.html",relativePath:"docs/02-use-cases/05-partitioned-scan.md",key:"v-1bc7fd02",path:"/docs/use-cases/partitioned-scan/",codeSwitcherOptions:{},headersStr:null,content:"# Storage scan\n\nIt is common to have large data sets partitioned across a large number of hosts or databases, or having billions of files in an Amazon S3 bucket. Cadence is an ideal solution for implementing the full scan of such data in a scalable and resilient way. The standard pattern is to run an (or multiple parallel for partitioned data sets) that performs the scan and heartbeats its progress back to Cadence. In the case of a host failure, the is retried on a different host and continues execution from the last reported progress.\n\nA real-world example:\n\n * Cadence internal system that performs periodic scan of all records",normalizedContent:"# storage scan\n\nit is common to have large data sets partitioned across a large number of hosts or databases, or having billions of files in an amazon s3 bucket. cadence is an ideal solution for implementing the full scan of such data in a scalable and resilient way. the standard pattern is to run an (or multiple parallel for partitioned data sets) that performs the scan and heartbeats its progress back to cadence. in the case of a host failure, the is retried on a different host and continues execution from the last reported progress.\n\na real-world example:\n\n * cadence internal system that performs periodic scan of all records",charsets:{}},{title:"Batch job",frontmatter:{layout:"default",title:"Batch job",permalink:"/docs/use-cases/batch-job",readingShow:"top"},regularPath:"/docs/02-use-cases/06-batch-job.html",relativePath:"docs/02-use-cases/06-batch-job.md",key:"v-a14b6054",path:"/docs/use-cases/batch-job/",codeSwitcherOptions:{},headersStr:null,content:"# Batch job\n\nA lot of batch jobs are not pure data manipulation programs. For those, the existing big data frameworks are the best fit. But if processing a record requires external API calls that might fail and potentially take a long time, Cadence might be preferable.\n\nOne of our internal Uber customer uses Cadence for end of month statement generation. Each statement requires calls to multiple microservices and some statements can be really large. Cadence was chosen because it provides hard guarantees around durability of the financial data and seamlessly deals with long running operations, retries, and intermittent failures.",normalizedContent:"# batch job\n\na lot of batch jobs are not pure data manipulation programs. for those, the existing big data frameworks are the best fit. but if processing a record requires external api calls that might fail and potentially take a long time, cadence might be preferable.\n\none of our internal uber customer uses cadence for end of month statement generation. each statement requires calls to multiple microservices and some statements can be really large. 
cadence was chosen because it provides hard guarantees around durability of the financial data and seamlessly deals with long running operations, retries, and intermittent failures.",charsets:{}},{title:"Deployment",frontmatter:{layout:"default",title:"Deployment",permalink:"/docs/use-cases/deployment",readingShow:"top"},regularPath:"/docs/02-use-cases/08-deployment.html",relativePath:"docs/02-use-cases/08-deployment.md",key:"v-c99e5abc",path:"/docs/use-cases/deployment/",codeSwitcherOptions:{},headersStr:null,content:"# CI/CD and Deployment\n\nImplementing CI/CD pipelines and deployment of applications to containers or virtual or physical machines is a non-trivial process. Its business logic has to deal with complex requirements around rolling upgrades, canary deployments, and rollbacks. Cadence is a perfect platform for building a deployment solution because it provides all the necessary guarantees and abstractions allowing developers to focus on the business logic.\n\nExample production systems:\n\n * Uber internal deployment infrastructure\n * Update push to IoT devices",normalizedContent:"# ci/cd and deployment\n\nimplementing ci/cd pipelines and deployment of applications to containers or virtual or physical machines is a non-trivial process. its business logic has to deal with complex requirements around rolling upgrades, canary deployments, and rollbacks. cadence is a perfect platform for building a deployment solution because it provides all the necessary guarantees and abstractions allowing developers to focus on the business logic.\n\nexample production systems:\n\n * uber internal deployment infrastructure\n * update push to iot devices",charsets:{}},{title:"Infrastructure provisioning",frontmatter:{layout:"default",title:"Infrastructure provisioning",permalink:"/docs/use-cases/provisioning",readingShow:"top"},regularPath:"/docs/02-use-cases/07-provisioning.html",relativePath:"docs/02-use-cases/07-provisioning.md",key:"v-28bf3ec2",path:"/docs/use-cases/provisioning/",codeSwitcherOptions:{},headersStr:null,content:"# Infrastructure provisioning\n\nProvisioning a new datacenter or a pool of machines in a public cloud is a potentially long running operation with a lot of possibilities for intermittent failures. The scale is also a concern when tens or even hundreds of thousands of resources should be provisioned and configured. One useful feature for provisioning scenarios is Cadence support for routing execution to a specific process or host.\n\nA lot of operations require some sort of locking to ensure that no more than one mutation is executed on a resource at a time. Cadence provides strong guarantees of uniqueness by business ID. This can be used to implement such locking behavior in a fault tolerant and scalable manner.\n\nSome real-world use cases:\n\n * Using Cadence workflows to spin up Kubernetes, by Banzai Cloud\n * Using Cadence to orchestrate cluster life cycle in HashiCorp Consul, by HashiCorp",normalizedContent:"# infrastructure provisioning\n\nprovisioning a new datacenter or a pool of machines in a public cloud is a potentially long running operation with a lot of possibilities for intermittent failures. the scale is also a concern when tens or even hundreds of thousands of resources should be provisioned and configured. one useful feature for provisioning scenarios is cadence support for routing execution to a specific process or host.\n\na lot of operations require some sort of locking to ensure that no more than one mutation is executed on a resource at a time. 
cadence provides strong guarantees of uniqueness by business id. this can be used to implement such locking behavior in a fault tolerant and scalable manner.\n\nsome real-world use cases:\n\n * using cadence workflows to spin up kubernetes, by banzai cloud\n * using cadence to orchestrate cluster life cycle in hashicorp consul, by hashicorp",charsets:{}},{title:"Operational management",frontmatter:{layout:"default",title:"Operational management",permalink:"/docs/use-cases/operational-management",readingShow:"top"},regularPath:"/docs/02-use-cases/09-operational-management.html",relativePath:"docs/02-use-cases/09-operational-management.md",key:"v-36ed9422",path:"/docs/use-cases/operational-management/",codeSwitcherOptions:{},headersStr:null,content:"# Operational management\n\nImagine that you have to create a self operating database similar to Amazon RDS. Cadence is used in multiple projects that automate managing and automatic recovery of various products like MySQL, Elasticsearch and Apache Cassandra.\n\nSuch systems are usually a mixture of different use cases. They need to monitor the status of resources using polling. They have to execute orchestration API calls to administrative interfaces of a database. They have to provision new hardware or Docker instances if necessary. They need to push configuration updates and perform other actions like backups periodically.",normalizedContent:"# operational management\n\nimagine that you have to create a self operating database similar to amazon rds. cadence is used in multiple projects that automate managing and automatic recovery of various products like mysql, elasticsearch and apache cassandra.\n\nsuch systems are usually a mixture of different use cases. they need to monitor the status of resources using polling. they have to execute orchestration api calls to administrative interfaces of a database. they have to provision new hardware or docker instances if necessary. they need to push configuration updates and perform other actions like backups periodically.",charsets:{}},{title:"Interactive application",frontmatter:{layout:"default",title:"Interactive application",permalink:"/docs/use-cases/interactive",readingShow:"top"},regularPath:"/docs/02-use-cases/10-interactive.html",relativePath:"docs/02-use-cases/10-interactive.md",key:"v-6b66fa18",path:"/docs/use-cases/interactive/",codeSwitcherOptions:{},headersStr:null,content:"# Interactive application\n\nCadence is performant and scalable enough to support interactive applications. It can be used to track UI session state and at the same time execute background operations. For example, while placing an order a customer might need to go through several screens while a background evaluates the customer for fraudulent .",normalizedContent:"# interactive application\n\ncadence is performant and scalable enough to support interactive applications. it can be used to track ui session state and at the same time execute background operations. 
for example, while placing an order a customer might need to go through several screens while a background evaluates the customer for fraudulent .",charsets:{}},{title:"DSL workflows",frontmatter:{layout:"default",title:"DSL workflows",permalink:"/docs/use-cases/dsl",readingShow:"top"},regularPath:"/docs/02-use-cases/11-dsl.html",relativePath:"docs/02-use-cases/11-dsl.md",key:"v-611b8c3c",path:"/docs/use-cases/dsl/",codeSwitcherOptions:{},headersStr:null,content:'# DSL workflows\n\nCadence supports implementing business logic directly in programming languages like Java and Go. But there are cases when using a domain-specific language is more appropriate. Or there might be a legacy system that uses some form of DSL for process definition but it is not operationally stable and scalable. This also applies to more recent systems like Apache Airflow, various BPMN engines and AWS Step Functions.\n\nAn application that interprets the DSL definition can be written using the Cadence SDK. It automatically becomes highly fault tolerant, scalable, and durable when running on Cadence. Cadence has been used to deprecate several Uber internal DSL engines. The customers continue to use existing process definitions, but Cadence is used as an execution engine.\n\nThere are multiple benefits of unifying all company engines on top of Cadence. The most obvious one is that it is more efficient to support a single product instead of many. It is also difficult to beat the scalability and stability of Cadence which each of the integrations it comes with. Additionally, the ability to share across "engines" might be a huge benefit in some cases.',normalizedContent:'# dsl workflows\n\ncadence supports implementing business logic directly in programming languages like java and go. but there are cases when using a domain-specific language is more appropriate. or there might be a legacy system that uses some form of dsl for process definition but it is not operationally stable and scalable. this also applies to more recent systems like apache airflow, various bpmn engines and aws step functions.\n\nan application that interprets the dsl definition can be written using the cadence sdk. it automatically becomes highly fault tolerant, scalable, and durable when running on cadence. cadence has been used to deprecate several uber internal dsl engines. the customers continue to use existing process definitions, but cadence is used as an execution engine.\n\nthere are multiple benefits of unifying all company engines on top of cadence. the most obvious one is that it is more efficient to support a single product instead of many. it is also difficult to beat the scalability and stability of cadence which each of the integrations it comes with. additionally, the ability to share across "engines" might be a huge benefit in some cases.',charsets:{}},{title:"Introduction",frontmatter:{layout:"default",title:"Introduction",permalink:"/docs/use-cases/",readingShow:"top"},regularPath:"/docs/02-use-cases/",relativePath:"docs/02-use-cases/index.md",key:"v-13d0c1ca",path:"/docs/use-cases/",codeSwitcherOptions:{},headersStr:null,content:'# Use cases\n\nAs Cadence developers, we face a difficult non-technical problem: How to position and describe the Cadence platform.\n\nWe call it workflow. But when most people hear the word "workflow" they think about low-code and UIs. While these might be useful for non technical users, they frequently bring more pain than value to software engineers. 
Most UIs and low-code DSLs are awesome for "hello world" demo applications, but any diagram with 100+ elements or a few thousand lines of JSON DSL is completely impractical. So positioning Cadence as a is not ideal as it turns away developers that would enjoy its code-only approach.\n\nWe call it orchestrator. But this term is pretty narrow and turns away customers that want to implement business process automation solutions.\n\nWe call it durable function platform. It is technically a correct term. But most developers outside of the Microsoft ecosystem have never heard of Durable Functions.\n\nWe believe that problem in naming comes from the fact that Cadence is indeed a new way to write distributed applications. It is generic enough that it can be applied to practically any use case that goes beyond a single request reply. It can be used to build applications that are in traditional areas of or orchestration platforms. But it is also huge developer productivity boost for multiple use cases that traditionally rely on databases and/or queues.\n\nThis section represents a far from complete list of use cases where Cadence is a good fit. All of them have been used by real production services inside and outside of Uber.\n\nDon\'t think of this list as exhaustive. It is common to employ multiple use types in a single application. For example, an operational management use case might need periodic execution, service orchestration, polling, driven, as well as interactive parts.',normalizedContent:'# use cases\n\nas cadence developers, we face a difficult non-technical problem: how to position and describe the cadence platform.\n\nwe call it workflow. but when most people hear the word "workflow" they think about low-code and uis. while these might be useful for non technical users, they frequently bring more pain than value to software engineers. most uis and low-code dsls are awesome for "hello world" demo applications, but any diagram with 100+ elements or a few thousand lines of json dsl is completely impractical. so positioning cadence as a is not ideal as it turns away developers that would enjoy its code-only approach.\n\nwe call it orchestrator. but this term is pretty narrow and turns away customers that want to implement business process automation solutions.\n\nwe call it durable function platform. it is technically a correct term. but most developers outside of the microsoft ecosystem have never heard of durable functions.\n\nwe believe that problem in naming comes from the fact that cadence is indeed a new way to write distributed applications. it is generic enough that it can be applied to practically any use case that goes beyond a single request reply. it can be used to build applications that are in traditional areas of or orchestration platforms. but it is also huge developer productivity boost for multiple use cases that traditionally rely on databases and/or queues.\n\nthis section represents a far from complete list of use cases where cadence is a good fit. all of them have been used by real production services inside and outside of uber.\n\ndon\'t think of this list as exhaustive. it is common to employ multiple use types in a single application. 
for example, an operational management use case might need periodic execution, service orchestration, polling, driven, as well as interactive parts.',charsets:{}},{title:"Big data and ML",frontmatter:{layout:"default",title:"Big data and ML",permalink:"/docs/use-cases/big-ml",readingShow:"top"},regularPath:"/docs/02-use-cases/12-big-ml.html",relativePath:"docs/02-use-cases/12-big-ml.md",key:"v-163bae3c",path:"/docs/use-cases/big-ml/",codeSwitcherOptions:{},headersStr:null,content:"# Big data and ML\n\nA lot of companies build custom ETL and ML training and deployment solutions. Cadence is a good fit for a control plane for such applications.\n\nOne important feature of Cadence is its ability to route execution to a specific process or host. It is useful to control how ML models and other large files are allocated to hosts. For example, if an ML model is partitioned by city, the requests should be routed to hosts that contain the corresponding city model.",normalizedContent:"# big data and ml\n\na lot of companies build custom etl and ml training and deployment solutions. cadence is a good fit for a control plane for such applications.\n\none important feature of cadence is its ability to route execution to a specific process or host. it is useful to control how ml models and other large files are allocated to hosts. for example, if an ml model is partitioned by city, the requests should be routed to hosts that contain the corresponding city model.",charsets:{}},{title:"Workflows",frontmatter:{layout:"default",title:"Workflows",permalink:"/docs/concepts/workflows",readingShow:"top"},regularPath:"/docs/03-concepts/01-workflows.html",relativePath:"docs/03-concepts/01-workflows.md",key:"v-8d905b7c",path:"/docs/concepts/workflows/",headers:[{level:2,title:"Overview",slug:"overview",normalizedTitle:"overview",charIndex:45},{level:2,title:"Example",slug:"example",normalizedTitle:"example",charIndex:347},{level:2,title:"State Recovery and Determinism",slug:"state-recovery-and-determinism",normalizedTitle:"state recovery and determinism",charIndex:7821},{level:2,title:"ID Uniqueness",slug:"id-uniqueness",normalizedTitle:"id uniqueness",charIndex:8556},{level:2,title:"Child Workflow",slug:"child-workflow",normalizedTitle:"child workflow",charIndex:9681},{level:2,title:"Workflow Retries",slug:"workflow-retries",normalizedTitle:"workflow retries",charIndex:11254},{level:2,title:"How does workflow run",slug:"how-does-workflow-run",normalizedTitle:"how does workflow run",charIndex:12798}],codeSwitcherOptions:{},headersStr:"Overview Example State Recovery and Determinism ID Uniqueness Child Workflow Workflow Retries How does workflow run",content:"# Fault-oblivious stateful workflow code\n\n\n# Overview\n\nThe Cadence core abstraction is a fault-oblivious stateful workflow. The state of the workflow code, including local variables and threads it creates, is immune to process and Cadence service failures. This is a very powerful concept as it encapsulates state, processing threads, durable timers, and event handlers.
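To make this concrete before the full example below, here is a minimal Go sketch (illustrative names, not part of the documented sample): the counter is a plain local variable and the 30-day sleep is a durable timer, and both survive process restarts because Cadence persists the workflow state.

```go
package example

import (
	"time"

	"go.uber.org/cadence/workflow"
)

// chargedPeriods is ordinary local state; Cadence makes it durable.
// If the worker process dies mid-sleep, another worker resumes this
// workflow with the counter and the remaining timer intact.
func billingLoop(ctx workflow.Context, periods int) error {
	chargedPeriods := 0
	for chargedPeriods < periods {
		// A durable timer: sleeping for a month is a normal operation.
		if err := workflow.Sleep(ctx, 30*24*time.Hour); err != nil {
			return err
		}
		chargedPeriods++
	}
	return nil
}
```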
\n\n\n# Example\n\nLet's look at a use case. A customer signs up for an application with a trial period. After the period, if the customer has not cancelled, they should be charged once a month for the renewal. The customer has to be notified by email about the charges and should be able to cancel the subscription at any time.\n\nThe business logic of this use case is not very complicated and can be expressed in a few dozen lines of code. But any practical implementation has to ensure that the business process is fault tolerant and scalable. There are various ways to approach the design of such a system.\n\nOne approach is to center it around a database. An application process would periodically scan database tables for customers in specific states, execute necessary actions, and update the state to reflect that. While feasible, this approach has various drawbacks. The most obvious is that the state machine of the customer state quickly becomes extremely complicated. For example, charging a credit card or sending emails can fail due to downstream system unavailability. The failed calls might need to be retried for a long time, ideally using an exponential retry policy. These calls should be throttled so as not to overload external systems. There should be support for poison pills to avoid blocking the whole process if a single customer record cannot be processed for whatever reason. The database-based approach also usually has performance problems. Databases are not efficient for scenarios that require constant polling for records in a specific state.\n\nAnother commonly employed approach is to use a timer service and queues. Any update is pushed to a queue, and then a worker that consumes from it updates a database and possibly pushes more messages into downstream queues. For operations that require scheduling, an external timer service can be used. This approach usually scales much better because a database is not constantly polled for changes. But it makes the programming model more complex and error-prone, as there is usually no transactional update between a queuing system and a database.\n\nWith Cadence, the entire logic can be encapsulated in a simple durable function that directly implements the business logic. Because the function is stateful, the implementer doesn't need to employ any additional systems to ensure durability and fault tolerance.\n\nHere is an example that implements the subscription management use case. It is in Java, but Go is also supported. The Python and .NET libraries are under active development.\n\n// This SubscriptionWorkflow interface is an example of defining a workflow in Cadence\npublic interface SubscriptionWorkflow {\n @WorkflowMethod\n void manageSubscription(Customer customer);\n @SignalMethod\n void cancelSubscription();\n @SignalMethod \n void updateBillingPeriodChargeAmount(int billingPeriodChargeAmount);\n @QueryMethod \n String queryCustomerId();\n @QueryMethod \n int queryBillingPeriodNumber();\n @QueryMethod \n int queryBillingPeriodChargeAmount();\n}\n\n// Workflow implementation is independent from the interface. That way, applications that start/signal/query workflows only need to know the interface\npublic class SubscriptionWorkflowImpl implements SubscriptionWorkflow {\n\n private int billingPeriodNum;\n private boolean subscriptionCancelled;\n private Customer customer;\n \n private final SubscriptionActivities activities =\n Workflow.newActivityStub(SubscriptionActivities.class);\n\n // This manageSubscription function is an example of a workflow using Cadence\n @Override\n public void manageSubscription(Customer customer) {\n // Save the customer to a class property so that it can be used by other methods like Query/Signal\n this.customer = customer;\n\n // sendWelcomeEmail is an activity in Cadence. It is implemented in user code and Cadence executes this activity on a worker node when needed.\n activities.sendWelcomeEmail(customer);\n\n // for this example, there are a fixed number of periods in the subscription\n // Cadence supports indefinitely running workflows, but some advanced techniques are needed\n while (billingPeriodNum < customer.getSubscription().getPeriodsInSubscription()) {\n\n // Workflow.await tells Cadence to pause the workflow at this stage (saving its state to the database)\n // Execution restarts when the billing period time has passed or the subscriptionCancelled event is received, whichever comes first\n Workflow.await(customer.getSubscription().getBillingPeriod(), () -> subscriptionCancelled);\n\n if (subscriptionCancelled) {\n activities.sendCancellationEmailDuringActiveSubscription(customer);\n break;\n }\n \n // chargeCustomerForBillingPeriod is another activity\n // Cadence will automatically handle issues such as your billing service being unavailable at the time\n // this activity is invoked\n activities.chargeCustomerForBillingPeriod(customer, billingPeriodNum);\n\n billingPeriodNum++;\n }\n\n if (!subscriptionCancelled) {\n activities.sendSubscriptionOverEmail(customer);\n }\n \n // the workflow is finished once this function returns\n }\n\n @Override\n public void cancelSubscription() {\n subscriptionCancelled = true;\n }\n\n @Override\n public void updateBillingPeriodChargeAmount(int billingPeriodChargeAmount) {\n customer.getSubscription().setBillingPeriodCharge(billingPeriodChargeAmount);\n }\n\n @Override\n public String queryCustomerId() {\n return customer.getId();\n }\n\n @Override\n public int queryBillingPeriodNumber() {\n return billingPeriodNum;\n }\n\n @Override\n public int queryBillingPeriodChargeAmount() {\n return customer.getSubscription().getBillingPeriodCharge();\n }\n}\n\n
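For readers following along in Go, the @SignalMethod and @QueryMethod handlers above map onto explicit Go client APIs; a minimal sketch (illustrative names, not part of the Java sample):

```go
package example

import "go.uber.org/cadence/workflow"

// manageSubscription shows the Go-side equivalents of the Java handlers:
// a named signal channel and a query handler registered from inside the
// workflow function.
func manageSubscription(ctx workflow.Context, customerID string) error {
	billingPeriodNum := 0
	cancelled := false

	// Equivalent of @QueryMethod queryBillingPeriodNumber.
	if err := workflow.SetQueryHandler(ctx, "billingPeriodNumber", func() (int, error) {
		return billingPeriodNum, nil
	}); err != nil {
		return err
	}

	// Equivalent of @SignalMethod cancelSubscription: drain the signal
	// channel in a background workflow goroutine.
	cancelCh := workflow.GetSignalChannel(ctx, "cancelSubscription")
	workflow.Go(ctx, func(ctx workflow.Context) {
		cancelCh.Receive(ctx, nil)
		cancelled = true
	})

	// ... the billing loop from the Java example would go here ...
	_ = cancelled
	return nil
}
```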
Again, note that this code directly implements the business logic. If any of the invoked operations (aka activities) takes a long time, the code is not going to change. It is okay to block on chargeCustomerForBillingPeriod for a day if the downstream processing service is down that long. In the same way, a blocking sleep for a billing period like 30 days is a normal operation inside the workflow code.\n\nCadence has practically no scalability limits on the number of open workflow instances. So even if your site has hundreds of millions of consumers, the above code is not going to change.\n\nThe question most commonly asked by developers learning Cadence is \"How do I handle worker process failure/restart in my workflow\"? The answer is that you do not. The workflow code is completely oblivious to any failures and downtime of workers or even the Cadence service itself. As soon as they are recovered and the workflow needs to handle some event, like a timer firing or an activity completion, the current state of the workflow is fully restored and the execution is continued. The only reason for a workflow failure is the business code throwing an exception, not underlying infrastructure outages.\n\nAnother commonly asked question is whether a worker can handle more workflow instances than its cache size or the number of threads it can support. The answer is that a workflow, when in a blocked state, can be safely removed from a worker. Later it can be resurrected on a different or the same worker when the need (in the form of an external event) arises. So a single worker can handle millions of open workflow executions, assuming it can handle the update rate.\n\n\n# State Recovery and Determinism\n\nThe workflow state recovery utilizes event sourcing, which puts a few restrictions on how the code is written. The main restriction is that the workflow code must be deterministic, which means that it must produce exactly the same result if executed multiple times. This rules out any external API calls from the workflow code, as external calls can fail intermittently or change their output at any time. That is why all communication with the external world should happen through activities. For the same reason, workflow code must use Cadence APIs to get the current time, sleep, and create new threads.\n\nTo understand the Cadence execution model as well as the recovery mechanism, watch the following webcast. The animation covering recovery starts at 15:50.
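\n\nTo make the determinism requirement concrete, the sketch below shows the kind of substitution it implies; Workflow.currentTimeMillis, Workflow.sleep and Workflow.randomUUID are helpers from the Cadence Java library, while the surrounding method is purely illustrative.\n\npublic void deterministicWorkflowCode() {\n // Use Cadence-provided time instead of System.currentTimeMillis();\n // the value is recorded once in history and replayed identically on recovery\n long now = Workflow.currentTimeMillis();\n\n // Use durable sleep instead of Thread.sleep; it survives worker restarts\n Workflow.sleep(Duration.ofDays(30));\n\n // Use Workflow.randomUUID() instead of UUID.randomUUID() for the same reason\n UUID id = Workflow.randomUUID();\n\n // Any external API call belongs in an activity, never directly in workflow code\n}\n\n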
\n# ID Uniqueness\n\nA workflow ID is assigned by a client when starting a workflow. It is usually a business-level ID like a customer ID or order ID.\n\nCadence guarantees that there can be only one workflow (across all workflow types) with a given ID open per domain at any time. An attempt to start a workflow with the same ID is going to fail with a WorkflowExecutionAlreadyStarted error.\n\nAn attempt to start a workflow when there is a completed workflow with the same ID depends on the WorkflowIdReusePolicy option:\n\n * AllowDuplicateFailedOnly means that it is allowed to start a workflow only if a previously executed workflow with the same ID failed.\n * AllowDuplicate means that it is allowed to start a workflow independently of the previous completion status.\n * RejectDuplicate means that it is not allowed to start a workflow using the same workflow ID at all.\n * TerminateIfRunning means terminating the currently running workflow if one exists, and starting a new one.\n\nThe default is AllowDuplicateFailedOnly.\n\nTo distinguish multiple runs of a workflow with the same workflow ID, Cadence identifies a workflow with two IDs: Workflow ID and Run ID. Run ID is a service-assigned UUID. To be precise, any workflow execution is uniquely identified by a triple: Domain Name, Workflow ID and Run ID.
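\n\nAs an illustration, the reuse policy is one of the workflow options supplied when creating a stub; a minimal sketch assuming the Cadence Java client, with TASK_LIST and workflowClient as placeholders:\n\nWorkflowOptions options = new WorkflowOptions.Builder()\n .setTaskList(TASK_LIST)\n .setWorkflowId(\"customer-42\") // business-level workflow ID\n // Reject the start request if any workflow with this ID exists, even a completed one\n .setWorkflowIdReusePolicy(WorkflowIdReusePolicy.RejectDuplicate)\n .setExecutionStartToCloseTimeout(Duration.ofDays(30))\n .build();\nSubscriptionWorkflow workflow =\n workflowClient.newWorkflowStub(SubscriptionWorkflow.class, options);\n\n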
\n# Child Workflow\n\nA workflow can execute other workflows as child workflows. A child workflow completion or failure is reported to its parent.\n\nSome reasons to use child workflows are:\n\n * A child workflow can be hosted by a separate set of workers which don't contain the parent workflow code. So it would act as a separate service that can be invoked from multiple other workflows.\n * A single workflow has a limited event history size. For example, it cannot execute 100k activities. Child workflows can be used to partition the problem into smaller chunks. One parent with 1000 children each executing 1000 activities is 1 million executed activities.\n * A child workflow can be used to manage some resource using its ID to guarantee uniqueness. For example, a workflow that manages host upgrades can have a child workflow per host (the host name being a workflow ID) and use them to ensure that all operations on the host are serialized.\n * A child workflow can be used to execute some periodic logic without blowing up the parent history size. When a parent starts a child, the child executes the periodic logic, calling continue-as-new as many times as needed, then completes. From the parent's point of view, it is just a single child workflow invocation.\n\nThe main limitation of a child workflow versus collocating all the application logic in a single workflow is the lack of shared state. Parent and child can communicate only through asynchronous signals. But if there is tight coupling between them, it might be simpler to use a single workflow and just rely on a shared object state.\n\nWe recommend starting from a single workflow implementation if your problem has bounded size in terms of the number of executed activities and processed signals. It is more straightforward than multiple asynchronously communicating workflows.\n\n\n# Workflow Retries\n\nWorkflow code is unaffected by infrastructure-level downtime and failures. But it still can fail due to business-logic-level failures. For example, an activity can fail when its retries are exhausted and the error is not handled by application code, or the workflow code can simply have a bug.\n\nSome workflows require a guarantee that they keep running even in the presence of such failures. To support such use cases, an optional exponential retry policy can be specified when starting a workflow. When it is specified, a workflow failure restarts the workflow from the beginning after the calculated retry interval. Following are the retry policy parameters (see the sketch after this list):\n\n * InitialInterval is a delay before the first retry.\n * BackoffCoefficient. Retry policies are exponential. The coefficient specifies how fast the retry interval grows. A coefficient of 1 means that the retry interval is always equal to the InitialInterval.\n * MaximumInterval specifies the maximum interval between retries. Useful for coefficients of more than 1.\n * MaximumAttempts specifies how many times to attempt to execute a workflow in the presence of failures. If this limit is exceeded, the workflow fails without retry. Not required if ExpirationInterval is specified.\n * ExpirationInterval specifies for how long to attempt executing a workflow in the presence of failures. If this interval is exceeded, the workflow fails without retry. Not required if MaximumAttempts is specified.\n * NonRetryableErrorReasons allows you to specify errors that shouldn't be retried. For example, retrying an invalid arguments error doesn't make sense in some scenarios.
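\n\nA minimal sketch of attaching such a policy when starting a workflow, assuming the Cadence Java client (RetryOptions lives in com.uber.cadence.common; builder method names may vary slightly between client versions):\n\nRetryOptions retryOptions = new RetryOptions.Builder()\n .setInitialInterval(Duration.ofSeconds(10)) // delay before the first retry\n .setBackoffCoefficient(2.0) // double the interval on each attempt\n .setMaximumInterval(Duration.ofMinutes(10)) // cap the growing interval\n .setMaximumAttempts(100) // give up after 100 attempts\n .build();\n\nWorkflowOptions options = new WorkflowOptions.Builder()\n .setTaskList(TASK_LIST)\n .setExecutionStartToCloseTimeout(Duration.ofDays(30))\n .setRetryOptions(retryOptions)\n .build();\n\n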
\n# How does workflow run\n\nYou may wonder how this works behind the scenes. Workflow decision tasks drive the whole workflow execution; they are the internal entities that the client and server use to run your workflows. If this is interesting to you, read this Stack Overflow Q&A.",charsets:{}},{title:"Activities",frontmatter:{layout:"default",title:"Activities",permalink:"/docs/concepts/activities",readingShow:"top"},regularPath:"/docs/03-concepts/02-activities.html",relativePath:"docs/03-concepts/02-activities.md",key:"v-e240404c",path:"/docs/concepts/activities/",headers:[{level:2,title:"Timeouts",slug:"timeouts",normalizedTitle:"timeouts",charIndex:854},{level:2,title:"Retries",slug:"retries",normalizedTitle:"retries",charIndex:1835},{level:2,title:"Long Running Activities",slug:"long-running-activities",normalizedTitle:"long running activities",charIndex:1601},{level:2,title:"Cancellation",slug:"cancellation",normalizedTitle:"cancellation",charIndex:4826},{level:2,title:"Activity Task Routing through Task Lists",slug:"activity-task-routing-through-task-lists",normalizedTitle:"activity task routing through task lists",charIndex:5435},{level:2,title:"Asynchronous Activity Completion",slug:"asynchronous-activity-completion",normalizedTitle:"asynchronous activity completion",charIndex:7240},{level:2,title:"Local Activities",slug:"local-activities",normalizedTitle:"local activities",charIndex:7860}],codeSwitcherOptions:{},headersStr:"Timeouts Retries Long Running Activities Cancellation Activity Task Routing through Task Lists Asynchronous Activity Completion Local Activities",content:"# Activities\n\nFault-oblivious stateful workflow code is the core abstraction of Cadence. But, due to deterministic execution requirements, workflows are not allowed to call any external API directly. Instead they orchestrate the execution of activities. In its simplest form, a Cadence activity is a function or an object method in one of the supported languages. Cadence does not recover activity state in case of failures. Therefore an activity function is allowed to contain any code without restrictions.\n\nActivities are invoked asynchronously through task lists. A task list is essentially a queue used to store an activity task until it is picked up by an available worker. The worker processes an activity by invoking its implementation function. When the function returns, the worker reports the result back to the Cadence service, which in turn notifies the workflow about completion. It is possible to implement an activity fully asynchronously by completing it from a different process.\n\n\n# Timeouts\n\nCadence does not impose any system limit on activity duration. It is up to the application to choose the timeouts for its execution. These are the configurable timeouts:\n\n * ScheduleToStart is the maximum time from a workflow requesting activity execution to a worker starting its execution. The usual reason for this timeout to fire is all workers being down or not being able to keep up with the request rate. We recommend setting this timeout to the maximum time a workflow is willing to wait for an activity execution in the presence of all possible worker outages.\n * StartToClose is the maximum time an activity can execute after it was picked up by a worker.\n * ScheduleToClose is the maximum time from the workflow requesting an activity execution to its completion.\n * Heartbeat is the maximum time between heartbeat requests. See Long Running Activities.\n\nEither ScheduleToClose or both ScheduleToStart and StartToClose timeouts are required.\n\nTimeouts are key to managing activities. For more tips on how to set proper timeouts, read this Stack Overflow Q&A.
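\n\nFor illustration, these timeouts map onto the activity options used when creating an activity stub in workflow code; a minimal sketch assuming the Cadence Java client (SubscriptionActivities is the interface from the workflow example):\n\nActivityOptions activityOptions = new ActivityOptions.Builder()\n .setScheduleToStartTimeout(Duration.ofMinutes(5)) // max queue wait before a worker picks the task up\n .setStartToCloseTimeout(Duration.ofMinutes(10)) // max execution time once picked up\n .setHeartbeatTimeout(Duration.ofSeconds(30)) // max time between heartbeats\n .build();\n\nSubscriptionActivities activities =\n Workflow.newActivityStub(SubscriptionActivities.class, activityOptions);\n\n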
\n# Retries\n\nAs Cadence doesn't recover an activity's state and activities can communicate with any external system, failures are expected. Therefore, Cadence supports automatic activity retries. Any activity when invoked can have an associated retry policy. Here are the retry policy parameters:\n\n * InitialInterval is a delay before the first retry.\n * BackoffCoefficient. Retry policies are exponential. The coefficient specifies how fast the retry interval grows. A coefficient of 1 means that the retry interval is always equal to the InitialInterval.\n * MaximumInterval specifies the maximum interval between retries. Useful for coefficients of more than 1.\n * MaximumAttempts specifies how many times to attempt to execute an activity in the presence of failures. If this limit is exceeded, the error is returned back to the workflow that invoked the activity. Not required if ExpirationInterval is specified.\n * ExpirationInterval specifies for how long to attempt executing an activity in the presence of failures. If this interval is exceeded, the error is returned back to the workflow that invoked the activity. Not required if MaximumAttempts is specified.\n * NonRetryableErrorReasons allows you to specify errors that shouldn't be retried. For example, retrying an invalid arguments error doesn't make sense in some scenarios.\n\nThere are scenarios when not a single activity but rather a whole part of a workflow should be retried on failure. For example, consider a media encoding workflow that downloads a file to a host, processes it, and then uploads the result back to storage. In this workflow, if the host that executes the activities dies, all three activities should be retried on a different host. Such retries should be handled by the workflow code as they are very use case specific.\n\n\n# Long Running Activities\n\nFor long running activities, we recommend that you specify a relatively short heartbeat timeout and constantly heartbeat. This way failures of even very long running activities can be handled in a timely manner. An activity that specifies the heartbeat timeout is expected to call the heartbeat method periodically from its implementation.\n\nA heartbeat request can include an application-specific payload. This is useful to save activity execution progress. If an activity times out due to a missed heartbeat, the next attempt to execute it can access that progress and continue its execution from that point.\n\nLong running activities can be used as a special case of leader election. Cadence timeouts use second resolution, so it is not a solution for realtime applications. But if it is okay to react to a process failure within a few seconds, then a Cadence heartbeat activity is a good fit.\n\nOne common use case for such leader election is monitoring. An activity executes an internal loop that periodically polls some API and checks for some condition. It also heartbeats on every iteration. If the condition is satisfied, the activity completes, which lets its workflow handle it. If the activity worker dies, the activity times out after the heartbeat interval is exceeded and is retried on a different worker. The same pattern works for polling for new files in Amazon S3 buckets or responses in REST or other synchronous APIs.
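\n\nA minimal sketch of a heartbeating long running activity implementation, assuming the Cadence Java client's Activity helper; the download-oriented interface and chunking scheme are hypothetical:\n\npublic class DownloadActivitiesImpl implements DownloadActivities {\n @Override\n public void downloadLargeFile(String url) {\n long totalChunks = 1000; // assumption: the work is chunked\n for (long chunk = 0; chunk < totalChunks; chunk++) {\n downloadChunk(url, chunk);\n // Record progress with the Cadence service. The recorded details can be\n // read back by a retry attempt, and the call fails with a special error\n // if the activity has been cancelled.\n Activity.heartbeat(chunk);\n }\n }\n\n private void downloadChunk(String url, long chunk) {\n // hypothetical chunk download\n }\n}\n\n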
\n# Cancellation\n\nA workflow can request an activity cancellation. Currently the only way for an activity to learn that it was cancelled is through heartbeating. The heartbeat request fails with a special error indicating that the activity was cancelled. Then it is up to the activity implementation to perform all the necessary cleanup and report that it is done with it. It is up to the workflow implementation to decide if it wants to wait for the cancellation confirmation or just proceed without waiting.\n\nAnother common case for heartbeat failure is that the workflow that invoked the activity is in a completed state. In this case the activity is expected to perform cleanup as well.\n\n\n# Activity Task Routing through Task Lists\n\nActivities are dispatched to workers through task lists. Task lists are queues that workers listen on. Task lists are highly dynamic and lightweight. They don't need to be explicitly registered. And it is okay to have one task list per worker process. It is normal to have more than one activity type invoked through a single task list. And it is normal in some cases (like host routing) to invoke the same activity type on multiple task lists.\n\nHere are some use cases for employing multiple activity task lists in a single workflow:\n\n * Flow control. A worker that consumes from a task list asks for an activity task only when it has available capacity. So workers are never overloaded by request spikes. If activity executions are requested faster than workers can process them, they are backlogged in the task list.\n * Throttling. Each activity worker can specify the maximum rate at which it is allowed to process activity tasks on a task list. It does not exceed this limit even if it has spare capacity. There is also support for global rate limiting. This limit works across all workers for the given task list. It is frequently used to limit load on a downstream service that an activity calls into.\n * Deploying a set of activities independently. Think about a service that hosts activities and can be deployed independently from other activities and workflows. To send activity tasks to this service, a separate task list is needed.\n * Workers with different capabilities. For example, workers on GPU boxes vs non-GPU boxes. Having two separate task lists in this case allows workflows to pick which one to send an activity execution request to.\n * Routing an activity to a specific host. For example, in the media encoding case the transform and upload activities have to run on the same host as the download one.\n * Routing an activity to a specific process. For example, some activities load large data sets and cache them in the process. The activities that rely on these data sets should be routed to the same process.\n * Multiple priorities. One task list per priority and a worker pool per priority.\n * Versioning. A new backwards-incompatible implementation of an activity might use a different task list.\n\n\n# Asynchronous Activity Completion\n\nBy default an activity is a function or a method, depending on the client-side library language. As soon as the function returns, the activity completes. But in some cases an activity implementation is asynchronous. For example, it is forwarded to an external system through a message queue, and the reply comes through a different queue.\n\nTo support such use cases, Cadence allows activity implementations that do not complete upon function return. A separate API should be used in this case to complete the activity. This API can be called from any process, even one written in a different programming language than the original activity worker.
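\n\nA minimal illustration of the asynchronous completion pattern with the Cadence Java client; Activity.doNotCompleteOnReturn, Activity.getTaskToken and ActivityCompletionClient are client concepts, while the external queue and token hand-off are hypothetical wiring:\n\n// Inside the activity implementation: capture the task token and return without completing\npublic String processViaExternalSystem(String request) {\n byte[] taskToken = Activity.getTaskToken();\n externalQueue.send(request, taskToken); // hypothetical hand-off to an external system\n Activity.doNotCompleteOnReturn();\n return null; // return value is ignored because the activity was not completed\n}\n\n// In a different process, once the external system replies:\nActivityCompletionClient completionClient = workflowClient.newActivityCompletionClient();\ncompletionClient.complete(taskToken, result);\n\n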
\n# Local Activities\n\nSome activities are very short lived and do not need the queueing semantics, flow control, rate limiting and routing capabilities. For these, Cadence supports the so-called local activity feature. Local activities are executed in the same worker process as the workflow that invoked them.\n\nWhat you trade off by using local activities:\n\n * Less debuggability: there are no ActivityTaskScheduled and ActivityTaskStarted events, so you are not able to see the input in the history.\n * No task list dispatching: the worker is always the same as the workflow decision worker. You don't have a choice of using activity workers.\n * More possibility of duplicated execution. Though a regular activity could also execute multiple times when using a retry policy, a local activity has a higher chance of this occurring, because a local activity result is not recorded into the history until DecisionTaskCompleted. Also, when executing multiple local activities in a row, the SDKs (Java and Golang) optimize recording such that results are only recorded at intervals (before the current decision task times out).\n * No long-running capability with heartbeat recording.\n * No task list global rate limiting.\n\nConsider using local activities for functions that:\n\n * are idempotent\n * take no longer than a few seconds\n * do not require global rate limiting\n * do not require routing to a specific worker or pool of workers\n * can be implemented in the same binary as the workflow that invokes them\n * are non business critical, so that losing some debuggability is okay (e.g. logging, loading config)\n * are needed for optimization. For example, if there are many timers firing at the same time to invoke activities, it could overload Cadence's server. Using local activities can help save server capacity.\n\nThe main benefit of local activities is that they are much more efficient in utilizing Cadence service resources and have much lower latency overhead compared to the usual activity invocation.
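\n\nA minimal sketch of invoking a local activity from workflow code, assuming the Cadence Java client's Workflow.newLocalActivityStub; the LoggingActivities interface is hypothetical:\n\nLocalActivityOptions localOptions = new LocalActivityOptions.Builder()\n .setScheduleToCloseTimeout(Duration.ofSeconds(5)) // local activities should be short lived\n .build();\n\n// Executed in the same worker process as the workflow; no task list dispatch involved\nLoggingActivities logging =\n Workflow.newLocalActivityStub(LoggingActivities.class, localOptions);\nlogging.logEvent(\"billing period charged\");\n\n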
",charsets:{}},{title:"Event handling",frontmatter:{layout:"default",title:"Event handling",permalink:"/docs/concepts/events",readingShow:"top"},regularPath:"/docs/03-concepts/03-events.html",relativePath:"docs/03-concepts/03-events.md",key:"v-2d8e6278",path:"/docs/concepts/events/",headers:[{level:2,title:"Event Aggregation and Correlation",slug:"event-aggregation-and-correlation",normalizedTitle:"event aggregation and correlation",charIndex:248},{level:2,title:"Human Tasks",slug:"human-tasks",normalizedTitle:"human tasks",charIndex:1865},{level:2,title:"Process Execution Alteration",slug:"process-execution-alteration",normalizedTitle:"process execution alteration",charIndex:2447},{level:2,title:"Synchronization",slug:"synchronization",normalizedTitle:"synchronization",charIndex:2966}],codeSwitcherOptions:{},headersStr:"Event Aggregation and Correlation Human Tasks Process Execution Alteration Synchronization",content:"# Event handling\n\nFault-oblivious stateful workflows can be signalled about an external event. A signal is always point to point, destined to a specific workflow instance. Signals are always processed in the order in which they are received.\n\nThere are multiple scenarios for which signals are useful.\n\n\n# Event Aggregation and Correlation\n\nCadence is not a replacement for generic stream processing engines like Apache Flink or Apache Spark. But in certain scenarios it is a better fit. For example, when all events that should be aggregated and correlated are always applied to some business entity with a clear ID, and then, when a certain condition is met, actions should be executed.\n\nThe main limitation is that a single Cadence workflow has a pretty limited throughput, while the number of workflows is practically unlimited. So if you need to aggregate events per customer, and your application has 100 million customers and each customer doesn't generate more than 20 events per second, then Cadence would work fine. But if you want to aggregate all events for US customers then the rate of these events would be beyond the single workflow capacity.\n\nFor example, an IoT device generates events and a certain sequence of events indicates that the device should be reprovisioned. A workflow instance per device would be created and each instance would manage the state machine of the device and execute reprovision activities when necessary.\n\nAnother use case is a customer loyalty program. Every time a customer makes a purchase, an event is generated into Apache Kafka for downstream systems to process. A loyalty service Kafka consumer receives the event and signals the customer workflow about the purchase using the Cadence signalWorkflowExecution API. The workflow accumulates the count of the purchases. If a specified threshold is achieved, the workflow executes an activity that notifies some external service that the customer has reached the next level of the loyalty program. The workflow also executes activities to periodically message the customer about their current status.
\n\n\n# Human Tasks\n\nA lot of business processes involve human participants. The standard Cadence pattern for implementing an external interaction is to execute an activity that creates a human task in an external system. It can be an email with a form, or a record in some external database, or a mobile app notification. When a user changes the status of the task, a signal is sent to the corresponding workflow. For example, when the form is submitted, or the mobile app notification is acknowledged. Some tasks have multiple possible actions like claim, return, complete, reject. So multiple signals can be sent in relation to a single task.\n\n\n# Process Execution Alteration\n\nSome business processes should change their behavior if some external event has happened. For example, while executing an order shipment workflow, any change in item quantity could be delivered in the form of a signal.\n\nAnother example is a service deployment workflow. While rolling out a new software version to a Kubernetes cluster some problem was identified. A signal can be used to ask the workflow to pause while the problem is investigated. Then either a continue or a rollback signal can be used to execute the appropriate action.\n\n\n# Synchronization\n\nCadence workflows are strongly consistent so they can be used as a synchronization point for executing actions. For example, there is a requirement that all messages for a single user are processed sequentially but the underlying messaging infrastructure can deliver them in parallel. The Cadence solution would be to have a workflow per user and signal it when an event is received. Then the workflow would buffer all signals in an internal data structure and then call an activity for every signal received. See the following Stack Overflow answer for an example.
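\n\nA minimal sketch of that buffering pattern in workflow code, assuming the Cadence Java client; the UserMessagesWorkflow interface and MessageActivities activity are hypothetical:\n\npublic class UserMessagesWorkflowImpl implements UserMessagesWorkflow {\n // Signals may be delivered in parallel; this queue serializes them inside the workflow\n private final Queue<String> buffer = new ArrayDeque<>();\n\n private final MessageActivities activities =\n Workflow.newActivityStub(MessageActivities.class);\n\n @Override\n public void processMessages() {\n while (true) { // a production version would use continue-as-new to bound history size\n // Block durably until at least one signal has been buffered\n Workflow.await(() -> !buffer.isEmpty());\n // Process strictly one message at a time, preserving arrival order\n activities.handleMessage(buffer.poll());\n }\n }\n\n @Override\n public void newMessage(String message) { // declared as a @SignalMethod on the interface\n buffer.add(message);\n }\n}\n\n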
",charsets:{}},{title:"Synchronous query",frontmatter:{layout:"default",title:"Synchronous query",permalink:"/docs/concepts/queries",readingShow:"top"},regularPath:"/docs/03-concepts/04-queries.html",relativePath:"docs/03-concepts/04-queries.md",key:"v-7b43cf3c",path:"/docs/concepts/queries/",headers:[{level:2,title:"Stack Trace Query",slug:"stack-trace-query",normalizedTitle:"stack trace query",charIndex:1119}],codeSwitcherOptions:{},headersStr:"Stack Trace Query",content:'# Synchronous query\n\nWorkflow code is stateful, with the Cadence framework preserving it over various software and hardware failures. The state is constantly mutated during workflow execution. To expose this internal state to the external world, Cadence provides a synchronous query feature. From the implementer point of view, the query is exposed as a synchronous callback that is invoked by external entities. Multiple such callbacks can be provided per workflow type, exposing different information to different external systems.\n\nTo execute a query, an external client calls a synchronous Cadence API providing domain, workflowID, query name and optional query arguments.\n\nQuery callbacks must be read-only, not mutating the workflow state in any way. The other limitation is that the query callback cannot contain any blocking code. Both of the above limitations rule out the ability to invoke activities from the query handlers.\n\nThe Cadence team is currently working on implementing an update feature that would be similar to query in the way it is invoked, but would support workflow state mutation and activity invocations. From the user\'s point of view, an update is similar to signal + strongly consistent query, but implemented in a much less expensive way in Cadence.
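\n\nFor illustration, a client can invoke these callbacks through a stub bound to an existing workflow execution; a minimal sketch assuming the Cadence Java client and reusing the SubscriptionWorkflow interface from the workflow example:\n\n// Bind a typed stub to an already running (or closed) workflow by its workflow ID\nSubscriptionWorkflow workflow =\n workflowClient.newWorkflowStub(SubscriptionWorkflow.class, "customer-42");\n\n// Each call below executes a synchronous query against the workflow state\nString customerId = workflow.queryCustomerId();\nint chargeAmount = workflow.queryBillingPeriodChargeAmount();\n\n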
\n\n\n# Stack Trace Query\n\nThe Cadence client libraries expose some predefined queries out of the box. Currently the only supported built-in query is __stack_trace. This query returns the stacks of all workflow-owned threads. This is a great way to troubleshoot any workflow in production.\n\nExample\n\n$cadence --do samples-domain wf query -w <workflowID> -qt __stack_trace\n"coroutine 1 [blocked on selector-1.Select]:\\nmain.sampleSignalCounterWorkflow(0x1a99ae8, 0xc00009d700, 0x0, 0x0, 0x0)\\n\\t/Users/qlong/indeed/cadence-samples/cmd/samples/recipes/signalcounter/signal_counter_workflow.go:38 +0x1be\\nreflect.Value.call(0x1852ac0, 0x19cb608, 0x13, 0x1979180, 0x4, 0xc00045aa80, 0x2, 0x2, 0x2, 0x18, ...)\\n\\t/usr/local/Cellar/go/1.16.3/libexec/src/reflect/value.go:476 +0x8e7\\nreflect.Value.Call(0x1852ac0, 0x19cb608, 0x13, 0xc00045aa80, 0x2, 0x2, 0x1, 0x2, 0xc00045a720)\\n\\t/usr/local/Cellar/go/1.16.3/libexec/src/reflect/value.go:337 +0xb9\\ngo.uber.org/cadence/internal.(*workflowEnvironmentInterceptor).ExecuteWorkflow(0xc00045a720, 0x1a99ae8, 0xc00009d700, 0xc0001ca820, 0x20, 0xc00007fad0, 0x1, 0x1, 0x1, 0x1, ...)\\n\\t/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/workflow.go:372 +0x2cb\\ngo.uber.org/cadence/internal.(*workflowExecutor).Execute(0xc000098d80, 0x1a99ae8, 0xc00009d700, 0xc0001b127e, 0x2, 0x2, 0xc00044cb01, 0xc000070101, 0xc000073738, 0x1729f25, ...)\\n\\t/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_worker.go:699 +0x28d\\ngo.uber.org/cadence/internal.(*syncWorkflowDefinition).Execute.func1(0x1a99ce0, 0xc00045a9f0)\\n\\t/Users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_workflow.go:466 +0x106"\n',charsets:{}},{
from user\'s point of view, update is similar to signal + strong consistent query, but implemented in a much less expensive way in cadence.\n\n\n# stack trace query\n\nthe cadence client libraries expose some predefined out of the box. currently the only supported built-in is stack_trace. this returns stacks of all owned threads. this is a great way to troubleshoot any in production.\n\nexample\n\n$cadence --do samples-domain wf query -w -qt __stack_trace\n"coroutine 1 [blocked on selector-1.select]:\\nmain.samplesignalcounterworkflow(0x1a99ae8, 0xc00009d700, 0x0, 0x0, 0x0)\\n\\t/users/qlong/indeed/cadence-samples/cmd/samples/recipes/signalcounter/signal_counter_workflow.go:38 +0x1be\\nreflect.value.call(0x1852ac0, 0x19cb608, 0x13, 0x1979180, 0x4, 0xc00045aa80, 0x2, 0x2, 0x2, 0x18, ...)\\n\\t/usr/local/cellar/go/1.16.3/libexec/src/reflect/value.go:476 +0x8e7\\nreflect.value.call(0x1852ac0, 0x19cb608, 0x13, 0xc00045aa80, 0x2, 0x2, 0x1, 0x2, 0xc00045a720)\\n\\t/usr/local/cellar/go/1.16.3/libexec/src/reflect/value.go:337 +0xb9\\ngo.uber.org/cadence/internal.(*workflowenvironmentinterceptor).executeworkflow(0xc00045a720, 0x1a99ae8, 0xc00009d700, 0xc0001ca820, 0x20, 0xc00007fad0, 0x1, 0x1, 0x1, 0x1, ...)\\n\\t/users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/workflow.go:372 +0x2cb\\ngo.uber.org/cadence/internal.(*workflowexecutor).execute(0xc000098d80, 0x1a99ae8, 0xc00009d700, 0xc0001b127e, 0x2, 0x2, 0xc00044cb01, 0xc000070101, 0xc000073738, 0x1729f25, ...)\\n\\t/users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_worker.go:699 +0x28d\\ngo.uber.org/cadence/internal.(*syncworkflowdefinition).execute.func1(0x1a99ce0, 0xc00045a9f0)\\n\\t/users/qlong/go/pkg/mod/go.uber.org/cadence@v0.17.1-0.20210708064625-c4a7e032cc13/internal/internal_workflow.go:466 +0x106"\n',charsets:{}},{title:"Archival",frontmatter:{layout:"default",title:"Archival",permalink:"/docs/concepts/archival",readingShow:"top"},regularPath:"/docs/03-concepts/07-archival.html",relativePath:"docs/03-concepts/07-archival.md",key:"v-eec246bc",path:"/docs/concepts/archival/",headers:[{level:2,title:"Concepts",slug:"concepts",normalizedTitle:"concepts",charIndex:1029},{level:2,title:"Configuring Archival",slug:"configuring-archival",normalizedTitle:"configuring archival",charIndex:1530},{level:3,title:"Cluster Level Archival Config",slug:"cluster-level-archival-config",normalizedTitle:"cluster level archival config",charIndex:1720},{level:3,title:"Domain Level Archival Config",slug:"domain-level-archival-config",normalizedTitle:"domain level archival config",charIndex:2401},{level:2,title:"Running Locally",slug:"running-locally",normalizedTitle:"running locally",charIndex:2837},{level:2,title:"Running in Production",slug:"running-in-production",normalizedTitle:"running in production",charIndex:3996},{level:2,title:"FAQ",slug:"faq",normalizedTitle:"faq",charIndex:970},{level:3,title:"When does archival happen?",slug:"when-does-archival-happen",normalizedTitle:"when does archival happen?",charIndex:4755},{level:3,title:"What's the query syntax for visibility archival?",slug:"what-s-the-query-syntax-for-visibility-archival",normalizedTitle:"what's the query syntax for visibility archival?",charIndex:5315},{level:3,title:"How does archival interact with global domains?",slug:"how-does-archival-interact-with-global-domains",normalizedTitle:"how does archival interact with global domains?",charIndex:5832},{level:3,title:"Can I specify multiple archival 
URIs?",slug:"can-i-specify-multiple-archival-uris",normalizedTitle:"can i specify multiple archival uris?",charIndex:6409},{level:3,title:"How does archival work with PII?",slug:"how-does-archival-work-with-pii",normalizedTitle:"how does archival work with pii?",charIndex:6591},{level:2,title:"Planned Future Work",slug:"planned-future-work",normalizedTitle:"planned future work",charIndex:6895}],codeSwitcherOptions:{},headersStr:"Concepts Configuring Archival Cluster Level Archival Config Domain Level Archival Config Running Locally Running in Production FAQ When does archival happen? What's the query syntax for visibility archival? How does archival interact with global domains? Can I specify multiple archival URIs? How does archival work with PII? Planned Future Work",content:'# Archival\n\nis a feature that automatically moves histories (history archival) and visibility records (visibility archival) from persistence to a secondary data store after the retention period, thus allowing users to keep workflow history and visibility records as long as necessary without overwhelming Cadence primary data store. There are two reasons you may consider turning on archival for your domain:\n\n 1. Compliance: For legal reasons histories may need to be stored for a long period of time.\n 2. Debugging: Old histories can still be accessed for debugging.\n\nThe current implementation of the feature has two limitations:\n\n 1. RunID Required: In order to retrieve an archived workflow history, both workflowID and runID are required.\n 2. Best Effort: It is possible that a history or visibility record is deleted from Cadence primary persistence without being archived first. These cases are rare but are possible with the current state of . Please check the FAQ section for how to get notified when this happens.\n\n\n# Concepts\n\n * Archiver: Archiver is the component that is responsible for archiving and retrieving histories and visibility records. Its interface is generic and supports different kinds of locations: local file system, S3, Kafka, etc. Check this README if you would like to add a new archiver implementation for your data store.\n * URI: An URI is used to specify the location. Based on the scheme part of an URI, the corresponding archiver will be selected by the system to perform the operation.\n\n\n# Configuring Archival\n\nis controlled by both level config and cluster level config. History and visibility archival have separate domain/cluster configs, but they share the same purpose.\n\n\n# Cluster Level Archival Config\n\nA Cadence cluster can be in one of three states:\n\n * Disabled: No will occur and the archivers will be not initialized on service startup.\n * Paused: This state is not yet implemented. Currently setting cluster to paused is the same as setting it to disabled.\n * Enabled: will occur.\n\nEnabling the cluster for simply means workflow histories will be archived. There is another config which controls whether archived histories or visibility records can be accessed. Both configs have defaults defined in the static yaml and can be overwritten via dynamic config. Note, however, dynamic config will take effect only when is enabled in static yaml.\n\n\n# Domain Level Archival Config\n\nA includes two pieces of related config:\n\n * Status: Either enabled or disabled. If a is in the disabled state, no will occur for that .\n * URI: The scheme and location where histories or visibility records will be archived to. 
When a domain enables archival for the first time, the URI is set and can never be changed. If a URI is not specified when first enabling a domain for archival, a default URI from the static config will be used.\n\n\n# Running Locally\n\nYou can follow the steps below to run and test the feature locally:\n\n 1. ./cadence-server start\n 2. ./cadence --do samples-domain domain register --gd false --history_archival_status enabled --visibility_archival_status enabled --retention 0\n 3. Run the helloworld cadence-sample by following the README\n 4. Copy the workflowID of the completed workflow from the log output\n 5. Retrieve the runID through the archived visibility record ./cadence --do samples-domain wf listarchived -q \'WorkflowID = ""\'\n 6. Retrieve the archived history ./cadence --do samples-domain wf show --wid --rid \n\nIn step 2, we registered a new domain and enabled both the history and visibility archival features for that domain. Since we didn\'t provide a URI when registering the new domain, the default URI specified in config/development.yaml is used. The default URI is file:///tmp/cadence_archival/development for history archival and "file:///tmp/cadence_vis_archival/development" for visibility archival. You can find the archived history under the /tmp/cadence_archival/development directory and the archived visibility record under the /tmp/cadence_vis_archival/development directory.\n\n\n# Running in Production\n\nCadence supports uploading workflow histories to Google Cloud and Amazon S3 for archival in production. Check the documentation in the GCloud archival component and the S3 archival component.\n\nBelow is an example of an Amazon S3 archival configuration:\n\narchival:\n history:\n status: "enabled"\n enableRead: true\n provider:\n s3store:\n region: "us-east-2"\n visibility:\n status: "enabled"\n enableRead: true\n provider:\n s3store:\n region: "us-east-2"\ndomainDefaults:\n archival:\n history:\n status: "enabled"\n URI: "s3://put-name-of-your-s3-bucket-here"\n visibility:\n status: "enabled"\n URI: "s3://put-name-of-your-s3-bucket-here" # most probably the same as the previous URI\n\n\n\n# FAQ\n\n\n# When does archival happen?\n\nIn theory, we would like both history and visibility archival to happen after the workflow closes and the retention period passes. However, due to some limitations in the implementation, only history archival happens after the retention period, while visibility archival happens immediately after the workflow closes. Please treat this as an implementation detail inside Cadence and do not rely on this fact. Archived data should only be checked after the retention period, and we may change the way we do visibility archival in the future.\n\n\n# What\'s the query syntax for visibility archival?\n\nThe listArchived CLI command and API accept a SQL-like query for retrieving archived visibility records, similar to how the listWorkflow command works. Unfortunately, since different Archiver implementations have very different capabilities, there\'s currently no universal query syntax that works for all Archiver implementations. Please check the README (for example, S3 and GCP) of the Archiver used by your domain for the supported query syntax and limitations.\n\n\n# How does archival interact with global domains?\n\nIf you have a global domain, when archival occurs it will first run on the active cluster and some time later it will run on the standby cluster when replication happens. For history archival, Cadence will check if the upload operation has been performed and skip duplicate efforts. 
For visibility archival, there\'s no such check and duplicated visibility records will be uploaded. Depending on the Archiver implementation, those duplicated uploads may consume more space in the underlying storage and duplicated entries may be returned.\n\n\n# Can I specify multiple archival URIs?\n\nEach domain can only have one URI for history archival and one URI for visibility archival. Different domains, however, can have different URIs (with different schemes).\n\n\n# How does archival work with PII?\n\nNo Cadence workflow should ever operate on clear-text PII. Cadence can be thought of as a database, and just as one would not store PII in a database, PII should not be stored in Cadence. This is even more important when archival is enabled because these histories can be kept forever.\n\n\n# Planned Future Work\n\n * Support retrieving archived workflow histories without providing the runID.\n * Provide a guarantee that no history or visibility record is deleted from primary persistence before being archived.\n * Implement the Paused state. In this state no archival will occur, but histories or visibility records will also not be deleted from persistence. Once enabled again from the paused state, all skipped archivals will occur.',normalizedContent:'# archival\n\narchival is a feature that automatically moves histories (history archival) and visibility records (visibility archival) from persistence to a secondary data store after the retention period, thus allowing users to keep workflow history and visibility records as long as necessary without overwhelming the cadence primary data store. there are two reasons you may consider turning on archival for your domain:\n\n 1. compliance: for legal reasons histories may need to be stored for a long period of time.\n 2. debugging: old histories can still be accessed for debugging.\n\nthe current implementation of the feature has two limitations:\n\n 1. runid required: in order to retrieve an archived workflow history, both workflowid and runid are required.\n 2. best effort: it is possible that a history or visibility record is deleted from cadence primary persistence without being archived first. these cases are rare but are possible with the current state of archival. please check the faq section for how to get notified when this happens.\n\n\n# concepts\n\n * archiver: archiver is the component that is responsible for archiving and retrieving histories and visibility records. its interface is generic and supports different kinds of locations: local file system, s3, kafka, etc. check this readme if you would like to add a new archiver implementation for your data store.\n * uri: a uri is used to specify the archival location. based on the scheme part of a uri, the corresponding archiver will be selected by the system to perform the operation.\n\n\n# configuring archival\n\narchival is controlled by both domain level config and cluster level config. history and visibility archival have separate domain/cluster configs, but they share the same purpose.\n\n\n# cluster level archival config\n\na cadence cluster can be in one of three states:\n\n * disabled: no archival will occur and the archivers will not be initialized on service startup.\n * paused: this state is not yet implemented. currently setting the cluster to paused is the same as setting it to disabled.\n * enabled: archival will occur.\n\nenabling the cluster for archival simply means workflow histories will be archived. there is another config which controls whether archived histories or visibility records can be accessed. both configs have defaults defined in the static yaml and can be overwritten via dynamic config. 
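In Go, step 6 above (retrieving a history given both workflowID and runID) can be sketched with the client's GetWorkflowHistory iterator; whether archived histories are readable this way depends on the enableRead configs described earlier, and cadenceClient construction is omitted:

package sample

import (
	"context"
	"fmt"

	s "go.uber.org/cadence/.gen/go/shared"
	"go.uber.org/cadence/client"
)

// printHistory iterates over a workflow history. Once archival has run,
// both workflowID and runID must be supplied (see the RunID Required
// limitation above).
func printHistory(ctx context.Context, cadenceClient client.Client, workflowID, runID string) error {
	iter := cadenceClient.GetWorkflowHistory(ctx, workflowID, runID, false /* isLongPoll */, s.HistoryEventFilterTypeAllEvent)
	for iter.HasNext() {
		event, err := iter.Next()
		if err != nil {
			return err
		}
		fmt.Println(event.String())
	}
	return nil
}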
note, however, dynamic config will take effect only when is enabled in static yaml.\n\n\n# domain level archival config\n\na includes two pieces of related config:\n\n * status: either enabled or disabled. if a is in the disabled state, no will occur for that .\n * uri: the scheme and location where histories or visibility records will be archived to. when a enables for the first time uri is set and can never be changed. if uri is not specified when first enabling a for , a default uri from the static config will be used.\n\n\n# running locally\n\nyou can follow the steps below to run and test the feature locally:\n\n 1. ./cadence-server start\n 2. ./cadence --do samples-domain domain register --gd false --history_archival_status enabled --visibility_archival_status enabled --retention 0\n 3. run the helloworld cadence-sample by following the readme\n 4. copy the workflowid the completed from log output\n 5. retrieve runid through archived visibility record ./cadence --do samples-domain wf listarchived -q \'workflowid = ""\'\n 6. retrieve archived history ./cadence --do samples-domain wf show --wid --rid \n\nin step 2, we registered a new and enabled both history and visibility feature for that . since we didn\'t provide an uri when registering the new , the default uri specified in config/development.yaml is used. the default uri is file:///tmp/cadence_archival/development for history archival and "file:///tmp/cadence_vis_archival/development" for visibility archival. you can find the archived history under the /tmp/cadence_archival/development directory and archived visibility record under the /tmp/cadence_vis_archival/development directory.\n\n\n# running in production\n\ncadence supports uploading workflow histories to google cloud and amazon s3 for archival in production. check documentation in gcloud archival component and s3 archival component.\n\nbelow is an example of amazon s3 archival configuration:\n\narchival:\n history:\n status: "enabled"\n enableread: true\n provider:\n s3store:\n region: "us-east-2"\n visibility:\n status: "enabled"\n enableread: true\n provider:\n s3store:\n region: "us-east-2"\ndomaindefaults:\n archival:\n history:\n status: "enabled"\n uri: "s3://put-name-of-your-s3-bucket-here"\n visibility:\n status: "enabled"\n uri: "s3://put-name-of-your-s3-bucket-here" # most proably the same as the previous uri\n\n\n\n# faq\n\n\n# when does archival happen?\n\nin theory, we would like both history and visibility archival happen after workflow closes and retention period passes. however, due to some limitations in the implementation, only history archival happens after the retention period, while visibility archival happens immediately after workflow closes. please treat this as an implementation details inside cadence and do not relay on this fact. archived data should only be checked after the retention period, and we may change the way we do visibility archival in the future.\n\n\n# what\'s the query syntax for visibility archival?\n\nthe listarchived cli command and api accept a sql-like query for retrieving archived visibility records, similar to how the listworkflow command works. unfortunately, since different archiver implementations have very different capability, there\'s currently no universal query syntax that works for all archiver implementations. 
please check the readme (for example, s3 and gcp) of the archiver used by your domain for the supported query syntax and limitations.\n\n\n# how does archival interact with global domains?\n\nif you have a global domain, when archival occurs it will first run on the active cluster and some time later it will run on the standby cluster when replication happens. for history archival, cadence will check if the upload operation has been performed and skip duplicate efforts. for visibility archival, there\'s no such check and duplicated visibility records will be uploaded. depending on the archiver implementation, those duplicated uploads may consume more space in the underlying storage and duplicated entries may be returned.\n\n\n# can i specify multiple archival uris?\n\neach domain can only have one uri for history archival and one uri for visibility archival. different domains, however, can have different uris (with different schemes).\n\n\n# how does archival work with pii?\n\nno cadence workflow should ever operate on clear-text pii. cadence can be thought of as a database, and just as one would not store pii in a database, pii should not be stored in cadence. this is even more important when archival is enabled because these histories can be kept forever.\n\n\n# planned future work\n\n * support retrieving archived workflow histories without providing the runid.\n * provide a guarantee that no history or visibility record is deleted from primary persistence before being archived.\n * implement the paused state. in this state no archival will occur, but histories or visibility records will also not be deleted from persistence. once enabled again from the paused state, all skipped archivals will occur.',charsets:{}},{title:"Deployment topology",frontmatter:{layout:"default",title:"Deployment topology",permalink:"/docs/concepts/topology",readingShow:"top"},regularPath:"/docs/03-concepts/05-topology.html",relativePath:"docs/03-concepts/05-topology.md",key:"v-1c104a48",path:"/docs/concepts/topology/",headers:[{level:2,title:"Overview",slug:"overview",normalizedTitle:"overview",charIndex:26},{level:2,title:"Cadence Service",slug:"cadence-service",normalizedTitle:"cadence service",charIndex:463},{level:2,title:"Workflow Worker",slug:"workflow-worker",normalizedTitle:"workflow worker",charIndex:2374},{level:2,title:"Activity Worker",slug:"activity-worker",normalizedTitle:"activity worker",charIndex:3445},{level:2,title:"External Clients",slug:"external-clients",normalizedTitle:"external clients",charIndex:4137}],codeSwitcherOptions:{},headersStr:"Overview Cadence Service Workflow Worker Activity Worker External Clients",content:"# Deployment topology\n\n\n# Overview\n\nCadence is a highly scalable fault-oblivious stateful code platform. The fault-oblivious code is a next level of abstraction over commonly used techniques to achieve fault tolerance and durability.\n\nA common Cadence-based application consists of a Cadence service, workflow and activity workers, and external clients. Note that both types of workers as well as external clients are roles and can be collocated in a single application process if necessary.\n\n\n# Cadence Service\n\n\n\nAt the core of Cadence is a highly scalable multitenant service. The service exposes all of its functionality through a strongly typed gRPC API. A Cadence cluster includes multiple services, each of which may run on multiple nodes for scalability and reliability:\n\n * Front End: a stateless service used to handle incoming requests from Workers. 
It is expected that an external load balancing mechanism is used to distribute load between Front End instances.\n * History Service: where the core logic of orchestrating workflow steps and activities is implemented\n * Matching Service: matches workflow/activity tasks that need to be executed to workflow/activity workers that are able to execute them. Matching is assigned tasks for execution by the history service\n * Internal Worker Service: implements Cadence workflows and activities for internal requirements such as archiving\n * Workers: are effectively the client apps for Cadence. This is where user-created workflow and activity logic is executed\n\nInternally it depends on a persistent store. Currently, Apache Cassandra, MySQL, PostgreSQL, CockroachDB (PostgreSQL compatible) and TiDB (MySQL compatible) stores are supported out of the box. For listing workflows using complex predicates, an ElasticSearch or OpenSearch cluster can be used.\n\nThe Cadence service is responsible for keeping workflow state and associated durable timers. It maintains internal queues (called task lists) which are used to dispatch tasks to external workers.\n\nThe Cadence service is multitenant. Therefore it is expected that multiple pools of workers implementing different use cases connect to the same service instance. For example, at Uber a single service is used by more than a hundred applications. At the same time some external customers deploy an instance of the Cadence service per application. For local development, a local Cadence service instance configured through docker-compose is used.\n\n\n\n\n# Workflow Worker\n\nCadence reuses terminology from the workflow automation domain, so fault-oblivious stateful code is called a workflow.\n\nThe Cadence service does not execute workflow code directly. The workflow code is hosted by an external (from the service point of view) process. These processes receive decision tasks that contain events the workflow is expected to handle from the Cadence service, deliver them to the workflow code, and communicate workflow decisions back to the service.\n\nAs workflow code is external to the service, it can be implemented in any language that can talk to the service Thrift API. Currently Java and Go clients are production ready, while Python and C# clients are under development. Let us know if you are interested in contributing a client in your preferred language.\n\nThe Cadence service API doesn't impose any specific definition language. 
Usually, are started by outside entities like UIs, microservices or CLIs.\n\nThese entities can also:\n\n * notify about asynchronous external in the form of\n * synchronously state\n * synchronously wait for a completion\n * cancel, terminate, restart, and reset\n * search for specific using list API",normalizedContent:"# deployment topology\n\n\n# overview\n\ncadence is a highly scalable fault-oblivious stateful code platform. the fault-oblivious code is a next level of abstraction over commonly used techniques to achieve fault tolerance and durability.\n\na common cadence-based application consists of a cadence service, and , and external clients. note that both types of as well as external clients are roles and can be collocated in a single application process if necessary.\n\n\n# cadence service\n\n\n\nat the core of cadence is a highly scalable multitentant service. the service exposes all of its functionality through a strongly typed grpc api. a cadence cluster include multiple services, each of which may run on multiple nodes for scalability and reliablity:\n\n * front end: which is a stateless service used to handle incoming requests from workers. it is expected that an external load balancing mechanism is used to distribute load between front end instances.\n * history service: where the core logic of orchestrating workflow steps and activities is implemented\n * matching service: matches workflow/activity tasks that need to be executed to workflow/activity workers that are able to execute them. matching is assigned task for execution by the history service\n * internal worker service: implements cadence workflows and activities for internal requirements such as archiving\n * workers: are effectively the client apps for cadence. this is where user created workflow and activity logic is executed\n\ninternally it depends on a persistent store. currently, apache cassandra, mysql, postgresql, cockroachdb (postgresql compatible) and tidb (mysql compatible) stores are supported out of the box. for listing using complex predicates, elasticsearch and opensearch cluster can be used.\n\ncadence service is responsible for keeping state and associated durable timers. it maintains internal queues (called ) which are used to dispatch to external .\n\ncadence service is multitentant. therefore it is expected that multiple pools of implementing different use cases connect to the same service instance. for example, at uber a single service is used by more than a hundred applications. at the same time some external customers deploy an instance of cadence service per application. for local development, a local cadence service instance configured through docker-compose is used.\n\n\n\n\n# workflow worker\n\ncadence reuses terminology from workflow automation . so fault-oblivious stateful code is called .\n\nthe cadence service does not execute code directly. the code is hosted by an external (from the service point of view) process. these processes receive that contain that the is expected to handle from the cadence service, delivers them to the code, and communicates back to the service.\n\nas code is external to the service, it can be implemented in any language that can talk service thrift api. currently java and go clients are production ready. while python and c# clients are under development. let us know if you are interested in contributing a client in your preferred language.\n\nthe cadence service api doesn't impose any specific definition language. 
so a specific can be implemented to execute practically any existing specification. the model the cadence team chose to support out of the box is based on the idea of durable function. durable functions are as close as possible to application business logic with minimal plumbing required.\n\n\n# activity worker\n\nfault-oblivious code is immune to infrastructure failures. but it has to communicate with the imperfect external world where failures are common. all communication to the external world is done through . are pieces of code that can perform any application-specific action like calling a service, updating a database record, or downloading a file from amazon s3. cadence are very feature-rich compared to queuing systems. example features are routing to specific processes, infinite retries, heartbeats, and unlimited execution time.\n\nare hosted by processes that receive from the cadence service, invoke correspondent implementations and report back completion statuses.\n\n\n# external clients\n\nand host and code. but to create a instance (an execution in cadence terminology) the startworkflowexecution cadence service api call should be used. usually, are started by outside entities like uis, microservices or clis.\n\nthese entities can also:\n\n * notify about asynchronous external in the form of\n * synchronously state\n * synchronously wait for a completion\n * cancel, terminate, restart, and reset\n * search for specific using list api",charsets:{}},{title:"Cross DC replication",frontmatter:{layout:"default",title:"Cross DC replication",permalink:"/docs/concepts/cross-dc-replication",readingShow:"top"},regularPath:"/docs/03-concepts/08-cross-dc-replication.html",relativePath:"docs/03-concepts/08-cross-dc-replication.md",key:"v-5d616cea",path:"/docs/concepts/cross-dc-replication/",headers:[{level:2,title:"Global Domains Architecture",slug:"global-domains-architecture",normalizedTitle:"global domains architecture",charIndex:300},{level:3,title:"Conflict Resolution",slug:"conflict-resolution",normalizedTitle:"conflict resolution",charIndex:2309},{level:2,title:"Global Domain Concepts, Configuration and Operation",slug:"global-domain-concepts-configuration-and-operation",normalizedTitle:"global domain concepts, configuration and operation",charIndex:3148},{level:3,title:"Concepts",slug:"concepts",normalizedTitle:"concepts",charIndex:3162},{level:3,title:"Operate by CLI",slug:"operate-by-cli",normalizedTitle:"operate by cli",charIndex:4221},{level:2,title:"Running Locally",slug:"running-locally",normalizedTitle:"running locally",charIndex:5743},{level:2,title:"Running in Production",slug:"running-in-production",normalizedTitle:"running in production",charIndex:5865}],codeSwitcherOptions:{},headersStr:"Global Domains Architecture Conflict Resolution Global Domain Concepts, Configuration and Operation Concepts Operate by CLI Running Locally Running in Production",content:'# Cross-DC replication\n\nThe Cadence Global feature provides clients with the capability to continue their from another cluster in the event of a datacenter failover. Although you can configure a Global to be replicated to any number of clusters, it is only considered active in a single cluster.\n\n\n# Global Domains Architecture\n\nCadence has introduced a new top level entity, Global , which provides support for replication of execution across clusters. A global domain can be configured with more than one clusters, but can only be active in one of the clusters at any point of time. 
We call it passive or standby when not active in other clusters.\n\nThe number of standby clusters can be zero, if a global domain only configured to one cluster. This is preferred/recommended.\n\nAny workflow of a global domain can only make make progress in its active cluster. And the workflow progress is replicated to other standby clusters. For example, starting workflow by calling StartWorkflow, or starting activity(by PollForActivityTask API), can only be processed in its active cluster. After active cluster made progress, standby clusters (if any) will poll the history from active to replicate the workflow states.\n\nHowever, standby clusters can also receive the requests, e.g. for starting workflows or starting activities. They know which cluster the domain is active at. So the requests can be routed to the active clusters. This is called api-forwarding in Cadence. api-forwarding makes it possible to have no downtime during failover. There are two api-forwarding policy: selected-api-forwarding and all-domain-api-forwarding policy.\n\nWhen using selected-api-forwarding, applications need to run different set of activity & workflow polling on every cluster. Cadence will only dispatch tasks on the current active cluster; on the standby cluster will sit idle until the Global is failed over. This is recommended if XDC is being used in multiple clusters running in very remote data centers(regions), which forwarding is expensive to do.\n\nWhen using all-domain-api-forwarding, applications only need to run activity & workflow polling on one cluster. This makes it easier for the application setup. This is recommended when clusters are all in local or nearby datacenters. See more details in discussion.\n\n\n# Conflict Resolution\n\nUnlike local which provide at-most-once semantics for execution, Global can only support at-least-once semantics. Cadence global domain relies on asynchronous replication of across clusters, so in the event of a failover it is possible that gets dispatched again on the new active cluster due to a replication lag. This also means that whenever is updated after a failover by the new cluster, any previous replication for that execution cannot be applied. This results in loss of some progress made by the in the previous active cluster. During such conflict resolution, Cadence re-injects any external like to the new history before discarding replication . Even though some progress could rollback during failovers, Cadence provides the guarantee that won’t get stuck and will continue to make forward progress.\n\n\n# Global Domain Concepts, Configuration and Operation\n\n\n# Concepts\n\n# IsGlobal\n\nThis config is used to distinguish local to the cluster from the global . It controls the creation of replication on updates allowing the state to be replicated across clusters. This is a read-only setting that can only be set when the is provisioned.\n\n# Clusters\n\nA list of clusters where the can fail over to, including the current active cluster. This is also a read-only setting that can only be set when the is provisioned. A re-replication feature on the roadmap will allow updating this config to add/remove clusters in the future.\n\n# Active Cluster Name\n\nName of the current active cluster for the Global . This config is updated each time the Global is failed over to another cluster.\n\n# Failover Version\n\nUnique failover version which also represents the current active cluster for Global . 
Cadence allows failover to be triggered from any cluster, so failover version is designed in a way to not allow conflicts if failover is mistakenly triggered simultaneously on two clusters.\n\n\n# Operate by CLI\n\nThe Cadence can also be used to the config or perform failovers. Here are some useful commands.\n\n# Describe Global Domain\n\nThe following command can be used to describe Global metadata:\n\n$ cadence --do cadence-canary-xdc d desc\nName: cadence-canary-xdc\nDescription: cadence canary cross dc testing domain\nOwnerEmail: cadence-dev@cadenceworkflow.io\nDomainData:\nStatus: REGISTERED\nRetentionInDays: 7\nEmitMetrics: true\nActiveClusterName: dc1\nClusters: dc1, dc2\n\n\n# Failover Global Domain using domain update command(being deprecated in favor of managed graceful failover)\n\nThe following command can be used to failover Global my-domain-global to the dc2 cluster:\n\n$ cadence --do my-domain-global d up --ac dc2\n\n\n# Failover Global Domain using Managed Graceful Failover\n\nFirst of all, update the domain to enable this feature for the domain\n\n$ cadence --do test-global-domain-0 d update --domain_data IsManagedByCadence:true\n$ cadence --do test-global-domain-1 d update --domain_data IsManagedByCadence:true\n$ cadence --do test-global-domain-2 d update --domain_data IsManagedByCadence:true\n...\n\n\nThen you can start failover the those global domains using managed failover:\n\ncadence admin cluster failover start --source_cluster dc1 --target_cluster dc2\n\n\nThis will failover all the domains with IsManagedByCadence:true from dc1 to dc2.\n\nYou can provide more detailed options when using the command, and also watch the progress of the failover. Feel free to explore the cadence admin cluster failover tab.\n\n\n# Running Locally\n\nThe best way is to use Cadence docker-compose: docker-compose -f docker-compose-multiclusters.yml up\n\n\n# Running in Production\n\nEnable global domain feature needs to be enabled in static config.\n\nHere we use clusterDCA and clusterDCB as an example. We pick clusterDCA as the primary(used to called "master") cluster. The only difference of being a primary cluster is that it is responsible for domain registration. Primary can be changed later but it needs to be the same across all clusters.\n\nThe ClusterMeta config of clusterDCA should be\n\ndcRedirectionPolicy:\n policy: "selected-apis-forwarding"\n\nclusterMetadata:\n enableGlobalDomain: true\n failoverVersionIncrement: 10\n masterClusterName: "clusterDCA"\n currentClusterName: "clusterDCA"\n clusterInformation:\n clusterDCA:\n enabled: true\n initialFailoverVersion: 1\n rpcName: "cadence-frontend"\n rpcAddress: "<>:<>"\n clusterDCB:\n enabled: true\n initialFailoverVersion: 0\n rpcName: "cadence-frontend"\n rpcAddress: "<>:<>"\n\n\nAnd ClusterMeta config of clusterDCB should be\n\ndcRedirectionPolicy:\n policy: "selected-apis-forwarding"\n\nclusterMetadata:\n enableGlobalDomain: true\n failoverVersionIncrement: 10\n masterClusterName: "clusterDCA"\n currentClusterName: "clusterDCB"\n clusterInformation:\n clusterDCA:\n enabled: true\n initialFailoverVersion: 1\n rpcName: "cadence-frontend"\n rpcAddress: "<>:<>"\n clusterDCB:\n enabled: true\n initialFailoverVersion: 0\n\n rpcName: "cadence-frontend"\n rpcAddress: "<>:<>"\n\n\nAfter the configuration is deployed:\n\n 1. Register a global domain cadence --do domain register --global_domain true --clusters clusterDCA clusterDCB --active_cluster clusterDCA\n\n 2. 
Run some workflow and failover domain from one to another cadence --do domain update --active_cluster clusterDCB\n\nThen the domain should be failed over to clusterDCB. Now worklfows are read-only in clusterDCA. So your workers polling tasks from clusterDCA will become idle.\n\nNote 1: that even though clusterDCA is standy/read-only for this domain, it can be active for another domain. So being active/standy is per domain basis not per clusters. In other words, for example if you use XDC in case of DC failure of clusterDCA, you need to failover all domains from clusterDCA to clusterDCB.\n\nNote 2: even though a domain is standy/read-only in a cluster, say clusterDCA, sending write requests(startWF, signalWF, etc) could still work because there is a forwarding component in the Frontend service. It will try to re-route the requests to an active cluster for the domain.',normalizedContent:'# cross-dc replication\n\nthe cadence global feature provides clients with the capability to continue their from another cluster in the event of a datacenter failover. although you can configure a global to be replicated to any number of clusters, it is only considered active in a single cluster.\n\n\n# global domains architecture\n\ncadence has introduced a new top level entity, global , which provides support for replication of execution across clusters. a global domain can be configured with more than one clusters, but can only be active in one of the clusters at any point of time. we call it passive or standby when not active in other clusters.\n\nthe number of standby clusters can be zero, if a global domain only configured to one cluster. this is preferred/recommended.\n\nany workflow of a global domain can only make make progress in its active cluster. and the workflow progress is replicated to other standby clusters. for example, starting workflow by calling startworkflow, or starting activity(by pollforactivitytask api), can only be processed in its active cluster. after active cluster made progress, standby clusters (if any) will poll the history from active to replicate the workflow states.\n\nhowever, standby clusters can also receive the requests, e.g. for starting workflows or starting activities. they know which cluster the domain is active at. so the requests can be routed to the active clusters. this is called api-forwarding in cadence. api-forwarding makes it possible to have no downtime during failover. there are two api-forwarding policy: selected-api-forwarding and all-domain-api-forwarding policy.\n\nwhen using selected-api-forwarding, applications need to run different set of activity & workflow polling on every cluster. cadence will only dispatch tasks on the current active cluster; on the standby cluster will sit idle until the global is failed over. this is recommended if xdc is being used in multiple clusters running in very remote data centers(regions), which forwarding is expensive to do.\n\nwhen using all-domain-api-forwarding, applications only need to run activity & workflow polling on one cluster. this makes it easier for the application setup. this is recommended when clusters are all in local or nearby datacenters. see more details in discussion.\n\n\n# conflict resolution\n\nunlike local which provide at-most-once semantics for execution, global can only support at-least-once semantics. 
cadence global domain relies on asynchronous replication of across clusters, so in the event of a failover it is possible that gets dispatched again on the new active cluster due to a replication lag. this also means that whenever is updated after a failover by the new cluster, any previous replication for that execution cannot be applied. this results in loss of some progress made by the in the previous active cluster. during such conflict resolution, cadence re-injects any external like to the new history before discarding replication . even though some progress could rollback during failovers, cadence provides the guarantee that won’t get stuck and will continue to make forward progress.\n\n\n# global domain concepts, configuration and operation\n\n\n# concepts\n\n# isglobal\n\nthis config is used to distinguish local to the cluster from the global . it controls the creation of replication on updates allowing the state to be replicated across clusters. this is a read-only setting that can only be set when the is provisioned.\n\n# clusters\n\na list of clusters where the can fail over to, including the current active cluster. this is also a read-only setting that can only be set when the is provisioned. a re-replication feature on the roadmap will allow updating this config to add/remove clusters in the future.\n\n# active cluster name\n\nname of the current active cluster for the global . this config is updated each time the global is failed over to another cluster.\n\n# failover version\n\nunique failover version which also represents the current active cluster for global . cadence allows failover to be triggered from any cluster, so failover version is designed in a way to not allow conflicts if failover is mistakenly triggered simultaneously on two clusters.\n\n\n# operate by cli\n\nthe cadence can also be used to the config or perform failovers. here are some useful commands.\n\n# describe global domain\n\nthe following command can be used to describe global metadata:\n\n$ cadence --do cadence-canary-xdc d desc\nname: cadence-canary-xdc\ndescription: cadence canary cross dc testing domain\nowneremail: cadence-dev@cadenceworkflow.io\ndomaindata:\nstatus: registered\nretentionindays: 7\nemitmetrics: true\nactiveclustername: dc1\nclusters: dc1, dc2\n\n\n# failover global domain using domain update command(being deprecated in favor of managed graceful failover)\n\nthe following command can be used to failover global my-domain-global to the dc2 cluster:\n\n$ cadence --do my-domain-global d up --ac dc2\n\n\n# failover global domain using managed graceful failover\n\nfirst of all, update the domain to enable this feature for the domain\n\n$ cadence --do test-global-domain-0 d update --domain_data ismanagedbycadence:true\n$ cadence --do test-global-domain-1 d update --domain_data ismanagedbycadence:true\n$ cadence --do test-global-domain-2 d update --domain_data ismanagedbycadence:true\n...\n\n\nthen you can start failover the those global domains using managed failover:\n\ncadence admin cluster failover start --source_cluster dc1 --target_cluster dc2\n\n\nthis will failover all the domains with ismanagedbycadence:true from dc1 to dc2.\n\nyou can provide more detailed options when using the command, and also watch the progress of the failover. 
feel free to explore the cadence admin cluster failover tab.\n\n\n# running locally\n\nthe best way is to use cadence docker-compose: docker-compose -f docker-compose-multiclusters.yml up\n\n\n# running in production\n\nenable global domain feature needs to be enabled in static config.\n\nhere we use clusterdca and clusterdcb as an example. we pick clusterdca as the primary(used to called "master") cluster. the only difference of being a primary cluster is that it is responsible for domain registration. primary can be changed later but it needs to be the same across all clusters.\n\nthe clustermeta config of clusterdca should be\n\ndcredirectionpolicy:\n policy: "selected-apis-forwarding"\n\nclustermetadata:\n enableglobaldomain: true\n failoverversionincrement: 10\n masterclustername: "clusterdca"\n currentclustername: "clusterdca"\n clusterinformation:\n clusterdca:\n enabled: true\n initialfailoverversion: 1\n rpcname: "cadence-frontend"\n rpcaddress: "<>:<>"\n clusterdcb:\n enabled: true\n initialfailoverversion: 0\n rpcname: "cadence-frontend"\n rpcaddress: "<>:<>"\n\n\nand clustermeta config of clusterdcb should be\n\ndcredirectionpolicy:\n policy: "selected-apis-forwarding"\n\nclustermetadata:\n enableglobaldomain: true\n failoverversionincrement: 10\n masterclustername: "clusterdca"\n currentclustername: "clusterdcb"\n clusterinformation:\n clusterdca:\n enabled: true\n initialfailoverversion: 1\n rpcname: "cadence-frontend"\n rpcaddress: "<>:<>"\n clusterdcb:\n enabled: true\n initialfailoverversion: 0\n\n rpcname: "cadence-frontend"\n rpcaddress: "<>:<>"\n\n\nafter the configuration is deployed:\n\n 1. register a global domain cadence --do domain register --global_domain true --clusters clusterdca clusterdcb --active_cluster clusterdca\n\n 2. run some workflow and failover domain from one to another cadence --do domain update --active_cluster clusterdcb\n\nthen the domain should be failed over to clusterdcb. now worklfows are read-only in clusterdca. so your workers polling tasks from clusterdca will become idle.\n\nnote 1: that even though clusterdca is standy/read-only for this domain, it can be active for another domain. so being active/standy is per domain basis not per clusters. in other words, for example if you use xdc in case of dc failure of clusterdca, you need to failover all domains from clusterdca to clusterdcb.\n\nnote 2: even though a domain is standy/read-only in a cluster, say clusterdca, sending write requests(startwf, signalwf, etc) could still work because there is a forwarding component in the frontend service. 
it will try to re-route the requests to an active cluster for the domain.',charsets:{}},{title:"Search workflows(Advanced visibility)",frontmatter:{layout:"default",title:"Search workflows(Advanced visibility)",permalink:"/docs/concepts/search-workflows",readingShow:"top"},regularPath:"/docs/03-concepts/09-search-workflows.html",relativePath:"docs/03-concepts/09-search-workflows.md",key:"v-3c665d38",path:"/docs/concepts/search-workflows/",headers:[{level:2,title:"Introduction",slug:"introduction",normalizedTitle:"introduction",charIndex:47},{level:2,title:"Memo vs Search Attributes",slug:"memo-vs-search-attributes",normalizedTitle:"memo vs search attributes",charIndex:843},{level:2,title:"Search Attributes (Go Client Usage)",slug:"search-attributes-go-client-usage",normalizedTitle:"search attributes (go client usage)",charIndex:2531},{level:3,title:"Allow Listing Search Attributes",slug:"allow-listing-search-attributes",normalizedTitle:"allow listing search attributes",charIndex:2885},{level:3,title:"Value Types",slug:"value-types",normalizedTitle:"value types",charIndex:5087},{level:3,title:"Limit",slug:"limit",normalizedTitle:"limit",charIndex:5298},{level:3,title:"Upsert Search Attributes in Workflow",slug:"upsert-search-attributes-in-workflow",normalizedTitle:"upsert search attributes in workflow",charIndex:5631},{level:3,title:"ContinueAsNew and Cron",slug:"continueasnew-and-cron",normalizedTitle:"continueasnew and cron",charIndex:6932},{level:2,title:"Query Capabilities",slug:"query-capabilities",normalizedTitle:"query capabilities",charIndex:7084},{level:3,title:"Supported Operators",slug:"supported-operators",normalizedTitle:"supported operators",charIndex:7264},{level:3,title:"Default Attributes",slug:"default-attributes",normalizedTitle:"default attributes",charIndex:7364},{level:3,title:"General Notes About Queries",slug:"general-notes-about-queries",normalizedTitle:"general notes about queries",charIndex:9280},{level:2,title:"Tools Support",slug:"tools-support",normalizedTitle:"tools support",charIndex:9802},{level:3,title:"CLI",slug:"cli",normalizedTitle:"cli",charIndex:470},{level:3,title:"Web UI Support",slug:"web-ui-support",normalizedTitle:"web ui support",charIndex:11655},{level:3,title:"TLS Support for connecting to Elasticsearch",slug:"tls-support-for-connecting-to-elasticsearch",normalizedTitle:"tls support for connecting to elasticsearch",charIndex:11818},{level:2,title:"Running Locally",slug:"running-locally",normalizedTitle:"running locally",charIndex:12432},{level:2,title:"Running in Production",slug:"running-in-production",normalizedTitle:"running in production",charIndex:13237}],codeSwitcherOptions:{},headersStr:"Introduction Memo vs Search Attributes Search Attributes (Go Client Usage) Allow Listing Search Attributes Value Types Limit Upsert Search Attributes in Workflow ContinueAsNew and Cron Query Capabilities Supported Operators Default Attributes General Notes About Queries Tools Support CLI Web UI Support TLS Support for connecting to Elasticsearch Running Locally Running in Production",content:'# Searching Workflows(Advanced visibility)\n\n\n# Introduction\n\nCadence supports creating with customized key-value pairs, updating the information within the code, and then listing/searching with a SQL-like . For example, you can create with keys city and age, then search all with city = seattle and age > 22.\n\nAlso note that normal properties like start time and type can be queried as well. 
For example, the following could be specified when listing workflows from the CLI or using the list APIs (Go, Java):\n\nWorkflowType = "main.Workflow" AND CloseStatus != "completed" AND (StartTime > \n "2019-06-07T16:46:34-08:00" OR CloseTime > "2019-06-07T16:46:34-08:00") \n ORDER BY StartTime DESC \n\n\nIn other places, this is also called advanced visibility, while basic visibility refers to basic listing without the ability to search.\n\n\n# Memo vs Search Attributes\n\nCadence offers two methods for creating workflows with key-value pairs: memo and search attributes. Memo can only be provided on workflow start. Also, memo data are not indexed, and are therefore not searchable. Memo data are visible when listing workflows using the list APIs. Search attributes data are indexed so you can search workflows by querying on these attributes. However, search attributes require the use of Elasticsearch.\n\nMemo and search attributes are available in the Go client in StartWorkflowOptions.\n\ntype StartWorkflowOptions struct {\n // ...\n\n // Memo - Optional non-indexed info that will be shown in list workflow.\n Memo map[string]interface{}\n\n // SearchAttributes - Optional indexed info that can be used in query of List/Scan/Count workflow APIs (only\n // supported when Cadence server is using Elasticsearch). The key and value type must be registered on Cadence server side.\n // Use GetSearchAttributes API to get valid key and corresponding value type.\n SearchAttributes map[string]interface{}\n}\n\n\nIn the Java client, the WorkflowOptions.Builder has similar methods for memo and search attributes.\n\nSome important distinctions between memo and search attributes:\n\n * Memo can support all data types because it is not indexed. Search attributes only support basic data types (including String(aka Text), Int, Float, Bool, Datetime) because they are indexed by Elasticsearch.\n * Memo does not restrict key names. Search attributes require that keys are allowlisted before using them because Elasticsearch has a limit on indexed keys.\n * Memo doesn\'t require Cadence clusters to depend on Elasticsearch while search attributes only work with Elasticsearch.\n\n\n# Search Attributes (Go Client Usage)\n\nWhen using the Cadence Go client, provide key-value pairs as SearchAttributes in StartWorkflowOptions.\n\nSearchAttributes is map[string]interface{} where the keys need to be allowlisted so that Cadence knows the attribute key name and value type. 
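For concreteness, a minimal Go sketch of starting a workflow with both fields populated; the task list name, input, and cadenceClient construction are illustrative assumptions, while CustomKeywordField and CustomIntField are from the pre-allowlisted attributes listed below:

package sample

import (
	"context"
	"time"

	"go.uber.org/cadence/client"
)

// startWithAttributes starts a workflow with both memo (non-indexed)
// and search attributes (indexed; keys must be allowlisted).
func startWithAttributes(ctx context.Context, cadenceClient client.Client) error {
	opts := client.StartWorkflowOptions{
		TaskList:                     "sample-tasklist",
		ExecutionStartToCloseTimeout: time.Hour,
		// Shown when listing workflows, but not searchable.
		Memo: map[string]interface{}{
			"Description": "ad-hoc debugging note, any serializable type",
		},
		// Searchable via the List/Scan/Count APIs.
		SearchAttributes: map[string]interface{}{
			"CustomKeywordField": "seattle",
			"CustomIntField":     22,
		},
	}
	_, err := cadenceClient.StartWorkflow(ctx, opts, "main.Workflow", "input")
	return err
}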
The value provided in the map must be the same type as registered.\n\n\n# Allow Listing Search Attributes\n\nStart by the list of search attributes using the\n\n$ cadence --domain samples-domain cl get-search-attr\n+---------------------+------------+\n| KEY | VALUE TYPE |\n+---------------------+------------+\n| CloseStatus | INT |\n| CloseTime | INT |\n| CustomBoolField | DOUBLE |\n| CustomDatetimeField | DATETIME |\n| CustomDomain | KEYWORD |\n| CustomDoubleField | BOOL |\n| CustomIntField | INT |\n| CustomKeywordField | KEYWORD |\n| CustomStringField | STRING |\n| DomainID | KEYWORD |\n| ExecutionTime | INT |\n| HistoryLength | INT |\n| RunID | KEYWORD |\n| StartTime | INT |\n| WorkflowID | KEYWORD |\n| WorkflowType | KEYWORD |\n+---------------------+------------+\n\n\nUse the admin to add a new search attribute:\n\ncadence --domain samples-domain adm cl asa --search_attr_key NewKey --search_attr_type 1\n\n\nThe numbers for the attribute types map as follows:\n\n * 0 = String(Text)\n * 1 = Keyword\n * 2 = Int\n * 3 = Double\n * 4 = Bool\n * 5 = DateTime\n\n# Keyword vs String(Text)\n\nNote 1: String has been renamed to Text in ElasticSearch. Cadence is also planning to rename it.\n\nNote 2: Keyword and String(Text) are concepts taken from Elasticsearch. Each word in a String(Text) is considered a searchable keyword. For a UUID, that can be problematic as Elasticsearch will index each portion of the UUID separately. To have the whole string considered as a searchable keyword, use the Keyword type.\n\nFor example, key RunID with value "2dd29ab7-2dd8-4668-83e0-89cae261cfb1"\n\n * as a Keyword will only be matched by RunID = "2dd29ab7-2dd8-4668-83e0-89cae261cfb1" (or in the future with regular expressions)\n * as a String(Text) will be matched by RunID = "2dd8", which may cause unwanted matches\n\nNote: String(Text) type can not be used in Order By .\n\nThere are some pre-allowlisted search attributes that are handy for testing:\n\n * CustomKeywordField\n * CustomIntField\n * CustomDoubleField\n * CustomBoolField\n * CustomDatetimeField\n * CustomStringField\n\nTheir types are indicated in their names.\n\n\n# Value Types\n\nHere are the Search Attribute value types and their correspondent Golang types:\n\n * Keyword = string\n * Int = int64\n * Double = float64\n * Bool = bool\n * Datetime = time.Time\n * String = string\n\n\n# Limit\n\nWe recommend limiting the number of Elasticsearch indexes by enforcing limits on the following:\n\n * Number of keys: 100 per\n * Size of value: 2kb per value\n * Total size of key and values: 40kb per\n\nCadence reserves keys like DomainID, WorkflowID, and RunID. These can only be used in list . The values are not updatable.\n\n\n# Upsert Search Attributes in Workflow\n\nUpsertSearchAttributes is used to add or update search attributes from within the code.\n\nGo samples for search attributes can be found at github.com/uber-common/cadence-samples.\n\nUpsertSearchAttributes will merge attributes to the existing map in the . 
Consider this example code:\n\nfunc MyWorkflow(ctx workflow.Context, input string) error {\n\n attr1 := map[string]interface{}{\n "CustomIntField": 1,\n "CustomBoolField": true,\n }\n workflow.UpsertSearchAttributes(ctx, attr1)\n\n attr2 := map[string]interface{}{\n "CustomIntField": 2,\n "CustomKeywordField": "seattle",\n }\n workflow.UpsertSearchAttributes(ctx, attr2)\n}\n\n\nAfter the second call to UpsertSearchAttributes, the map will contain:\n\nmap[string]interface{}{\n "CustomIntField": 2,\n "CustomBoolField": true,\n "CustomKeywordField": "seattle",\n}\n\n\nThere is no support for removing a field. To achieve a similar effect, set the field to a sentinel value. For example, to remove “CustomKeywordField”, update it to “impossibleVal”. Then searching CustomKeywordField != ‘impossibleVal’ will match with CustomKeywordField not equal to "impossibleVal", which includes without the CustomKeywordField set.\n\nUse workflow.GetInfo to get current search attributes.\n\n\n# ContinueAsNew and Cron\n\nWhen performing a ContinueAsNew or using Cron, search attributes (and memo) will be carried over to the new run by default.\n\n\n# Query Capabilities\n\nby using a SQL-like where clause when listing workflows from the CLI or using the list APIs (Go, Java).\n\nNote that you will only see from one domain when .\n\n\n# Supported Operators\n\n * AND, OR, ()\n * =, !=, >, >=, <, <=\n * IN\n * BETWEEN ... AND\n * ORDER BY\n\n\n# Default Attributes\n\nMore and more default attributes are added in newer versions. Please get the by using the get-search-attr command or the GetSearchAttributes API. Some names and types are as follows:\n\nKEY VALUE TYPE\nCloseStatus INT\nCloseTime INT\nCustomBoolField DOUBLE\nCustomDatetimeField DATETIME\nCustomDomain KEYWORD\nCustomDoubleField BOOL\nCustomIntField INT\nCustomKeywordField KEYWORD\nCustomStringField STRING\nDomainID KEYWORD\nExecutionTime INT\nHistoryLength INT\nRunID KEYWORD\nStartTime INT\nWorkflowID KEYWORD\nWorkflowType KEYWORD\nTasklist KEYWORD\n\nThere are some special considerations for these attributes:\n\n * CloseStatus, CloseTime, DomainID, ExecutionTime, HistoryLength, RunID, StartTime, WorkflowID, WorkflowType are reserved by Cadence and are read-only\n * Starting from v0.18.0, Cadence automatically maps(case insensitive) string to CloseStatus so that you don\'t need to use integer in the query, to make it easier to use.\n * 0 = "completed"\n * 1 = "failed"\n * 2 = "canceled"\n * 3 = "terminated"\n * 4 = "continued_as_new"\n * 5 = "timed_out"\n * StartTime, CloseTime and ExecutionTime are stored as INT, but support using both EpochTime in nanoseconds, and string in RFC3339 format (ex. "2006-01-02T15:04:05+07:00")\n * CloseTime, CloseStatus, HistoryLength are only present in closed\n * ExecutionTime is for Retry/Cron user to a that will run in the future\n * To list only open , add CloseTime = missing to the end of the .\n\nIf you use retry or the cron feature to that will start execution in a certain time range, you can add predicates on ExecutionTime. For example: ExecutionTime > 2019-01-01T10:00:00-07:00. 
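A sketch of the same kind of query issued through the Go client's ListWorkflow and CountWorkflow APIs; the domain name and query string are examples, the request types come from the generated shared package, and cadenceClient construction is omitted:

package sample

import (
	"context"

	s "go.uber.org/cadence/.gen/go/shared"
	"go.uber.org/cadence/client"
)

func strPtr(v string) *string { return &v }
func int32Ptr(v int32) *int32 { return &v }

// listFailedWorkflows counts and then pages through workflows matching a
// SQL-like visibility query, mirroring the CLI examples that follow.
func listFailedWorkflows(ctx context.Context, cadenceClient client.Client) error {
	query := `CloseStatus = "failed" AND StartTime > "2019-06-07T16:46:34-08:00"`

	count, err := cadenceClient.CountWorkflow(ctx, &s.CountWorkflowExecutionsRequest{
		Domain: strPtr("samples-domain"),
		Query:  strPtr(query),
	})
	if err != nil {
		return err
	}
	_ = count.GetCount() // total matches, cheaper than listing

	resp, err := cadenceClient.ListWorkflow(ctx, &s.ListWorkflowExecutionsRequest{
		Domain:   strPtr("samples-domain"),
		PageSize: int32Ptr(100),
		Query:    strPtr(query),
	})
	if err != nil {
		return err
	}
	for _, info := range resp.Executions {
		_ = info.GetExecution().GetWorkflowId()
	}
	return nil
}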
Note that if predicates on ExecutionTime are included, only cron or a that needs to retry will be returned.\n\n\n# General Notes About Queries\n\n * Pagesize default is 1000, and cannot be larger than 10k\n * Range on Cadence timestamp (StartTime, CloseTime, ExecutionTime) cannot be larger than 9223372036854775807 (maxInt64 - 1001)\n * by time range will have 1ms resolution\n * column names are case sensitive\n * ListWorkflow may take longer when retrieving a large number of (10M+)\n * To retrieve a large number of without caring about order, use the ScanWorkflow API\n * To efficiently count the number of , use the CountWorkflow API\n\n\n# Tools Support\n\n\n# CLI\n\nSupport for search attributes is available as of version 0.6.0 of the Cadence server. You can also use the from the latest CLI Docker image (supported on 0.6.4 or later).\n\n# Start Workflow with Search Attributes\n\ncadence --do samples-domain workflow start --tl helloWorldGroup --wt main.Workflow --et 60 --dt 10 -i \'"vancexu"\' -search_attr_key \'CustomIntField | CustomKeywordField | CustomStringField | CustomBoolField | CustomDatetimeField\' -search_attr_value \'5 | keyword1 | vancexu test | true | 2019-06-07T16:16:36-08:00\'\n\n\n# Search Workflows with List API/Command\n\ncadence --do samples-domain wf list -q \'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\' -psa\n\n\ncadence --do samples-domain wf list -q \'CustomKeywordField in ("keyword2", "keyword1") and CustomIntField >= 5 and CloseTime between "2018-06-07T16:16:36-08:00" and "2019-06-07T16:46:34-08:00" order by CustomDatetimeField desc\' -psa\n\n\nTo list only open , add CloseTime = missing to the end of the .\n\nNote that can support more than one type of filter:\n\ncadence --do samples-domain wf list -q \'WorkflowType = "main.Workflow" and (WorkflowID = "1645a588-4772-4dab-b276-5f9db108b3a8" or RunID = "be66519b-5f09-40cd-b2e8-20e4106244dc")\'\n\n\ncadence --do samples-domain wf list -q \'WorkflowType = "main.Workflow" StartTime > "2019-06-07T16:46:34-08:00" and CloseTime = missing\'\n\n\nAll above command can be done with ListWorkflowExecutions API.\n\n# Count Workflows with Count API/Command\n\ncadence --do samples-domain wf count -q \'(CustomKeywordField = "keyword1" and CustomIntField >= 5) or CustomKeywordField = "keyword2"\'\n\n\ncadence --do samples-domain wf count -q \'CloseStatus="failed"\'\n\n\ncadence --do samples-domain wf count -q \'CloseStatus!="completed"\'\n\n\nAll above command can be done with CountWorkflowExecutions API.\n\n\n# Web UI Support\n\nare supported in Cadence Web as of release 3.4.0. Use the "Basic/Advanced" button to switch to "Advanced" mode and type the in the search box.\n\n\n# TLS Support for connecting to Elasticsearch\n\nIf your elasticsearch deployment requires TLS to connect to it, you can add the following to your config template. The TLS config is optional and when not provided it defaults to tls.enabled to false\n\nelasticsearch:\n url:\n scheme: "https"\n host: "127.0.0.1:9200"\n indices:\n visibility: cadence-visibility-dev\n tls:\n enabled: true\n caFile: /secrets/cadence/elasticsearch_cert.pem\n enableHostVerification: true\n serverName: myServerName\n certFile: /secrets/cadence/certfile.crt\n keyFile: /secrets/cadence/keyfile.key\n sslmode: false\n\n\n\n# Running Locally\n\n 1. Increase Docker memory to higher than 6GB. Navigate to Docker -> Preferences -> Advanced -> Memory\n 2. Get the Cadence Docker compose file. 
# Web UI Support

Queries are supported in Cadence Web as of release 3.4.0. Use the "Basic/Advanced" button to switch to "Advanced" mode and type the query in the search box.

# TLS Support for connecting to Elasticsearch

If your Elasticsearch deployment requires TLS to connect to it, you can add the following to your config template. The TLS config is optional; when it is not provided, tls.enabled defaults to false.

elasticsearch:
  url:
    scheme: "https"
    host: "127.0.0.1:9200"
  indices:
    visibility: cadence-visibility-dev
  tls:
    enabled: true
    caFile: /secrets/cadence/elasticsearch_cert.pem
    enableHostVerification: true
    serverName: myServerName
    certFile: /secrets/cadence/certfile.crt
    keyFile: /secrets/cadence/keyfile.key
    sslmode: false

# Running Locally

 1. Increase Docker memory to higher than 6GB. Navigate to Docker -> Preferences -> Advanced -> Memory.
 2. Get the Cadence Docker compose file. Run curl -O https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose-es.yml
 3. Start Cadence Docker (which contains Apache Kafka, Apache Zookeeper, and Elasticsearch) using docker-compose -f docker-compose-es.yml up
 4. From the Docker output log, make sure Elasticsearch and Cadence started correctly. If you encounter an insufficient disk space error, try docker system prune -a --volumes
 5. Register a local domain and start using it: cadence --do samples-domain d re
 6. Add the key to Elasticsearch and also allowlist the search attribute: cadence --do domain adm cl asa --search_attr_key NewKey --search_attr_type 1

# Running in Production

To enable this feature in a Cadence cluster:

 * Register the index schema on Elasticsearch. Run the two cURL commands following this script:
   * Create an index template by using the schema; choose v6/v7 based on your Elasticsearch version.
   * Create an index following the index template, and remember the name.
 * Register a topic on Kafka, and remember the name:
   * Set up the right number of partitions based on your expected throughput (it can be scaled up later).
 * Configure Cadence for Elasticsearch + Kafka as shown in this documentation. Based on the full static config, you may add some other fields like AuthN. Similarly for Kafka.

To add new search attributes:

 1. Add the key to Elasticsearch: cadence --do domain adm cl asa --search_attr_key NewKey --search_attr_type 1
 2. Update the dynamic configuration to allowlist the new attribute.

Note: starting a workflow with search attributes but without the advanced visibility feature will succeed as normal, but the workflow will not be searchable and will not be shown in list results.
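To make the note above concrete, here is a minimal Go sketch of starting a workflow with search attributes (assuming c is an already-constructed client.Client and the keys are allowlisted; the IDs and names are illustrative):

package main

import (
	"context"
	"time"

	"go.uber.org/cadence/client"
)

// startWithSearchAttributes starts a workflow with indexed key-value pairs.
// Without the advanced visibility feature the start still succeeds, but the
// workflow will not be searchable.
func startWithSearchAttributes(ctx context.Context, c client.Client) error {
	opts := client.StartWorkflowOptions{
		ID:                              "search-attr-demo",
		TaskList:                        "helloWorldGroup",
		ExecutionStartToCloseTimeout:    time.Minute,
		DecisionTaskStartToCloseTimeout: 10 * time.Second,
		SearchAttributes: map[string]interface{}{
			"CustomIntField":     5,
			"CustomKeywordField": "keyword1",
		},
	}
	_, err := c.StartWorkflow(ctx, opts, "main.Workflow", "vancexu")
	return err
}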
# Task lists

When a workflow invokes an activity, it sends the ScheduleActivityTask decision to the Cadence service. As a result, the service updates the workflow state and dispatches an activity task to a worker that implements the activity. Instead of calling the worker directly, an intermediate queue is used. So the service adds an activity task to this queue and a worker receives the task using a long poll request. Cadence calls the queue used to dispatch activity tasks an activity task list.

Similarly, when a workflow needs to handle an external event, a decision task is created. A decision task list is used to deliver it to the workflow worker (also called the decider).
While Cadence task lists are queues, they have some differences from commonly used queuing technologies. The main one is that they do not require explicit registration and are created on demand. The number of task lists is not limited. A common use case is to have a task list per worker process and use it to deliver tasks to that process. Another use case is to have a task list per pool of workers.

There are multiple advantages of using a task list to deliver tasks instead of invoking a worker through a synchronous RPC:

 * A worker doesn't need to have any open ports, which is more secure.
 * A worker doesn't need to advertise itself through DNS or any other network discovery mechanism.
 * When all workers are down, messages are persisted in a task list waiting for the workers to recover.
 * A worker polls for a message only when it has spare capacity, so it never gets overloaded.
 * Automatic load balancing across a large number of workers.
 * Task lists support server-side throttling. This allows you to limit the dispatch rate to the pool of workers and still supports adding a worker with a higher rate when spikes happen (see the sketch below).
 * Task lists can be used to route a request to specific pools of workers or even a specific process.
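As a sketch of how a worker subscribes to a task list, consider the following Go example (assuming service is an already-built workflowserviceclient.Interface; the domain, task list name, and rate are illustrative):

package main

import (
	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"
)

// startWorker long-polls the given task list for decision and activity
// tasks. The task list does not need to be registered; it is created on
// demand when first used.
func startWorker(service workflowserviceclient.Interface) worker.Worker {
	w := worker.New(service, "samples-domain", "helloWorldGroup", worker.Options{
		// Server-side throttling: cap the rate at which activity tasks are
		// dispatched to the whole pool of workers polling this task list.
		TaskListActivitiesPerSecond: 100,
	})
	if err := w.Start(); err != nil {
		panic(err)
	}
	return w
}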
# Using HTTP API

# Introduction

From version 1.2.0 onwards, Cadence has introduced HTTP API support, which allows you to interact with the Cadence server using the HTTP protocol. To put this into perspective, HTTP/JSON communication is a flexible method for server interaction. In the context of Cadence, this implies that a range of RPC methods can be exposed and invoked using the HTTP protocol. This enhancement broadens the scope of interaction with the Cadence server, enabling the use of any programming language that supports HTTP. Consequently, you can leverage this functionality to initiate or terminate workflows from your bash scripts, monitor the status of your cluster, or execute any other operation that the Cadence RPC declaration supports.

# Setup

# Updating Cadence configuration files

To enable the "start workflow" HTTP API, add an http section to the Cadence RPC configuration settings (e.g., in base.yaml or development.yaml):

services:
  frontend:
    rpc:
      <...>
      http:
        port: 8800
        procedures:
          - uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution

Then you can run the Cadence server in any of the following ways to use the HTTP API.

# Using local binaries

Build and run ./cadence-server as described in Developing Cadence.

# Using "docker run" command

Refer to the instructions described in Using docker image for production.

Additionally, add two more environment variables:

docker run
<...>
  -e FRONTEND_HTTP_PORT=8800 -- HTTP port to listen on
  -e FRONTEND_HTTP_PROCEDURES=uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution -- List of API methods exposed
  ubercadence/server:

# Using docker-compose

Add the HTTP environment variables to the docker/docker-compose.yml configuration:

cadence:
  image: ubercadence/server:master-auto-setup
  ports:
    - "8000:8000"
    - "8001:8001"
    - "8002:8002"
    - "8003:8003"
    - "7933:7933"
    - "7934:7934"
    - "7935:7935"
    - "7939:7939"
    - "7833:7833"
    - "8800:8800"
  environment:
    - "CASSANDRA_SEEDS=cassandra"
    - "PROMETHEUS_ENDPOINT_0=0.0.0.0:8000"
    - "PROMETHEUS_ENDPOINT_1=0.0.0.0:8001"
    - "PROMETHEUS_ENDPOINT_2=0.0.0.0:8002"
    - "PROMETHEUS_ENDPOINT_3=0.0.0.0:8003"
    - "DYNAMIC_CONFIG_FILE_PATH=config/dynamicconfig/development.yaml"
    - "FRONTEND_HTTP_PORT=8800"
    - "FRONTEND_HTTP_PROCEDURES=uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution"

# Using HTTP API

Start a workflow using the curl command:

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: rpc-client-name' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution' \
  -d @data.json

Where the data.json content looks something like this:

{
  "domain": "sample-domain",
  "workflowId": "workflowid123",
  "execution_start_to_close_timeout": "11s",
  "task_start_to_close_timeout": "10s",
  "workflowType": {
    "name": "workflow_type"
  },
  "taskList": {
    "name": "tasklist-name"
  },
  "identity": "My custom caller identity",
  "requestId": "4D1E4058-6FCF-4BA8-BF16-8FA8B02F9651"
}
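Because the endpoint speaks plain HTTP/JSON, any language with an HTTP client works. For illustration, here is a minimal Go sketch of the same request (the payload mirrors data.json above; when a workflow input is supplied, the input.data field must be base64-encoded JSON):

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// startWorkflowHTTP posts a StartWorkflowExecution request to a local
// Cadence frontend with the HTTP API enabled on port 8800.
func startWorkflowHTTP() error {
	payload := []byte(`{
	  "domain": "sample-domain",
	  "workflowId": "workflowid123",
	  "execution_start_to_close_timeout": "11s",
	  "task_start_to_close_timeout": "10s",
	  "workflowType": {"name": "workflow_type"},
	  "taskList": {"name": "tasklist-name"},
	  "identity": "My custom caller identity",
	  "requestId": "4D1E4058-6FCF-4BA8-BF16-8FA8B02F9651"
	}`)

	req, err := http.NewRequest(http.MethodPost, "http://0.0.0.0:8800", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	// The same rpc-* headers used by the curl example above.
	req.Header.Set("context-ttl-ms", "2000")
	req.Header.Set("rpc-caller", "rpc-client-name")
	req.Header.Set("rpc-service", "cadence-frontend")
	req.Header.Set("rpc-encoding", "json")
	req.Header.Set("rpc-procedure", "uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("HTTP %d: %s\n", resp.StatusCode, body)
	return nil
}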
# HTTP API Reference

# Admin API

----------------------------------------

POST uber.cadence.admin.v1.AdminAPI::AddSearchAttribute

# Add search attributes to whitelist

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.admin.v1.AdminAPI::AddSearchAttribute

# Example payload

{
  "search_attribute": {
    "custom_key": 1
  }
}

Search attribute types:

TYPE      VALUE
String    1
Keyword   2
Int       3
Double    4
DateTime  5

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.admin.v1.AdminAPI::AddSearchAttribute' \
  -d \
  '{
    "search_attribute": {
      "custom_key": 1
    }
  }'

# Example successful response

HTTP code: 200

{}

----------------------------------------

POST uber.cadence.admin.v1.AdminAPI::CloseShard

# Close a shard given a shard ID

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.admin.v1.AdminAPI::CloseShard

# Example payload

{
  "shard_id": 0
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.admin.v1.AdminAPI::CloseShard' \
  -d \
  '{
    "shard_id": 0
  }'

# Example successful response

HTTP code: 200

{}

----------------------------------------

POST uber.cadence.admin.v1.AdminAPI::CountDLQMessages

# Count DLQ messages

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.admin.v1.AdminAPI::CountDLQMessages

# Example payload

None

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.admin.v1.AdminAPI::CountDLQMessages'

# Example successful response

HTTP code: 200

{
  "history": []
}

----------------------------------------

POST uber.cadence.admin.v1.AdminAPI::DescribeCluster

# Describe cluster information

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.admin.v1.AdminAPI::DescribeCluster

# Example payload

None

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.admin.v1.AdminAPI::DescribeCluster'

# Example successful response

HTTP code: 200

{
  "supportedClientVersions": {
    "goSdk": "1.7.0",
    "javaSdk": "1.5.0"
  },
  "membershipInfo": {
    "currentHost": {
      "identity": "127.0.0.1:7933"
    },
    "reachableMembers": [
      "127.0.0.1:7933",
      "127.0.0.1:7934",
      "127.0.0.1:7935",
      "127.0.0.1:7939"
    ],
    "rings": [
      {
        "role": "cadence-frontend",
        "memberCount": 1,
        "members": [
          {
            "identity": "127.0.0.1:7933"
          }
        ]
      },
      {
        "role": "cadence-history",
        "memberCount": 1,
        "members": [
          {
            "identity": "127.0.0.1:7934"
          }
        ]
      },
      {
        "role": "cadence-matching",
        "memberCount": 1,
        "members": [
          {
            "identity": "127.0.0.1:7935"
          }
        ]
      },
      {
        "role": "cadence-worker",
        "memberCount": 1,
        "members": [
          {
            "identity": "127.0.0.1:7939"
          }
        ]
      }
    ]
  },
  "persistenceInfo": {
    "historyStore": {
      "backend": "shardedNosql"
    },
    "visibilityStore": {
      "backend": "cassandra",
      "features": [
        {
          "key": "advancedVisibilityEnabled"
        }
      ]
    }
  }
}

----------------------------------------

POST uber.cadence.admin.v1.AdminAPI::DescribeHistoryHost

# Describe internal information of a history host

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.admin.v1.AdminAPI::DescribeHistoryHost

# Example payload

{
  "host_address": "127.0.0.1:7934"
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.admin.v1.AdminAPI::DescribeHistoryHost' \
  -d \
  '{
    "host_address": "127.0.0.1:7934"
  }'

# Example successful response

HTTP code: 200

{
  "numberOfShards": 4,
  "domainCache": {
    "numOfItemsInCacheByID": 5,
    "numOfItemsInCacheByName": 5
  },
  "shardControllerStatus": "started",
  "address": "127.0.0.1:7934"
}

----------------------------------------

POST uber.cadence.admin.v1.AdminAPI::DescribeShardDistribution

# List shard distribution

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.admin.v1.AdminAPI::DescribeShardDistribution

# Example payload

{
  "page_size": 100,
  "page_id": 0
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.admin.v1.AdminAPI::DescribeShardDistribution' \
  -d \
  '{
    "page_size": 100,
    "page_id": 0
  }'

# Example successful response

HTTP code: 200

{
  "numberOfShards": 4,
  "shards": {
    "0": "127.0.0.1:7934",
    "1": "127.0.0.1:7934",
    "2": "127.0.0.1:7934",
    "3": "127.0.0.1:7934"
  }
}

----------------------------------------

POST uber.cadence.admin.v1.AdminAPI::DescribeWorkflowExecution

# Describe internal information of a workflow execution

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.admin.v1.AdminAPI::DescribeWorkflowExecution

# Example payload

{
  "domain": "sample-domain",
  "workflow_execution": {
    "workflow_id": "sample-workflow-id",
    "run_id": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"
  }
}

run_id is optional and allows describing a specific run.

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.admin.v1.AdminAPI::DescribeWorkflowExecution' \
  -d \
  '{
    "domain": "sample-domain",
    "workflow_execution": {
      "workflow_id": "sample-workflow-id",
      "run_id": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"
    }
  }' | tr -d '\'

# Example successful response

HTTP code: 200

{
  "shardId": 3,
  "historyAddr": "127.0.0.1:7934",
  "mutableStateInDatabase": {
    "ActivityInfos": {},
    "TimerInfos": {},
    "ChildExecutionInfos": {},
    "RequestCancelInfos": {},
    "SignalInfos": {},
    "SignalRequestedIDs": {},
    "ExecutionInfo": {
      "DomainID": "d7aff879-f524-43a8-b340-5a223a69d75b",
      "WorkflowID": "sample-workflow-id",
      "RunID": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f",
      "FirstExecutionRunID": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f",
      "ParentDomainID": "",
      "ParentWorkflowID": "",
      "ParentRunID": "",
      "InitiatedID": -7,
      "CompletionEventBatchID": 3,
      "CompletionEvent": null,
      "TaskList": "sample-task-list",
      "WorkflowTypeName": "sample-workflow-type",
      "WorkflowTimeout": 11,
      "DecisionStartToCloseTimeout": 10,
      "ExecutionContext": null,
      "State": 2,
      "CloseStatus": 6,
      "LastFirstEventID": 3,
      "LastEventTaskID": 8388614,
      "NextEventID": 4,
      "LastProcessedEvent": -23,
      "StartTimestamp": "2023-09-08T05:13:04.24Z",
      "LastUpdatedTimestamp": "2023-09-08T05:13:15.247Z",
      "CreateRequestID": "8049b932-6c2f-415a-9bb2-241dcf4cfc9c",
      "SignalCount": 0,
      "DecisionVersion": 0,
      "DecisionScheduleID": 2,
      "DecisionStartedID": -23,
      "DecisionRequestID": "emptyUuid",
      "DecisionTimeout": 10,
      "DecisionAttempt": 0,
      "DecisionStartedTimestamp": 0,
      "DecisionScheduledTimestamp": 1694149984240504000,
      "DecisionOriginalScheduledTimestamp": 1694149984240503000,
      "CancelRequested": false,
      "CancelRequestID": "",
      "StickyTaskList": "",
      "StickyScheduleToStartTimeout": 0,
      "ClientLibraryVersion": "",
      "ClientFeatureVersion": "",
      "ClientImpl": "",
      "AutoResetPoints": {},
      "Memo": null,
      "SearchAttributes": null,
      "PartitionConfig": null,
      "Attempt": 0,
      "HasRetryPolicy": false,
      "InitialInterval": 0,
      "BackoffCoefficient": 0,
      "MaximumInterval": 0,
      "ExpirationTime": "0001-01-01T00:00:00Z",
      "MaximumAttempts": 0,
      "NonRetriableErrors": null,
      "BranchToken": null,
      "CronSchedule": "",
      "IsCron": false,
      "ExpirationSeconds": 0
    },
    "ExecutionStats": null,
    "BufferedEvents": [],
    "VersionHistories": {
      "CurrentVersionHistoryIndex": 0,
      "Histories": [
        {
          "BranchToken": "WQsACgAAACRjYzA5ZDVkZC1iMmZhLTQ2ZDgtYjQyNi01NGM5NmIxMmQxOGYLABQAAAAkYWM5YmIwMmUtMjllYy00YWEyLTlkZGUtZWQ0YWU1NWRhMjlhDwAeDAAAAAAA",
          "Items": [
            {
              "EventID": 3,
              "Version": 0
            }
          ]
        }
      ]
    },
    "ReplicationState": null,
    "Checksum": {
      "Version": 0,
      "Flavor": 0,
      "Value": null
    }
  }
}

----------------------------------------

# Domain API

----------------------------------------

POST uber.cadence.api.v1.DomainAPI::DescribeDomain

# Describe an existing workflow domain

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.DomainAPI::DescribeDomain

# Example payload

{
  "name": "sample-domain",
  "uuid": "d7aff879-f524-43a8-b340-5a223a69d75b"
}

uuid of the domain is optional.

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.DomainAPI::DescribeDomain' \
  -d \
  '{
    "name": "sample-domain"
  }'

# Example successful response

HTTP code: 200

{
  "domain": {
    "id": "d7aff879-f524-43a8-b340-5a223a69d75b",
    "name": "sample-domain",
    "status": "DOMAIN_STATUS_REGISTERED",
    "data": {},
    "workflowExecutionRetentionPeriod": "259200s",
    "badBinaries": {
      "binaries": {}
    },
    "historyArchivalStatus": "ARCHIVAL_STATUS_ENABLED",
    "historyArchivalUri": "file:///tmp/cadence_archival/development",
    "visibilityArchivalStatus": "ARCHIVAL_STATUS_ENABLED",
    "visibilityArchivalUri": "file:///tmp/cadence_vis_archival/development",
    "activeClusterName": "cluster0",
    "clusters": [
      {
        "clusterName": "cluster0"
      }
    ],
    "isGlobalDomain": true,
    "isolationGroups": {}
  }
}

----------------------------------------

POST uber.cadence.api.v1.DomainAPI::ListDomains

# List all domains in the cluster

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.DomainAPI::ListDomains

# Example payload

{
  "page_size": 100
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.DomainAPI::ListDomains' \
  -d \
  '{
    "page_size": 100
  }'

# Example successful response

HTTP code: 200

{
  "domains": [
    {
      "id": "3116607e-419b-4783-85fc-47726a4c3fe9",
      "name": "cadence-batcher",
      "status": "DOMAIN_STATUS_REGISTERED",
      "description": "Cadence internal system domain",
      "data": {},
      "workflowExecutionRetentionPeriod": "604800s",
      "badBinaries": {
        "binaries": {}
      },
      "historyArchivalStatus": "ARCHIVAL_STATUS_DISABLED",
      "visibilityArchivalStatus": "ARCHIVAL_STATUS_DISABLED",
      "activeClusterName": "cluster0",
      "clusters": [
        {
          "clusterName": "cluster0"
        }
      ],
      "failoverVersion": "-24",
      "isolationGroups": {}
    },
    {
      "id": "59c51119-1b41-4a28-986d-d6e377716f82",
      "name": "cadence-shadower",
      "status": "DOMAIN_STATUS_REGISTERED",
      "description": "Cadence internal system domain",
      "data": {},
      "workflowExecutionRetentionPeriod": "604800s",
      "badBinaries": {
        "binaries": {}
      },
      "historyArchivalStatus": "ARCHIVAL_STATUS_DISABLED",
      "visibilityArchivalStatus": "ARCHIVAL_STATUS_DISABLED",
      "activeClusterName": "cluster0",
      "clusters": [
        {
          "clusterName": "cluster0"
        }
      ],
      "failoverVersion": "-24",
      "isolationGroups": {}
    },
    {
      "id": "32049b68-7872-4094-8e63-d0dd59896a83",
      "name": "cadence-system",
      "status": "DOMAIN_STATUS_REGISTERED",
      "description": "cadence system workflow domain",
      "ownerEmail": "cadence-dev-group@uber.com",
      "data": {},
      "workflowExecutionRetentionPeriod": "259200s",
      "badBinaries": {
        "binaries": {}
      },
      "historyArchivalStatus": "ARCHIVAL_STATUS_DISABLED",
      "visibilityArchivalStatus": "ARCHIVAL_STATUS_DISABLED",
      "activeClusterName": "cluster0",
      "clusters": [
        {
          "clusterName": "cluster0"
        }
      ],
      "failoverVersion": "-24",
      "isolationGroups": {}
    },
    {
      "id": "d7aff879-f524-43a8-b340-5a223a69d75b",
      "name": "sample-domain",
      "status": "DOMAIN_STATUS_REGISTERED",
      "data": {},
      "workflowExecutionRetentionPeriod": "259200s",
      "badBinaries": {
        "binaries": {}
      },
      "historyArchivalStatus": "ARCHIVAL_STATUS_ENABLED",
      "historyArchivalUri": "file:///tmp/cadence_archival/development",
      "visibilityArchivalStatus": "ARCHIVAL_STATUS_ENABLED",
      "visibilityArchivalUri": "file:///tmp/cadence_vis_archival/development",
      "activeClusterName": "cluster0",
      "clusters": [
        {
          "clusterName": "cluster0"
        }
      ],
      "isGlobalDomain": true,
      "isolationGroups": {}
    }
  ],
  "nextPageToken": ""
}

----------------------------------------
# Meta API

----------------------------------------

POST uber.cadence.api.v1.MetaAPI::Health

# Health check

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.MetaAPI::Health

# Example payload

None

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.MetaAPI::Health'

# Example successful response

HTTP code: 200

{
  "ok": true,
  "message": "OK"
}

----------------------------------------

# Visibility API

----------------------------------------

POST uber.cadence.api.v1.VisibilityAPI::GetSearchAttributes

# Get search attributes

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.VisibilityAPI::GetSearchAttributes

# Example payload

None

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.VisibilityAPI::GetSearchAttributes'

# Example successful response

HTTP code: 200

{
  "keys": {
    "BinaryChecksums": "INDEXED_VALUE_TYPE_KEYWORD",
    "CadenceChangeVersion": "INDEXED_VALUE_TYPE_KEYWORD",
    "CloseStatus": "INDEXED_VALUE_TYPE_INT",
    "CloseTime": "INDEXED_VALUE_TYPE_INT",
    "CustomBoolField": "INDEXED_VALUE_TYPE_BOOL",
    "CustomDatetimeField": "INDEXED_VALUE_TYPE_DATETIME",
    "CustomDomain": "INDEXED_VALUE_TYPE_KEYWORD",
    "CustomDoubleField": "INDEXED_VALUE_TYPE_DOUBLE",
    "CustomIntField": "INDEXED_VALUE_TYPE_INT",
    "CustomKeywordField": "INDEXED_VALUE_TYPE_KEYWORD",
    "CustomStringField": "INDEXED_VALUE_TYPE_STRING",
    "DomainID": "INDEXED_VALUE_TYPE_KEYWORD",
    "ExecutionTime": "INDEXED_VALUE_TYPE_INT",
    "HistoryLength": "INDEXED_VALUE_TYPE_INT",
    "IsCron": "INDEXED_VALUE_TYPE_KEYWORD",
    "NewKey": "INDEXED_VALUE_TYPE_KEYWORD",
    "NumClusters": "INDEXED_VALUE_TYPE_INT",
    "Operator": "INDEXED_VALUE_TYPE_KEYWORD",
    "Passed": "INDEXED_VALUE_TYPE_BOOL",
    "RolloutID": "INDEXED_VALUE_TYPE_KEYWORD",
    "RunID": "INDEXED_VALUE_TYPE_KEYWORD",
    "ShardID": "INDEXED_VALUE_TYPE_INT",
    "StartTime": "INDEXED_VALUE_TYPE_INT",
    "TaskList": "INDEXED_VALUE_TYPE_KEYWORD",
    "TestNewKey": "INDEXED_VALUE_TYPE_STRING",
    "UpdateTime": "INDEXED_VALUE_TYPE_INT",
    "WorkflowID": "INDEXED_VALUE_TYPE_KEYWORD",
    "WorkflowType": "INDEXED_VALUE_TYPE_KEYWORD",
    "addon": "INDEXED_VALUE_TYPE_KEYWORD",
    "addon-type": "INDEXED_VALUE_TYPE_KEYWORD",
    "environment": "INDEXED_VALUE_TYPE_KEYWORD",
    "project": "INDEXED_VALUE_TYPE_KEYWORD",
    "service": "INDEXED_VALUE_TYPE_KEYWORD",
    "user": "INDEXED_VALUE_TYPE_KEYWORD"
  }
}

----------------------------------------

POST uber.cadence.api.v1.VisibilityAPI::ListClosedWorkflowExecutions

# List closed workflow executions in a domain

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.VisibilityAPI::ListClosedWorkflowExecutions

# Example payloads

startTimeFilter is required, while executionFilter and typeFilter are optional.

{
  "domain": "sample-domain",
  "start_time_filter": {
    "earliest_time": "2023-01-01T00:00:00Z",
    "latest_time": "2023-12-31T00:00:00Z"
  }
}

{
  "domain": "sample-domain",
  "start_time_filter": {
    "earliest_time": "2023-01-01T00:00:00Z",
    "latest_time": "2023-12-31T00:00:00Z"
  },
  "execution_filter": {
    "workflow_id": "sample-workflow-id",
    "run_id": "71c3d47b-454a-4315-97c7-15355140094b"
  }
}

{
  "domain": "sample-domain",
  "start_time_filter": {
    "earliest_time": "2023-01-01T00:00:00Z",
    "latest_time": "2023-12-31T00:00:00Z"
  },
  "type_filter": {
    "name": "sample-workflow-type"
  }
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.VisibilityAPI::ListClosedWorkflowExecutions' \
  -d \
  '{
    "domain": "sample-domain",
    "start_time_filter": {
      "earliest_time": "2023-01-01T00:00:00Z",
      "latest_time": "2023-12-31T00:00:00Z"
    }
  }'

# Example successful response

HTTP code: 200

{
  "executions": [
    {
      "workflowExecution": {
        "workflowId": "sample-workflow-id",
        "runId": "71c3d47b-454a-4315-97c7-15355140094b"
      },
      "type": {
        "name": "sample-workflow-type"
      },
      "startTime": "2023-09-08T06:31:18.778Z",
      "closeTime": "2023-09-08T06:32:18.782Z",
      "closeStatus": "WORKFLOW_EXECUTION_CLOSE_STATUS_TIMED_OUT",
      "historyLength": "5",
      "executionTime": "2023-09-08T06:31:18.778Z",
      "memo": {},
      "searchAttributes": {
        "indexedFields": {}
      },
      "taskList": "sample-task-list"
    }
  ],
  "nextPageToken": ""
}

----------------------------------------

POST uber.cadence.api.v1.VisibilityAPI::ListOpenWorkflowExecutions

# List open workflow executions in a domain

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.VisibilityAPI::ListOpenWorkflowExecutions

# Example payloads

startTimeFilter is required, while executionFilter and typeFilter are optional.

{
  "domain": "sample-domain",
  "start_time_filter": {
    "earliest_time": "2023-01-01T00:00:00Z",
    "latest_time": "2023-12-31T00:00:00Z"
  }
}

{
  "domain": "sample-domain",
  "start_time_filter": {
    "earliest_time": "2023-01-01T00:00:00Z",
    "latest_time": "2023-12-31T00:00:00Z"
  },
  "execution_filter": {
    "workflow_id": "sample-workflow-id",
    "run_id": "71c3d47b-454a-4315-97c7-15355140094b"
  }
}

{
  "domain": "sample-domain",
  "start_time_filter": {
    "earliest_time": "2023-01-01T00:00:00Z",
    "latest_time": "2023-12-31T00:00:00Z"
  },
  "type_filter": {
    "name": "sample-workflow-type"
  }
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.VisibilityAPI::ListOpenWorkflowExecutions' \
  -d \
  '{
    "domain": "sample-domain",
    "start_time_filter": {
      "earliest_time": "2023-01-01T00:00:00Z",
      "latest_time": "2023-12-31T00:00:00Z"
    }
  }'

# Example successful response

HTTP code: 200

{
  "executions": [
    {
      "workflowExecution": {
        "workflowId": "sample-workflow-id",
        "runId": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"
      },
      "type": {
        "name": "sample-workflow-type"
      },
      "startTime": "2023-09-12T02:17:46.596Z",
      "executionTime": "2023-09-12T02:17:46.596Z",
      "memo": {},
      "searchAttributes": {
        "indexedFields": {}
      },
      "taskList": "sample-task-list"
    }
  ],
  "nextPageToken": ""
}

----------------------------------------
# Workflow API

----------------------------------------

POST uber.cadence.api.v1.WorkflowAPI::DescribeTaskList

# Describe poller info of a task list

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.WorkflowAPI::DescribeTaskList

# Example payload

{
  "domain": "sample-domain",
  "task_list": {
    "name": "sample-task-list",
    "kind": 1
  },
  "task_list_type": 1,
  "include_task_list_status": true
}

task_list kind is optional.

Task list kinds:

TYPE                VALUE
TaskListKindNormal  1
TaskListKindSticky  2

Task list types:

TYPE                  VALUE
TaskListTypeDecision  1
TaskListTypeActivity  2

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::DescribeTaskList' \
  -d \
  '{
    "domain": "sample-domain",
    "task_list": {
      "name": "sample-task-list",
      "kind": 1
    },
    "task_list_type": 1,
    "include_task_list_status": true
  }'

# Example successful response

HTTP code: 200

{
  "taskListStatus": {
    "readLevel": "200000",
    "ratePerSecond": 100000,
    "taskIdBlock": {
      "startId": "200001",
      "endId": "300000"
    }
  }
}

----------------------------------------

POST uber.cadence.api.v1.WorkflowAPI::DescribeWorkflowExecution

# Describe a workflow execution

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.WorkflowAPI::DescribeWorkflowExecution

# Example payload

{
  "domain": "sample-domain",
  "workflow_execution": {
    "workflow_id": "sample-workflow-id",
    "run_id": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"
  }
}

run_id is optional and allows describing a specific run.

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::DescribeWorkflowExecution' \
  -d \
  '{
    "domain": "sample-domain",
    "workflow_execution": {
      "workflow_id": "sample-workflow-id",
      "run_id": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"
    }
  }'

# Example successful response

HTTP code: 200

{
  "executionConfiguration": {
    "taskList": {
      "name": "sample-task-list"
    },
    "executionStartToCloseTimeout": "11s",
    "taskStartToCloseTimeout": "10s"
  },
  "workflowExecutionInfo": {
    "workflowExecution": {
      "workflowId": "sample-workflow-id",
      "runId": "5dbabeeb-82a2-41ed-bf55-dc732a4d46ce"
    },
    "type": {
      "name": "sample-workflow-type"
    },
    "startTime": "2023-09-12T02:17:46.596Z",
    "closeTime": "2023-09-12T02:17:57.602707Z",
    "closeStatus": "WORKFLOW_EXECUTION_CLOSE_STATUS_TIMED_OUT",
    "historyLength": "3",
    "executionTime": "2023-09-12T02:17:46.596Z",
    "memo": {},
    "searchAttributes": {},
    "autoResetPoints": {}
  },
  "pendingDecision": {
    "state": "PENDING_DECISION_STATE_SCHEDULED",
    "scheduledTime": "2023-09-12T02:17:46.596982Z",
    "originalScheduledTime": "2023-09-12T02:17:46.596982Z"
  }
}

----------------------------------------

POST uber.cadence.api.v1.WorkflowAPI::GetClusterInfo

# Get supported client versions for the cluster

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.WorkflowAPI::GetClusterInfo

# Example payload

None

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::GetClusterInfo'

# Example successful response

HTTP code: 200

{
  "supportedClientVersions": {
    "goSdk": "1.7.0",
    "javaSdk": "1.5.0"
  }
}

----------------------------------------

POST uber.cadence.api.v1.WorkflowAPI::GetTaskListsByDomain

# Get the task lists in a domain

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.WorkflowAPI::GetTaskListsByDomain

# Example payload

{
  "domain": "sample-domain"
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::GetTaskListsByDomain' \
  -d \
  '{
    "domain": "sample-domain"
  }'

# Example successful response

HTTP code: 200

{
  "decisionTaskListMap": {},
  "activityTaskListMap": {}
}

----------------------------------------

POST uber.cadence.api.v1.WorkflowAPI::GetWorkflowExecutionHistory

# Get the history of a workflow execution

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.WorkflowAPI::GetWorkflowExecutionHistory

# Example payload

{
  "domain": "sample-domain",
  "workflow_execution": {
    "workflow_id": "sample-workflow-id"
  }
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::GetWorkflowExecutionHistory' \
  -d \
  '{
    "domain": "sample-domain",
    "workflow_execution": {
      "workflow_id": "sample-workflow-id"
    }
  }'

# Example successful response

HTTP code: 200

{
  "history": {
    "events": [
      {
        "eventId": "1",
        "eventTime": "2023-09-12T05:34:46.107550Z",
        "taskId": "9437321",
        "workflowExecutionStartedEventAttributes": {
          "workflowType": {
            "name": "sample-workflow-type"
          },
          "taskList": {
            "name": "sample-task-list"
          },
          "input": {
            "data": "IkN1cmwhIg=="
          },
          "executionStartToCloseTimeout": "61s",
          "taskStartToCloseTimeout": "60s",
          "originalExecutionRunId": "fd7c2283-79dd-458c-8306-e2d1d8217613",
          "identity": "client-name-visible-in-history",
          "firstExecutionRunId": "fd7c2283-79dd-458c-8306-e2d1d8217613",
          "firstDecisionTaskBackoff": "0s"
        }
      },
      {
        "eventId": "2",
        "eventTime": "2023-09-12T05:34:46.107565Z",
        "taskId": "9437322",
        "decisionTaskScheduledEventAttributes": {
          "taskList": {
            "name": "sample-task-list"
          },
          "startToCloseTimeout": "60s"
        }
      },
      {
        "eventId": "3",
        "eventTime": "2023-09-12T05:34:59.184511Z",
        "taskId": "9437330",
        "workflowExecutionCancelRequestedEventAttributes": {
          "cause": "dummy",
          "identity": "client-name-visible-in-history"
        }
      },
      {
        "eventId": "4",
        "eventTime": "2023-09-12T05:35:47.112156Z",
        "taskId": "9437332",
        "workflowExecutionTimedOutEventAttributes": {
          "timeoutType": "TIMEOUT_TYPE_START_TO_CLOSE"
        }
      }
    ]
  }
}

----------------------------------------

POST uber.cadence.api.v1.WorkflowAPI::ListTaskListPartitions

# List all task list partitions and the hostname for each partition

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.WorkflowAPI::ListTaskListPartitions

# Example payload

{
  "domain": "sample-domain",
  "task_list": {
    "name": "sample-task-list"
  }
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::ListTaskListPartitions' \
  -d \
  '{
    "domain": "sample-domain",
    "task_list": {
      "name": "sample-task-list"
    }
  }'

# Example successful response

HTTP code: 200

{
  "activityTaskListPartitions": [
    {
      "key": "sample-task-list",
      "ownerHostName": "127.0.0.1:7935"
    }
  ],
  "decisionTaskListPartitions": [
    {
      "key": "sample-task-list",
      "ownerHostName": "127.0.0.1:7935"
    }
  ]
}

----------------------------------------

POST uber.cadence.api.v1.WorkflowAPI::RefreshWorkflowTasks

# Refresh all the tasks of a workflow

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.WorkflowAPI::RefreshWorkflowTasks

# Example payload

{
  "domain": "sample-domain",
  "workflow_execution": {
    "workflow_id": "sample-workflow-id",
    "run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"
  }
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::RefreshWorkflowTasks' \
  -d \
  '{
    "domain": "sample-domain",
    "workflow_execution": {
      "workflow_id": "sample-workflow-id",
      "run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"
    }
  }'

# Example successful response

HTTP code: 200

{}

----------------------------------------

POST uber.cadence.api.v1.WorkflowAPI::RequestCancelWorkflowExecution

# Cancel a workflow execution

# Headers

NAME            EXAMPLE
context-ttl-ms  2000
rpc-caller      curl-client
rpc-service     cadence-frontend
rpc-encoding    json
rpc-procedure   uber.cadence.api.v1.WorkflowAPI::RequestCancelWorkflowExecution

# Example payload

{
  "domain": "sample-domain",
  "workflow_execution": {
    "workflow_id": "sample-workflow-id",
    "run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"
  },
  "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",
  "cause": "dummy",
  "identity": "client-name-visible-in-history",
  "first_execution_run_id": "b7973fb8-2229-4fe7-ad70-c919c1ae8774"
}

# Example cURL

curl -X POST http://0.0.0.0:8800 \
  -H 'context-ttl-ms: 2000' \
  -H 'rpc-caller: curl-client' \
  -H 'rpc-service: cadence-frontend' \
  -H 'rpc-encoding: json' \
  -H 'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::RequestCancelWorkflowExecution' \
  -d \
  '{
    "domain": "sample-domain",
    "workflow_execution": {
      "workflow_id": "sample-workflow-id",
      "run_id": "fd7c2283-79dd-458c-8306-e2d1d8217613"
    },
    "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",
    "cause": "dummy",
    "identity": "client-name-visible-in-history",
    "first_execution_run_id": "fd7c2283-79dd-458c-8306-e2d1d8217613"
  }'

# Example successful response

HTTP code: 200

{}

----------------------------------------
\'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::RestartWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n },\n "identity": "client-name-visible-in-history",\n "reason": "dummy"\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "runId": "82914458-3221-42b4-ae54-2e66dff864f7"\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::SignalWithStartWorkflowExecution\n\n# Signal the current open workflow if exists, or attempt to start a new run based on IDResuePolicy and signals it\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPISignalWithStartWorkflowExecution\n\n# Example payload\n\n{\n "start_request": {\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "input": {\n "data": "IkN1cmwhIg=="\n }\n },\n "signal_name": "channelA",\n "signal_input": {\n "data": "MTA="\n }\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::SignalWithStartWorkflowExecution\' \\\n -d \\\n \'{\n "start_request": {\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "input": {\n "data": "IkN1cmwhIg=="\n }\n },\n "signal_name": "channelA",\n "signal_input": {\n "data": "MTA="\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "runId": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::SignalWorkflowExecution\n\n# Signal a workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPISignalWorkflowExecution\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n },\n "signal_name": "channelA",\n "signal_input": {\n "data": "MTA="\n }\n}\n\n\nrun_id is optional and allows to signal a specific run.\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::SignalWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n },\n "signal_name": "channelA",\n "signal_input": {\n "data": "MTA="\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 
200\n\n{}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution\n\n# Start a new workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPIStartWorkflowExecution\n\n# Example payload\n\n{\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "input": {\n "data": "IkN1cmwhIg=="\n }\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::StartWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_id": "sample-workflow-id",\n "execution_start_to_close_timeout": "61s",\n "task_start_to_close_timeout": "60s",\n "workflow_type": {\n "name": "sample-workflow-type"\n },\n "task_list": {\n "name": "sample-task-list"\n },\n "identity": "client-name-visible-in-history",\n "request_id": "8049B932-6C2F-415A-9BB2-241DCF4CFC9C",\n "input": {\n "data": "IkN1cmwhIg=="\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{\n "runId": "cc09d5dd-b2fa-46d8-b426-54c96b12d18f"\n}\n\n\n----------------------------------------\n\nPOST uber.cadence.api.v1.WorkflowAPI::TerminateWorkflowExecution\n\n# Terminate a new workflow execution\n\n# Headers\n\nNAME EXAMPLE\ncontext-ttl-ms 2000\nrpc-caller curl-client\nrpc-service cadence-frontend\nrpc-encoding json\nrpc-procedure uber.cadence.api.v1.WorkflowAPITerminateWorkflowExecution\n\n# Example payloads\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n}\n\n\n{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id",\n "run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n },\n "reason": "dummy",\n "identity": "client-name-visible-in-history",\n "first_execution_run_id": "0f95ad5b-03bc-4c6b-8cf0-1f3ea08eb86a"\n}\n\n\n# Example cURL\n\ncurl -X POST http://0.0.0.0:8800 \\\n -H \'context-ttl-ms: 2000\' \\\n -H \'rpc-caller: curl-client\' \\\n -H \'rpc-service: cadence-frontend\' \\\n -H \'rpc-encoding: json\' \\\n -H \'rpc-procedure: uber.cadence.api.v1.WorkflowAPI::TerminateWorkflowExecution\' \\\n -d \\\n \'{\n "domain": "sample-domain",\n "workflow_execution": {\n "workflow_id": "sample-workflow-id"\n }\n }\'\n\n\n# Example successful response\n\nHTTP code: 200\n\n{}\n\n\n----------------------------------------',normalizedContent:'# using http api\n\n\n# introduction\n\nfrom version 1.2.0 onwards, cadence has introduced http api support, which allows you to interact with the cadence server using the http protocol. to put this into perspective, http/json communication is a flexible method for server interaction. in the context of cadence, this implies that a range of rpc methods can be exposed and invoked using the http protocol. this enhancement broadens the scope of interaction with the cadence server, enabling the use of any programming language that supports http. 
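The input.data and signal_input.data fields in the payloads above are base64-encoded: "IkN1cmwhIg==" decodes to the JSON string "Curl!", and "MTA=" decodes to 10. Below is a minimal sketch of producing such a value, assuming the workflow uses the default JSON serialization for its inputs (the class name is illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PayloadEncoding {
  public static void main(String[] args) {
    // Workflow input is serialized JSON, so a bare string keeps its quotes.
    String inputJson = "\"Curl!\"";
    String encoded = Base64.getEncoder()
        .encodeToString(inputJson.getBytes(StandardCharsets.UTF_8));
    System.out.println(encoded); // prints IkN1cmwhIg==
  }
}
```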
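All of the endpoints above share the same calling convention, so any language with an HTTP client can be used in place of cURL. As an illustration, here is a minimal sketch using Java's built-in java.net.http client to call GetTaskListsByDomain, assuming that procedure is included in the server's exposed procedure list (the class and caller names are illustrative; rpc-caller is an arbitrary client name):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CadenceHttpCall {
  public static void main(String[] args) throws Exception {
    String payload = "{\"domain\": \"sample-domain\"}";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://0.0.0.0:8800"))
        // The same rpc-* headers used by the cURL examples above.
        .header("context-ttl-ms", "2000")
        .header("rpc-caller", "java-client")
        .header("rpc-service", "cadence-frontend")
        .header("rpc-encoding", "json")
        .header("rpc-procedure", "uber.cadence.api.v1.WorkflowAPI::GetTaskListsByDomain")
        .POST(HttpRequest.BodyPublishers.ofString(payload))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```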
packages",charIndex:169},{level:3,title:"com.uber.cadence.activity",slug:"com-uber-cadence-activity",normalizedTitle:"com.uber.cadence.activity",charIndex:190},{level:3,title:"com.uber.cadence.client",slug:"com-uber-cadence-client",normalizedTitle:"com.uber.cadence.client",charIndex:296},{level:3,title:"com.uber.cadence.workflow",slug:"com-uber-cadence-workflow",normalizedTitle:"com.uber.cadence.workflow",charIndex:446},{level:3,title:"com.uber.cadence.worker",slug:"com-uber-cadence-worker",normalizedTitle:"com.uber.cadence.worker",charIndex:506},{level:3,title:"com.uber.cadence.testing",slug:"com-uber-cadence-testing",normalizedTitle:"com.uber.cadence.testing",charIndex:572},{level:2,title:"Samples",slug:"samples",normalizedTitle:"samples",charIndex:26},{level:3,title:"com.uber.cadence.samples.hello",slug:"com-uber-cadence-samples-hello",normalizedTitle:"com.uber.cadence.samples.hello",charIndex:654},{level:3,title:"com.uber.cadence.samples.bookingsaga",slug:"com-uber-cadence-samples-bookingsaga",normalizedTitle:"com.uber.cadence.samples.bookingsaga",charIndex:843},{level:3,title:"com.uber.cadence.samples.fileprocessing",slug:"com-uber-cadence-samples-fileprocessing",normalizedTitle:"com.uber.cadence.samples.fileprocessing",charIndex:942}],codeSwitcherOptions:{},headersStr:"JavaDoc Packages com.uber.cadence.activity com.uber.cadence.client com.uber.cadence.workflow com.uber.cadence.worker com.uber.cadence.testing Samples com.uber.cadence.samples.hello com.uber.cadence.samples.bookingsaga com.uber.cadence.samples.fileprocessing",content:"# Client SDK Overview\n\n * Samples: https://github.com/uber/cadence-java-samples\n * JavaDoc documentation: https://www.javadoc.io/doc/com.uber.cadence/cadence-client\n\n\n# JavaDoc Packages\n\n\n# com.uber.cadence.activity\n\nAPIs to implement activity: accessing activity info, or sending heartbeat.\n\n\n# com.uber.cadence.client\n\nAPIs for external application code to interact with Cadence workflows: start workflows, send signals or query workflows.\n\n\n# com.uber.cadence.workflow\n\nAPIs to implement workflows.\n\n\n# com.uber.cadence.worker\n\nAPIs to configure and start workers.\n\n\n# com.uber.cadence.testing\n\nAPIs to write unit tests for workflows.\n\n\n# Samples\n\n\n# com.uber.cadence.samples.hello\n\nSamples of how to use the basic feature: activity, local activity, ChildWorkflow, Query, etc. This is the most important package you need to start with.\n\n\n# com.uber.cadence.samples.bookingsaga\n\nAn end-to-end example to write workflow using SAGA APIs.\n\n\n# com.uber.cadence.samples.fileprocessing\n\nAn end-to-end example to write workflows to download a file, zips it, and uploads it to a destination.\n\nAn important requirement for such a workflow is that while a first activity can run on any host, the second and third must run on the same host as the first one. This is achieved through use of a host specific task list. The first activity returns the name of the host specific task list and all other activities are dispatched using the stub that is configured with it. 
# Concepts

Cadence is a new developer-friendly way to develop distributed applications.

It borrows the core terminology from the workflow-automation space, so its concepts include workflows and activities. Workflows can react to events and return internal state through queries.

The deployment topology explains how all these concepts are mapped to deployable software components.

The HTTP API reference describes how to use the HTTP API to interact with the Cadence server.
# Workflow interface

A workflow encapsulates the orchestration of activities and child workflows. It can also answer synchronous queries and receive external events (also known as signals).

A workflow must define an interface class. All of its methods must have one of the following annotations:

 * @WorkflowMethod indicates an entry point to a workflow. It contains parameters such as timeouts and a task list. Required parameters (such as executionStartToCloseTimeoutSeconds) that are not specified through the annotation must be provided at runtime.
 * @SignalMethod indicates a method that reacts to external signals. It must have a void return type.
 * @QueryMethod indicates a method that reacts to synchronous query requests.

You can have more than one method with the same annotation (except @WorkflowMethod). For example:

```java
public interface FileProcessingWorkflow {

    @WorkflowMethod(executionStartToCloseTimeoutSeconds = 10, taskList = "file-processing")
    String processFile(Arguments args);

    @QueryMethod(name = "history")
    List<String> getHistory();

    @QueryMethod(name = "status")
    String getStatus();

    @SignalMethod
    void retryNow();

    @SignalMethod
    void abandon();
}
```

We recommend that you use a single value type argument for workflow methods. In this way, adding new arguments as fields to the value type is a backwards-compatible change.
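For context, here is a minimal sketch of how application code might start the FileProcessingWorkflow defined above through a client-side stub; the domain name, default connection settings, and Arguments construction are illustrative:

```java
import com.uber.cadence.client.WorkflowClient;
import com.uber.cadence.client.WorkflowClientOptions;
import com.uber.cadence.serviceclient.ClientOptions;
import com.uber.cadence.serviceclient.WorkflowServiceTChannel;

public class Starter {
  public static void main(String[] args) {
    // Connect to the Cadence frontend; "sample-domain" is an illustrative domain.
    WorkflowClient workflowClient =
        WorkflowClient.newInstance(
            new WorkflowServiceTChannel(ClientOptions.defaultInstance()),
            WorkflowClientOptions.newBuilder().setDomain("sample-domain").build());

    // The stub implements the workflow interface; calling the @WorkflowMethod
    // starts a new workflow execution and blocks until it completes.
    FileProcessingWorkflow workflow =
        workflowClient.newWorkflowStub(FileProcessingWorkflow.class);
    String result = workflow.processFile(new Arguments()); // illustrative construction
    System.out.println(result);
  }
}
```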
Then, one of the methods (depending on which type has been started) annotated with @WorkflowMethod is invoked. As soon as this method returns, the is closed. While is open, it can receive calls to and methods. No additional calls to methods are allowed. The object is stateful, so and methods can communicate with the other parts of the through object fields.\n\n\n# Calling Activities\n\nWorkflow.newActivityStub returns a client-side stub that implements an interface. It takes type and options as arguments. options are needed only if some of the required timeouts are not specified through the @ActivityMethod annotation.\n\nCalling a method on this interface invokes an that implements this method. An invocation synchronously blocks until the completes, fails, or times out. Even if execution takes a few months, the code still sees it as a single synchronous invocation. It doesn't matter what happens to the processes that host the . The business logic code just sees a single method call.\n\npublic class FileProcessingWorkflowImpl implements FileProcessingWorkflow {\n\n private final FileProcessingActivities activities;\n\n public FileProcessingWorkflowImpl() {\n this.activities = Workflow.newActivityStub(FileProcessingActivities.class);\n }\n\n @Override\n public void processFile(Arguments args) {\n String localName = null;\n String processedName = null;\n try {\n localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());\n processedName = activities.processFile(localName);\n activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);\n } finally {\n if (localName != null) { // File was downloaded.\n activities.deleteLocalFile(localName);\n }\n if (processedName != null) { // File was processed.\n activities.deleteLocalFile(processedName);\n }\n }\n }\n ...\n}\n\n\nIf different need different options, like timeouts or a , multiple client-side stubs can be created with different options.\n\npublic FileProcessingWorkflowImpl() {\n ActivityOptions options1 = new ActivityOptions.Builder()\n .setTaskList(\"taskList1\")\n .build();\n this.store1 = Workflow.newActivityStub(FileProcessingActivities.class, options1);\n\n ActivityOptions options2 = new ActivityOptions.Builder()\n .setTaskList(\"taskList2\")\n .build();\n this.store2 = Workflow.newActivityStub(FileProcessingActivities.class, options2);\n}\n\n\n\n# Calling Activities Asynchronously\n\nSometimes need to perform certain operations in parallel. The Async class static methods allow you to invoke any asynchronously. The calls return a Promise result immediately. Promise is similar to both Java Future and CompletionStage. The Promise get blocks until a result is available. It also exposes the thenApply and handle methods. 
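For example, a derived Promise can be built without blocking the workflow thread (a minimal sketch; activities, sourceBucket, and sourceFile are the ones used in the snippets below):\n\nPromise<String> localNamePromise = Async.function(activities::download, sourceBucket, sourceFile);\n// thenApply does not block: it returns a new Promise whose value is computed\n// from the download result once that result becomes available.\nPromise<Integer> nameLengthPromise = localNamePromise.thenApply(name -> name.length());\n\n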
See the Promise JavaDoc for technical details about the differences from Future.\n\nTo convert a synchronous call:\n\nString localName = activities.download(sourceBucket, sourceFile);\n\n\nto asynchronous style, pass the method reference to Async.function or Async.procedure followed by its arguments:\n\nPromise<String> localNamePromise = Async.function(activities::download, sourceBucket, sourceFile);\n\n\nThen to wait synchronously for the result:\n\nString localName = localNamePromise.get();\n\n\nHere is the above example rewritten to call download and upload in parallel on multiple files:\n\npublic void processFile(Arguments args) {\n List<Promise<String>> localNamePromises = new ArrayList<>();\n List<String> processedNames = null;\n try {\n // Download all files in parallel.\n for (String sourceFilename : args.getSourceFilenames()) {\n Promise<String> localName = Async.function(activities::download,\n args.getSourceBucketName(), sourceFilename);\n localNamePromises.add(localName);\n }\n // allOf converts a list of promises to a single promise that contains a list\n // of each promise value.\n Promise<List<String>> localNamesPromise = Promise.allOf(localNamePromises);\n\n // All code until the next line wasn't blocking.\n // The promise get is a blocking call.\n List<String> localNames = localNamesPromise.get();\n processedNames = activities.processFiles(localNames);\n\n // Upload all results in parallel.\n List<Promise<Void>> uploadedList = new ArrayList<>();\n for (String processedName : processedNames) {\n Promise<Void> uploaded = Async.procedure(activities::upload,\n args.getTargetBucketName(), args.getTargetFilename(), processedName);\n uploadedList.add(uploaded);\n }\n // Wait for all uploads to complete.\n Promise<Void> allUploaded = Promise.allOf(uploadedList);\n allUploaded.get(); // blocks until all promises are ready.\n } finally {\n for (Promise<String> localNamePromise : localNamePromises) {\n // Skip files that haven't completed downloading.\n if (localNamePromise.isCompleted()) {\n activities.deleteLocalFile(localNamePromise.get());\n }\n }\n if (processedNames != null) {\n for (String processedName : processedNames) {\n activities.deleteLocalFile(processedName);\n }\n }\n }\n}\n\n\n\n# Workflow Implementation Constraints\n\nCadence uses the Microsoft Azure Event Sourcing pattern to recover the state of a workflow object, including its threads and local variable values. In essence, every time a workflow state has to be restored, its code is re-executed from the beginning. When replaying, side effects (such as activity invocations) are ignored because they are already recorded in the workflow event history. When writing workflow logic, the replay is not visible, so the code should be written as if it executes only once. This design puts the following constraints on the workflow implementation:\n\n * Do not use any mutable global variables because multiple instances of workflows are executed in parallel.\n * Do not call any non-deterministic functions like non-seeded random or UUID.randomUUID() directly from the workflow code.\n\nAlways do the following in workflow code:\n\n * Don’t perform any IO or service calls as they are not usually deterministic. Use activities for this.\n * Only use Workflow.currentTimeMillis() to get the current time inside a workflow.\n * Do not use native Java Thread or any other multi-threaded classes like ThreadPoolExecutor. Use Async.function or Async.procedure to execute code asynchronously.\n * Don't use any synchronization, locks, or other standard Java blocking concurrency-related classes besides those provided by the Workflow class. There is no need for explicit synchronization because multi-threaded code inside a workflow is executed one thread at a time and under a global lock; to block a workflow thread until a condition holds, use Workflow.await (see the sketch after this list).\n * Call WorkflowThread.sleep instead of Thread.sleep.\n * Use Promise and CompletablePromise instead of Future and CompletableFuture.\n * Use WorkflowQueue instead of BlockingQueue.\n * Use Workflow.getVersion when making any changes to the workflow code. Without this, any deployment of updated code might break already open workflows.\n * Don’t access configuration APIs directly from a workflow because changes in the configuration might affect a workflow execution path. Pass it as an argument to a workflow function or use an activity to load it.\n\n
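A minimal sketch of waiting on a condition with Workflow.await instead of locks (the OrderWorkflow interface and its paid flag are hypothetical, not part of the samples):\n\npublic class OrderWorkflowImpl implements OrderWorkflow {\n\n private boolean paid; // mutated only by the signal handler below\n\n @Override\n public void processOrder(Order order) {\n // Blocks this workflow thread until the condition evaluates to true.\n // The condition is re-evaluated on workflow state changes, so this is\n // not a busy wait.\n Workflow.await(() -> paid);\n // ... continue with activities once the payment signal has arrived ...\n }\n\n @Override\n public void markPaid() { // annotated with @SignalMethod on the interface\n paid = true;\n }\n}\n\n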
Workflow method arguments and return values are serializable to a byte array using the provided DataConverter interface. The default implementation uses a JSON serializer, but you can use any alternative serialization mechanism.\n\nThe values passed to workflows through invocation parameters or returned through a result value are recorded in the execution history. The entire execution history is transferred from the Cadence service to workflow workers with every event that the workflow logic needs to process. A large execution history can thus adversely impact the performance of your workflow. Therefore, be mindful of the amount of data that you transfer via activity invocation parameters or return values. Otherwise, no additional limitations exist on workflow implementations.",normalizedContent:"# implementing workflows\n\na workflow implementation implements a workflow interface. each time a new workflow execution is started, a new instance of the implementation object is created. then, one of the methods (depending on which workflow type has been started) annotated with @workflowmethod is invoked. as soon as this method returns, the workflow execution is closed. while the workflow execution is open, it can receive calls to signal and query methods. no additional calls to workflow methods are allowed. the workflow object is stateful, so signal and query methods can communicate with the other parts of the workflow through object fields.\n\n\n# calling activities\n\nworkflow.newactivitystub returns a client-side stub that implements an activity interface. it takes activity type and activity options as arguments. activity options are needed only if some of the required timeouts are not specified through the @activitymethod annotation.\n\ncalling a method on this interface invokes an activity that implements this method. an activity invocation synchronously blocks until the activity completes, fails, or times out. even if activity execution takes a few months, the workflow code still sees it as a single synchronous invocation. it doesn't matter what happens to the processes that host the activity. 
the business logic code just sees a single method call.\n\npublic class fileprocessingworkflowimpl implements fileprocessingworkflow {\n\n private final fileprocessingactivities activities;\n\n public fileprocessingworkflowimpl() {\n this.activities = workflow.newactivitystub(fileprocessingactivities.class);\n }\n\n @override\n public void processfile(arguments args) {\n string localname = null;\n string processedname = null;\n try {\n localname = activities.download(args.getsourcebucketname(), args.getsourcefilename());\n processedname = activities.processfile(localname);\n activities.upload(args.gettargetbucketname(), args.gettargetfilename(), processedname);\n } finally {\n if (localname != null) { // file was downloaded.\n activities.deletelocalfile(localname);\n }\n if (processedname != null) { // file was processed.\n activities.deletelocalfile(processedname);\n }\n }\n }\n ...\n}\n\n\nif different need different options, like timeouts or a , multiple client-side stubs can be created with different options.\n\npublic fileprocessingworkflowimpl() {\n activityoptions options1 = new activityoptions.builder()\n .settasklist(\"tasklist1\")\n .build();\n this.store1 = workflow.newactivitystub(fileprocessingactivities.class, options1);\n\n activityoptions options2 = new activityoptions.builder()\n .settasklist(\"tasklist2\")\n .build();\n this.store2 = workflow.newactivitystub(fileprocessingactivities.class, options2);\n}\n\n\n\n# calling activities asynchronously\n\nsometimes need to perform certain operations in parallel. the async class static methods allow you to invoke any asynchronously. the calls return a promise result immediately. promise is similar to both java future and completionstage. the promise get blocks until a result is available. it also exposes the thenapply and handle methods. 
see the promise javadoc for technical details about differences with future.\n\nto convert a synchronous call:\n\nstring localname = activities.download(sourcebucket, sourcefile);\n\n\nto asynchronous style, the method reference is passed to async.function or async.procedure followed by arguments:\n\npromise localnamepromise = async.function(activities::download, sourcebucket, sourcefile);\n\n\nthen to wait synchronously for the result:\n\nstring localname = localnamepromise.get();\n\n\nhere is the above example rewritten to call download and upload in parallel on multiple files:\n\npublic void processfile(arguments args) {\n list> localnamepromises = new arraylist<>();\n list processednames = null;\n try {\n // download all files in parallel.\n for (string sourcefilename : args.getsourcefilenames()) {\n promise localname = async.function(activities::download,\n args.getsourcebucketname(), sourcefilename);\n localnamepromises.add(localname);\n }\n // allof converts a list of promises to a single promise that contains a list\n // of each promise value.\n promise> localnamespromise = promise.allof(localnamepromises);\n\n // all code until the next line wasn't blocking.\n // the promise get is a blocking call.\n list localnames = localnamespromise.get();\n processednames = activities.processfiles(localnames);\n\n // upload all results in parallel.\n list> uploadedlist = new arraylist<>();\n for (string processedname : processednames) {\n promise uploaded = async.procedure(activities::upload,\n args.gettargetbucketname(), args.gettargetfilename(), processedname);\n uploadedlist.add(uploaded);\n }\n // wait for all uploads to complete.\n promise alluploaded = promise.allof(uploadedlist);\n alluploaded.get(); // blocks until all promises are ready.\n } finally {\n for (promise localnamepromise : localnamepromises) {\n // skip files that haven't completed downloading.\n if (localnamepromise.iscompleted()) {\n activities.deletelocalfile(localnamepromise.get());\n }\n }\n if (processednames != null) {\n for (string processedname : processednames) {\n activities.deletelocalfile(processedname);\n }\n }\n }\n}\n\n\n\n# workflow implementation constraints\n\ncadence uses the microsoft azure event sourcing pattern to recover the state of a object including its threads and local variable values. in essence, every time a state has to be restored, its code is re-executed from the beginning. when replaying, side effects (such as invocations) are ignored because they are already recorded in the . when writing logic, the replay is not visible, so the code should be written since it executes only once. this design puts the following constraints on the implementation:\n\n * do not use any mutable global variables because multiple instances of are executed in parallel.\n * do not call any non-deterministic functions like non seeded random or uuid.randomuuid() directly from the code.\n\nalways do the following in :\n\n * don’t perform any io or service calls as they are not usually deterministic. use for this.\n * only use workflow.currenttimemillis() to get the current time inside a .\n * do not use native java thread or any other multi-threaded classes like threadpoolexecutor. use async.function or async.procedure to execute code asynchronously.\n * don't use any synchronization, locks, and other standard java blocking concurrency-related classes besides those provided by the workflow class. 
there is no need in explicit synchronization because multi-threaded code inside a is executed one thread at a time and under a global lock.\n * call workflowthread.sleep instead of thread.sleep.\n * use promise and completablepromise instead of future and completablefuture.\n * use workflowqueue instead of blockingqueue.\n\n * use workflow.getversion when making any changes to the code. without this, any deployment of updated code might break already open .\n * don’t access configuration apis directly from a because changes in the configuration might affect a path. pass it as an argument to a function or use an to load it.\n\nmethod arguments and return values are serializable to a byte array using the provided dataconverter interface. the default implementation uses json serializer, but you can use any alternative serialization mechanism.\n\nthe values passed to through invocation parameters or returned through a result value are recorded in the execution history. the entire execution history is transferred from the cadence service to with every that the logic needs to process. a large execution history can thus adversely impact the performance of your . therefore, be mindful of the amount of data that you transfer via invocation parameters or return values. otherwise, no additional limitations exist on implementations.",charsets:{}},{title:"Starting workflows",frontmatter:{layout:"default",title:"Starting workflows",permalink:"/docs/java-client/starting-workflow-executions",readingShow:"top"},regularPath:"/docs/04-java-client/04-starting-workflow-executions.html",relativePath:"docs/04-java-client/04-starting-workflow-executions.md",key:"v-7106a8e2",path:"/docs/java-client/starting-workflow-executions/",headers:[{level:2,title:"Creating a WorkflowClient",slug:"creating-a-workflowclient",normalizedTitle:"creating a workflowclient",charIndex:35},{level:2,title:"Executing Workflows",slug:"executing-workflows",normalizedTitle:"executing workflows",charIndex:2593}],codeSwitcherOptions:{},headersStr:"Creating a WorkflowClient Executing Workflows",content:'# Starting workflow executions\n\n\n# Creating a WorkflowClient\n\nA interface that executes a requires initializing a WorkflowClient instance, creating a client side stub to the , and then calling a method annotated with @WorkflowMethod.\n\nA simple WorkflowClient instance that utilises the communication protocol can be initialised as follows:\n\nWorkflowClient workflowClient =\n WorkflowClient.newInstance(\n new WorkflowServiceTChannel(\n ClientOptions.newBuilder().setHost(cadenceServiceHost).setPort(cadenceServicePort).build()),\n WorkflowClientOptions.newBuilder().setDomain(domain).build());\n// Create a workflow stub.\nFileProcessingWorkflow workflow = workflowClient.newWorkflowStub(FileProcessingWorkflow.class);\n\n\nAlternatively, if wishing to create a WorkflowClient that uses TLS, we can initialise a client that uses the gRPC communication protocol instead. 
First, additions will need to be made to the project\'s pom.xml:\n\n\n io.grpc\n grpc-netty\n LATEST.RELEASE.VERSION\n\n\n io.netty\n netty-all\n LATEST.RELEASE.VERSION\n\n\n\nThen, use the following client implementation; provide a TLS certificate with which the cluster has also been configured (replace "/path/to/cert/file" in the sample):\n\nWorkflowClient workflowClient =\n WorkflowClient.newInstance(\n new Thrift2ProtoAdapter(\n IGrpcServiceStubs.newInstance(\n ClientOptions.newBuilder().setGRPCChannel(\n NettyChannelBuilder.forAddress(cadenceServiceHost, cadenceServicePort)\n .useTransportSecurity()\n .defaultLoadBalancingPolicy("round_robin")\n .sslContext(GrpcSslContexts.forClient()\n .trustManager(new File("/path/to/cert/file"))\n .build()).build()).build())),\n WorkflowClientOptions.newBuilder().setDomain(domain).build());\n// Create a workflow stub.\nFileProcessingWorkflow workflow = workflowClient.newWorkflowStub(FileProcessingWorkflow.class);\n\n\nOr, if you are using version prior to 3.0.0, a WorkflowClient can be created as follows:\n\nWorkflowClient workflowClient = WorkflowClient.newClient(cadenceServiceHost, cadenceServicePort, domain);\n// Create a workflow stub.\nFileProcessingWorkflow workflow = workflowClient.newWorkflowStub(FileProcessingWorkflow.class);\n\n\n\n# Executing Workflows\n\nThere are two ways to start asynchronously and synchronously. Asynchronous start initiates a and immediately returns to the caller. This is the most common way to start in production code. Synchronous invocation starts a and then waits for its completion. If the process that started the crashes or stops waiting, the continues executing. Because are potentially long running, and crashes of clients happen, this is not very commonly found in production use.\n\nAsynchronous start:\n\n// Returns as soon as the workflow starts.\nWorkflowExecution workflowExecution = WorkflowClient.start(workflow::processFile, workflowArgs);\n\nSystem.out.println("Started process file workflow with workflowId=\\"" + workflowExecution.getWorkflowId()\n + "\\" and runId=\\"" + workflowExecution.getRunId() + "\\"");\n\n\nSynchronous start:\n\n// Start a workflow and then wait for a result.\n// Note that if the waiting process is killed, the workflow will continue execution.\nString result = workflow.processFile(workflowArgs);\n\n\nIf you need to wait for a completion after an asynchronous start, the most straightforward way is to call the blocking version again. If WorkflowOptions.WorkflowIdReusePolicy is not AllowDuplicate, then instead of throwing DuplicateWorkflowException, it reconnects to an existing and waits for its completion. The following example shows how to do this from a different process than the one that started the . 
All this process needs is a WorkflowID.\n\nWorkflowExecution execution = new WorkflowExecution().setWorkflowId(workflowId);\nFileProcessingWorkflow workflow = workflowClient.newWorkflowStub(execution);\n// Returns result potentially waiting for workflow to complete.\nString result = workflow.processFile(workflowArgs);\n',normalizedContent:'# starting workflow executions\n\n\n# creating a workflowclient\n\na interface that executes a requires initializing a workflowclient instance, creating a client side stub to the , and then calling a method annotated with @workflowmethod.\n\na simple workflowclient instance that utilises the communication protocol can be initialised as follows:\n\nworkflowclient workflowclient =\n workflowclient.newinstance(\n new workflowservicetchannel(\n clientoptions.newbuilder().sethost(cadenceservicehost).setport(cadenceserviceport).build()),\n workflowclientoptions.newbuilder().setdomain(domain).build());\n// create a workflow stub.\nfileprocessingworkflow workflow = workflowclient.newworkflowstub(fileprocessingworkflow.class);\n\n\nalternatively, if wishing to create a workflowclient that uses tls, we can initialise a client that uses the grpc communication protocol instead. first, additions will need to be made to the project\'s pom.xml:\n\n\n io.grpc\n grpc-netty\n latest.release.version\n\n\n io.netty\n netty-all\n latest.release.version\n\n\n\nthen, use the following client implementation; provide a tls certificate with which the cluster has also been configured (replace "/path/to/cert/file" in the sample):\n\nworkflowclient workflowclient =\n workflowclient.newinstance(\n new thrift2protoadapter(\n igrpcservicestubs.newinstance(\n clientoptions.newbuilder().setgrpcchannel(\n nettychannelbuilder.foraddress(cadenceservicehost, cadenceserviceport)\n .usetransportsecurity()\n .defaultloadbalancingpolicy("round_robin")\n .sslcontext(grpcsslcontexts.forclient()\n .trustmanager(new file("/path/to/cert/file"))\n .build()).build()).build())),\n workflowclientoptions.newbuilder().setdomain(domain).build());\n// create a workflow stub.\nfileprocessingworkflow workflow = workflowclient.newworkflowstub(fileprocessingworkflow.class);\n\n\nor, if you are using version prior to 3.0.0, a workflowclient can be created as follows:\n\nworkflowclient workflowclient = workflowclient.newclient(cadenceservicehost, cadenceserviceport, domain);\n// create a workflow stub.\nfileprocessingworkflow workflow = workflowclient.newworkflowstub(fileprocessingworkflow.class);\n\n\n\n# executing workflows\n\nthere are two ways to start asynchronously and synchronously. asynchronous start initiates a and immediately returns to the caller. this is the most common way to start in production code. synchronous invocation starts a and then waits for its completion. if the process that started the crashes or stops waiting, the continues executing. 
because are potentially long running, and crashes of clients happen, this is not very commonly found in production use.\n\nasynchronous start:\n\n// returns as soon as the workflow starts.\nworkflowexecution workflowexecution = workflowclient.start(workflow::processfile, workflowargs);\n\nsystem.out.println("started process file workflow with workflowid=\\"" + workflowexecution.getworkflowid()\n + "\\" and runid=\\"" + workflowexecution.getrunid() + "\\"");\n\n\nsynchronous start:\n\n// start a workflow and then wait for a result.\n// note that if the waiting process is killed, the workflow will continue execution.\nstring result = workflow.processfile(workflowargs);\n\n\nif you need to wait for a completion after an asynchronous start, the most straightforward way is to call the blocking version again. if workflowoptions.workflowidreusepolicy is not allowduplicate, then instead of throwing duplicateworkflowexception, it reconnects to an existing and waits for its completion. the following example shows how to do this from a different process than the one that started the . all this process needs is a workflowid.\n\nworkflowexecution execution = new workflowexecution().setworkflowid(workflowid);\nfileprocessingworkflow workflow = workflowclient.newworkflowstub(execution);\n// returns result potentially waiting for workflow to complete.\nstring result = workflow.processfile(workflowargs);\n',charsets:{}},{title:"Activity interface",frontmatter:{layout:"default",title:"Activity interface",permalink:"/docs/java-client/activity-interface",readingShow:"top"},regularPath:"/docs/04-java-client/05-activity-interface.html",relativePath:"docs/04-java-client/05-activity-interface.md",key:"v-4af1f23c",path:"/docs/java-client/activity-interface/",codeSwitcherOptions:{},headersStr:null,content:"# Activity interface\n\nAn is a manifestation of a particular in the business logic.\n\nare defined as methods of a plain Java interface. Each method defines a single type. A single can use more than one interface and call more than one method from the same interface. The only requirement is that method arguments and return values are serializable to a byte array using the provided DataConverter interface. The default implementation uses a JSON serializer, but an alternative implementation can be easily configured.\n\nFollowing is an example of an interface that defines four activities:\n\npublic interface FileProcessingActivities {\n\n void upload(String bucketName, String localName, String targetName);\n\n String download(String bucketName, String remoteName);\n\n @ActivityMethod(scheduleToCloseTimeoutSeconds = 2)\n String processFile(String localName);\n\n void deleteLocalFile(String fileName);\n}\n\n\n\nWe recommend to use a single value type argument for methods. In this way, adding new arguments as fields to the value type is a backwards-compatible change.\n\nAn optional @ActivityMethod annotation can be used to specify options like timeouts or a . Required options that are not specified through the annotation must be specified at runtime.",normalizedContent:"# activity interface\n\nan is a manifestation of a particular in the business logic.\n\nare defined as methods of a plain java interface. each method defines a single type. a single can use more than one interface and call more than one method from the same interface. the only requirement is that method arguments and return values are serializable to a byte array using the provided dataconverter interface. 
the default implementation uses a json serializer, but an alternative implementation can be easily configured.\n\nfollowing is an example of an interface that defines four activities:\n\npublic interface fileprocessingactivities {\n\n void upload(string bucketname, string localname, string targetname);\n\n string download(string bucketname, string remotename);\n\n @activitymethod(scheduletoclosetimeoutseconds = 2)\n string processfile(string localname);\n\n void deletelocalfile(string filename);\n}\n\n\n\nwe recommend using a single value type argument for activity methods. in this way, adding new arguments as fields to the value type is a backwards-compatible change.\n\nan optional @activitymethod annotation can be used to specify options like timeouts or a task list. required options that are not specified through the annotation must be specified at runtime.",charsets:{}},{title:"Implementing activities",frontmatter:{layout:"default",title:"Implementing activities",permalink:"/docs/java-client/implementing-activities",readingShow:"top"},regularPath:"/docs/04-java-client/06-implementing-activities.html",relativePath:"docs/04-java-client/06-implementing-activities.md",key:"v-b64a802c",path:"/docs/java-client/implementing-activities/",headers:[{level:2,title:"Accessing Activity Info",slug:"accessing-activity-info",normalizedTitle:"accessing activity info",charIndex:1518},{level:2,title:"Asynchronous Activity Completion",slug:"asynchronous-activity-completion",normalizedTitle:"asynchronous activity completion",charIndex:2514},{level:2,title:"Activity Heart Beating",slug:"activity-heart-beating",normalizedTitle:"activity heart beating",charIndex:3930}],codeSwitcherOptions:{},headersStr:"Accessing Activity Info Asynchronous Activity Completion Activity Heart Beating",content:'# Implementing activities\n\nAn activity implementation is an implementation of an activity interface. A single instance of the implementation is shared across multiple simultaneous activity invocations. Therefore, the implementation code must be thread safe.\n\nThe values passed to activities through invocation parameters or returned through a result value are recorded in the execution history. The entire execution history is transferred from the Cadence service to activity workers when a workflow state needs to recover. A large execution history can thus adversely impact the performance of your workflow. Therefore, be mindful of the amount of data you transfer via activity invocation parameters or return values. Otherwise, no additional limitations exist on activity implementations.\n\npublic class FileProcessingActivitiesImpl implements FileProcessingActivities {\n\n private final AmazonS3 s3Client;\n\n private final String localDirectory;\n\n void upload(String bucketName, String localName, String targetName) {\n File f = new File(localName);\n s3Client.putObject(bucketName, targetName, f);\n }\n\n String download(String bucketName, String remoteName, String localName) {\n // Implementation omitted for brevity.\n return downloadFileFromS3(bucketName, remoteName, localDirectory + localName);\n }\n\n String processFile(String localName) {\n // Implementation omitted for brevity.\n return compressFile(localName);\n }\n\n void deleteLocalFile(String fileName) {\n File f = new File(localDirectory + fileName);\n f.delete();\n }\n}\n\n\n\n# Accessing Activity Info\n\nThe Activity class provides static getters to access information about the workflow that invoked it. Note that this information is stored in a thread local variable. 
Therefore, calls to accessors succeed only in the thread that invoked the function.\n\npublic class FileProcessingActivitiesImpl implements FileProcessingActivities {\n\n @Override\n public String download(String bucketName, String remoteName, String localName) {\n log.info("domain=" + Activity.getDomain());\n WorkflowExecution execution = Activity.getWorkflowExecution();\n log.info("workflowId=" + execution.getWorkflowId());\n log.info("runId=" + execution.getRunId());\n ActivityTask activityTask = Activity.getTask();\n log.info("activityId=" + activityTask.getActivityId());\n log.info("activityTimeout=" + activityTask.getStartToCloseTimeoutSeconds());\n return downloadFileFromS3(bucketName, remoteName, localDirectory + localName);\n }\n ...\n}\n\n\n\n# Asynchronous Activity Completion\n\nSometimes an lifecycle goes beyond a synchronous method invocation. For example, a request can be put in a queue and later a reply comes and is picked up by a different process. The whole request-reply interaction can be modeled as a single Cadence .\n\nTo indicate that an should not be completed upon its method return, call Activity.doNotCompleteOnReturn() from the original thread. Then later, when replies come, complete the using ActivityCompletionClient. To correlate invocation with completion, use either TaskToken or and IDs.\n\npublic class FileProcessingActivitiesImpl implements FileProcessingActivities {\n\n public String download(String bucketName, String remoteName, String localName) {\n byte[] taskToken = Activity.getTaskToken(); // Used to correlate reply.\n asyncDownloadFileFromS3(taskToken, bucketName, remoteName, localDirectory + localName);\n Activity.doNotCompleteOnReturn();\n return "ignored"; // Return value is ignored when doNotCompleteOnReturn was called.\n }\n ...\n}\n\n\nWhen the download is complete, the download service potentially calls back from a different process:\n\npublic void completeActivity(byte[] taskToken, R result) {\n completionClient.complete(taskToken, result);\n}\n\npublic void failActivity(byte[] taskToken, Exception failure) {\n completionClient.completeExceptionally(taskToken, failure);\n}\n\n\n\n# Activity Heart Beating\n\nSome are long running. To react to a crash quickly, use a heartbeat mechanism. The Activity.heartbeat function lets the Cadence service know that the is still alive. You can piggyback details on an heartbeat. If an times out, the last value of details is included in the ActivityTimeoutException delivered to a . Then the can pass the details to the next invocation. This acts as a periodic checkpoint mechanism for the progress of an .\n\npublic class FileProcessingActivitiesImpl implements FileProcessingActivities {\n\n @Override\n public String download(String bucketName, String remoteName, String localName) {\n InputStream inputStream = openInputStream(file);\n try {\n byte[] bytes = new byte[MAX_BUFFER_SIZE];\n while ((read = inputStream.read(bytes)) != -1) {\n totalRead += read;\n f.write(bytes, 0, read);\n /*\n * Let the service know about the download progress.\n */\n Activity.heartbeat(totalRead);\n }\n } finally {\n inputStream.close();\n }\n }\n ...\n}\n',normalizedContent:'# implementing activities\n\nimplementation is an implementation of an interface. a single instance of the implementation is shared across multiple simultaneous invocations. therefore, the implementation code must be thread safe.\n\nthe values passed to through invocation parameters or returned through a result value are recorded in the execution history. 
the entire execution history is transferred from the cadence service to when a state needs to recover. a large execution history can thus adversely impact the performance of your . therefore, be mindful of the amount of data you transfer via invocation parameters or return values. otherwise, no additional limitations exist on implementations.\n\npublic class fileprocessingactivitiesimpl implements fileprocessingactivities {\n\n private final amazons3 s3client;\n\n private final string localdirectory;\n\n void upload(string bucketname, string localname, string targetname) {\n file f = new file(localname);\n s3client.putobject(bucket, remotename, f);\n }\n\n string download(string bucketname, string remotename, string localname) {\n // implementation omitted for brevity.\n return downloadfilefroms3(bucketname, remotename, localdirectory + localname);\n }\n\n string processfile(string localname) {\n // implementation omitted for brevity.\n return compressfile(localname);\n }\n\n void deletelocalfile(string filename) {\n file f = new file(localdirectory + filename);\n f.delete();\n }\n}\n\n\n\n# accessing activity info\n\nthe activity class provides static getters to access information about the that invoked it. note that this information is stored in a thread local variable. therefore, calls to accessors succeed only in the thread that invoked the function.\n\npublic class fileprocessingactivitiesimpl implements fileprocessingactivities {\n\n @override\n public string download(string bucketname, string remotename, string localname) {\n log.info("domain=" + activity.getdomain());\n workflowexecution execution = activity.getworkflowexecution();\n log.info("workflowid=" + execution.getworkflowid());\n log.info("runid=" + execution.getrunid());\n activitytask activitytask = activity.gettask();\n log.info("activityid=" + activitytask.getactivityid());\n log.info("activitytimeout=" + activitytask.getstarttoclosetimeoutseconds());\n return downloadfilefroms3(bucketname, remotename, localdirectory + localname);\n }\n ...\n}\n\n\n\n# asynchronous activity completion\n\nsometimes an lifecycle goes beyond a synchronous method invocation. for example, a request can be put in a queue and later a reply comes and is picked up by a different process. the whole request-reply interaction can be modeled as a single cadence .\n\nto indicate that an should not be completed upon its method return, call activity.donotcompleteonreturn() from the original thread. then later, when replies come, complete the using activitycompletionclient. to correlate invocation with completion, use either tasktoken or and ids.\n\npublic class fileprocessingactivitiesimpl implements fileprocessingactivities {\n\n public string download(string bucketname, string remotename, string localname) {\n byte[] tasktoken = activity.gettasktoken(); // used to correlate reply.\n asyncdownloadfilefroms3(tasktoken, bucketname, remotename, localdirectory + localname);\n activity.donotcompleteonreturn();\n return "ignored"; // return value is ignored when donotcompleteonreturn was called.\n }\n ...\n}\n\n\nwhen the download is complete, the download service potentially calls back from a different process:\n\npublic void completeactivity(byte[] tasktoken, r result) {\n completionclient.complete(tasktoken, result);\n}\n\npublic void failactivity(byte[] tasktoken, exception failure) {\n completionclient.completeexceptionally(tasktoken, failure);\n}\n\n\n\n# activity heart beating\n\nsome are long running. 
to react to a crash quickly, use a heartbeat mechanism. the activity.heartbeat function lets the cadence service know that the is still alive. you can piggyback details on an heartbeat. if an times out, the last value of details is included in the activitytimeoutexception delivered to a . then the can pass the details to the next invocation. this acts as a periodic checkpoint mechanism for the progress of an .\n\npublic class fileprocessingactivitiesimpl implements fileprocessingactivities {\n\n @override\n public string download(string bucketname, string remotename, string localname) {\n inputstream inputstream = openinputstream(file);\n try {\n byte[] bytes = new byte[max_buffer_size];\n while ((read = inputstream.read(bytes)) != -1) {\n totalread += read;\n f.write(bytes, 0, read);\n /*\n * let the service know about the download progress.\n */\n activity.heartbeat(totalread);\n }\n } finally {\n inputstream.close();\n }\n }\n ...\n}\n',charsets:{}},{title:"Distributed CRON",frontmatter:{layout:"default",title:"Distributed CRON",permalink:"/docs/java-client/distributed-cron",readingShow:"top"},regularPath:"/docs/04-java-client/08-distributed-cron.html",relativePath:"docs/04-java-client/08-distributed-cron.md",key:"v-423a333c",path:"/docs/java-client/distributed-cron/",headers:[{level:2,title:"Convert an existing cron workflow",slug:"convert-an-existing-cron-workflow",normalizedTitle:"convert an existing cron workflow",charIndex:2157},{level:2,title:"Retrieve last successful result",slug:"retrieve-last-successful-result",normalizedTitle:"retrieve last successful result",charIndex:2623}],codeSwitcherOptions:{},headersStr:"Convert an existing cron workflow Retrieve last successful result",content:'# Distributed CRON\n\nIt is relatively straightforward to turn any Cadence into a Cron . All you need is to supply a cron schedule when starting the using the CronSchedule parameter of StartWorkflowOptions.\n\nYou can also start a using the Cadence with an optional cron schedule using the --cron argument.\n\nFor with CronSchedule:\n\n * CronSchedule is based on UTC time. For example cron schedule "15 8 * * *" will run daily at 8:15am UTC. Another example "*/2 * * * 5-6" will schedule a workflow every two minutes on fridays and saturdays.\n * If a failed and a RetryPolicy is supplied to the StartWorkflowOptions as well, the will retry based on the RetryPolicy. While the is retrying, the server will not schedule the next cron run.\n * Cadence server only schedules the next cron run after the current run is completed. If the next schedule is due while a is running (or retrying), then it will skip that schedule.\n * Cron will not stop until they are terminated or cancelled.\n\nCadence supports the standard cron spec:\n\n// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run\n// as a cron based on the schedule. The scheduling will be based on UTC time. The schedule for the next run only happens\n// after the current run is completed/failed/timeout. If a RetryPolicy is also supplied, and the workflow failed\n// or timed out, the workflow will be retried based on the retry policy. While the workflow is retrying, it won\'t\n// schedule its next run. If the next schedule is due while the workflow is running (or retrying), then it will skip that\n// schedule. 
Cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).\n// The cron spec is as follows:\n// ┌───────────── minute (0 - 59)\n// │ ┌───────────── hour (0 - 23)\n// │ │ ┌───────────── day of the month (1 - 31)\n// │ │ │ ┌───────────── month (1 - 12)\n// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n// │ │ │ │ │\n// │ │ │ │ │\n// * * * * *\nCronSchedule string\n\n\nCadence also supports more advanced cron expressions.\n\nThe crontab guru site is useful for testing your cron expressions.\n\n\n# Convert an existing cron workflow\n\nBefore CronSchedule was available, the previous approach to implementing cron was to use a delay timer as the last step and then return ContinueAsNew. One problem with that implementation is that if the fails or times out, the cron would stop.\n\nTo convert those to make use of Cadence CronSchedule, all you need is to remove the delay timer and return without using ContinueAsNew. Then start the with the desired CronSchedule.\n\n\n# Retrieve last successful result\n\nSometimes it is useful to obtain the progress of previous successful runs. This is supported by two new APIs in the client library: HasLastCompletionResult and GetLastCompletionResult. Below is an example of how to use this in Java:\n\npublic String cronWorkflow() {\n String lastProcessedFileName = Workflow.getLastCompletionResult(String.class);\n\n // Process work starting from the lastProcessedFileName.\n // Business logic implementation goes here.\n // Updates lastProcessedFileName to the new value.\n\n return lastProcessedFileName;\n}\n\n\nNote that this works even if one of the cron schedule runs failed. The next schedule will still get the last successful result if it ever successfully completed at least once. For example, for a daily cron , if the first day run succeeds and the second day fails, then the third day run will still get the result from first day\'s run using these APIs.',normalizedContent:'# distributed cron\n\nit is relatively straightforward to turn any cadence into a cron . all you need is to supply a cron schedule when starting the using the cronschedule parameter of startworkflowoptions.\n\nyou can also start a using the cadence with an optional cron schedule using the --cron argument.\n\nfor with cronschedule:\n\n * cronschedule is based on utc time. for example cron schedule "15 8 * * *" will run daily at 8:15am utc. another example "*/2 * * * 5-6" will schedule a workflow every two minutes on fridays and saturdays.\n * if a failed and a retrypolicy is supplied to the startworkflowoptions as well, the will retry based on the retrypolicy. while the is retrying, the server will not schedule the next cron run.\n * cadence server only schedules the next cron run after the current run is completed. if the next schedule is due while a is running (or retrying), then it will skip that schedule.\n * cron will not stop until they are terminated or cancelled.\n\ncadence supports the standard cron spec:\n\n// cronschedule - optional cron schedule for workflow. if a cron schedule is specified, the workflow will run\n// as a cron based on the schedule. the scheduling will be based on utc time. the schedule for the next run only happens\n// after the current run is completed/failed/timeout. if a retrypolicy is also supplied, and the workflow failed\n// or timed out, the workflow will be retried based on the retry policy. while the workflow is retrying, it won\'t\n// schedule its next run. 
if the next schedule is due while the workflow is running (or retrying), then it will skip that\n// schedule. cron workflow will not stop until it is terminated or cancelled (by returning cadence.cancelederror).\n// the cron spec is as follows:\n// ┌───────────── minute (0 - 59)\n// │ ┌───────────── hour (0 - 23)\n// │ │ ┌───────────── day of the month (1 - 31)\n// │ │ │ ┌───────────── month (1 - 12)\n// │ │ │ │ ┌───────────── day of the week (0 - 6) (sunday to saturday)\n// │ │ │ │ │\n// │ │ │ │ │\n// * * * * *\ncronschedule string\n\n\ncadence also supports more advanced cron expressions.\n\nthe crontab guru site is useful for testing your cron expressions.\n\n\n# convert an existing cron workflow\n\nbefore cronschedule was available, the previous approach to implementing cron was to use a delay timer as the last step and then return continueasnew. one problem with that implementation is that if the fails or times out, the cron would stop.\n\nto convert those to make use of cadence cronschedule, all you need is to remove the delay timer and return without using continueasnew. then start the with the desired cronschedule.\n\n\n# retrieve last successful result\n\nsometimes it is useful to obtain the progress of previous successful runs. this is supported by two new apis in the client library: haslastcompletionresult and getlastcompletionresult. below is an example of how to use this in java:\n\npublic string cronworkflow() {\n string lastprocessedfilename = workflow.getlastcompletionresult(string.class);\n\n // process work starting from the lastprocessedfilename.\n // business logic implementation goes here.\n // updates lastprocessedfilename to the new value.\n\n return lastprocessedfilename;\n}\n\n\nnote that this works even if one of the cron schedule runs failed. the next schedule will still get the last successful result if it ever successfully completed at least once. for example, for a daily cron , if the first day run succeeds and the second day fails, then the third day run will still get the result from first day\'s run using these apis.',charsets:{}},{title:"Signals",frontmatter:{layout:"default",title:"Signals",permalink:"/docs/java-client/signals",readingShow:"top"},regularPath:"/docs/04-java-client/10-signals.html",relativePath:"docs/04-java-client/10-signals.md",key:"v-65cef250",path:"/docs/java-client/signals/",headers:[{level:2,title:"Implement Signal Handler in Workflow",slug:"implement-signal-handler-in-workflow",normalizedTitle:"implement signal handler in workflow",charIndex:1012},{level:2,title:"Signal From Command Line",slug:"signal-from-command-line",normalizedTitle:"signal from command line",charIndex:2494},{level:2,title:"SignalWithStart From Command Line",slug:"signalwithstart-from-command-line",normalizedTitle:"signalwithstart from command line",charIndex:6183},{level:2,title:"Signal from user/application code",slug:"signal-from-user-application-code",normalizedTitle:"signal from user/application code",charIndex:6851}],codeSwitcherOptions:{},headersStr:"Implement Signal Handler in Workflow Signal From Command Line SignalWithStart From Command Line Signal from user/application code",content:'# Signals\n\nprovide a mechanism to send data directly to a running . 
Previously, you had two options for passing data to the implementation:\n\n * Via start parameters\n * As return values from\n\nWith start parameters, we could only pass in values before began.\n\nReturn values from allowed us to pass information to a running , but this approach comes with its own complications. One major drawback is reliance on polling. This means that the data needs to be stored in a third-party location until it\'s ready to be picked up by the . Further, the lifecycle of this requires management, and the requires manual restart if it fails before acquiring the data.\n\n, on the other hand, provide a fully asynchronous and durable mechanism for providing data to a running . When a is received for a running , Cadence persists the and the payload in the history. The can then process the at any time afterwards without the risk of losing the information. The also has the option to stop execution by blocking on a channel.\n\n\n# Implement Signal Handler in Workflow\n\nSee the below example from sample.\n\npublic interface HelloWorld {\n @WorkflowMethod\n void sayHello(String name);\n\n @SignalMethod\n void updateGreeting(String greeting);\n}\n\npublic static class HelloWorldImpl implements HelloWorld {\n\n private String greeting = "Hello";\n\n @Override\n public void sayHello(String name) {\n int count = 0;\n while (!"Bye".equals(greeting)) {\n logger.info(++count + ": " + greeting + " " + name + "!");\n String oldGreeting = greeting;\n Workflow.await(() -> !Objects.equals(greeting, oldGreeting));\n }\n logger.info(++count + ": " + greeting + " " + name + "!");\n }\n\n @Override\n public void updateGreeting(String greeting) {\n this.greeting = greeting;\n }\n}\n\n\nThe interface now has a new method annotated with @SignalMethod. It is a callback method that is invoked every time a new of "HelloWorldupdateGreeting" is delivered to a . The interface can have only one @WorkflowMethod which is a main function of the and as many methods as needed.\n\nThe updated implementation demonstrates a few important Cadence concepts. The first is that is stateful and can have fields of any complex type. Another is that the Workflow.await function that blocks until the function it receives as a parameter evaluates to true. The condition is going to be evaluated only on state changes, so it is not a busy wait in traditional sense.\n\n\n# Signal From Command Line\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --workflow_id "HelloSignal" --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \\"World\\"\nStarted Workflow Id: HelloSignal, run Id: 6fa204cb-f478-469a-9432-78060b83b6cd\n\n\nProgram output:\n\n16:53:56.120 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 1: Hello World!\n\n\nLet\'s send a using\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloSignal" --name "HelloWorld::updateGreeting" --input \\"Hi\\"\nSignal workflow succeeded.\n\n\nProgram output:\n\n16:53:56.120 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 1: Hello World!\n16:54:57.901 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 2: Hi World!\n\n\nTry sending the same with the same input again. Note that the output doesn\'t change. This happens because the await condition doesn\'t unblock when it sees the same value. 
But a new greeting unblocks it:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloSignal" --name "HelloWorld::updateGreeting" --input \\"Welcome\\"\nSignal workflow succeeded.\n\n\nProgram output:\n\n16:53:56.120 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 1: Hello World!\n16:54:57.901 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 2: Hi World!\n16:56:24.400 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 3: Welcome World!\n\n\nNow shut down the and send the same again:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloSignal" --name "HelloWorld::updateGreeting" --input \\"Welcome\\"\nSignal workflow succeeded.\n\n\nNote that sending as well as starting does not need a running. The requests are queued inside the Cadence service.\n\nNow bring the back. Note that it doesn\'t log anything besides the standard startup messages. This occurs because it ignores the queued that contains the same input as the current value of greeting. Note that the restart of the didn\'t affect the . It is still blocked on the same line of code as before the failure. This is the most important feature of Cadence. The code doesn\'t need to deal with failures at all. Its state is fully recovered to its current state that includes all the local variables and threads.\n\nLet\'s look at the line where the is blocked:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow stack --workflow_id "Hello2"\nQuery result:\n"workflow-root: (BLOCKED on await)\ncom.uber.cadence.internal.sync.SyncDecisionContext.await(SyncDecisionContext.java:546)\ncom.uber.cadence.internal.sync.WorkflowInternal.await(WorkflowInternal.java:243)\ncom.uber.cadence.workflow.Workflow.await(Workflow.java:611)\ncom.uber.cadence.samples.hello.GettingStarted$HelloWorldImpl.sayHello(GettingStarted.java:32)\nsun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nsun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)"\n\n\nYes, indeed the is blocked on await. This feature works for any open , greatly simplifying troubleshooting in production. Let\'s complete the by sending a with a "Bye" greeting:\n\n16:58:22.962 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 4: Bye World!\n\n\nNote that the value of the count variable was not lost during the restart.\n\nAlso note that while a single instance is used for this walkthrough, any real production deployment has multiple instances running. So any failure or restart does not delay any because it is just migrated to any other available .\n\n\n# SignalWithStart From Command Line\n\nYou may not know if a is running and can accept a . The signalWithStart feature allows you to send a to the current instance if one exists or to create a new run and then send the . 
SignalWithStartWorkflow therefore doesn\'t take a as a parameter.\n\nLearn more from the --help manual:\n\ndocker run --network=host --rm ubercadence/cli:master --do test-domain workflow signalwithstart -h\nNAME:\n cadence workflow signalwithstart - signal the current open workflow if exists, or attempt to start a new run based on IDResuePolicy and signals it\n\nUSAGE:\n cadence workflow signalwithstart [command options] [arguments...]\n...\n...\n...\n\n\n\n# Signal from user/application code\n\nYou may want to signal workflows without running the command line.\n\nThe WorkflowClient API allows you to send signal (or SignalWithStartWorkflow) from outside of the workflow to send a to the current .\n\nNote that when using newWorkflowStub to signal a workflow, you MUST NOT passing WorkflowOptions.\n\nThe WorkflowStub with WorkflowOptions is only for starting workflows.\n\nThe WorkflowStub without WorkflowOptions is for signal or query',normalizedContent:'# signals\n\nprovide a mechanism to send data directly to a running . previously, you had two options for passing data to the implementation:\n\n * via start parameters\n * as return values from\n\nwith start parameters, we could only pass in values before began.\n\nreturn values from allowed us to pass information to a running , but this approach comes with its own complications. one major drawback is reliance on polling. this means that the data needs to be stored in a third-party location until it\'s ready to be picked up by the . further, the lifecycle of this requires management, and the requires manual restart if it fails before acquiring the data.\n\n, on the other hand, provide a fully asynchronous and durable mechanism for providing data to a running . when a is received for a running , cadence persists the and the payload in the history. the can then process the at any time afterwards without the risk of losing the information. the also has the option to stop execution by blocking on a channel.\n\n\n# implement signal handler in workflow\n\nsee the below example from sample.\n\npublic interface helloworld {\n @workflowmethod\n void sayhello(string name);\n\n @signalmethod\n void updategreeting(string greeting);\n}\n\npublic static class helloworldimpl implements helloworld {\n\n private string greeting = "hello";\n\n @override\n public void sayhello(string name) {\n int count = 0;\n while (!"bye".equals(greeting)) {\n logger.info(++count + ": " + greeting + " " + name + "!");\n string oldgreeting = greeting;\n workflow.await(() -> !objects.equals(greeting, oldgreeting));\n }\n logger.info(++count + ": " + greeting + " " + name + "!");\n }\n\n @override\n public void updategreeting(string greeting) {\n this.greeting = greeting;\n }\n}\n\n\nthe interface now has a new method annotated with @signalmethod. it is a callback method that is invoked every time a new of "helloworldupdategreeting" is delivered to a . the interface can have only one @workflowmethod which is a main function of the and as many methods as needed.\n\nthe updated implementation demonstrates a few important cadence concepts. the first is that is stateful and can have fields of any complex type. another is that the workflow.await function that blocks until the function it receives as a parameter evaluates to true. 
the condition is going to be evaluated only on state changes, so it is not a busy wait in traditional sense.\n\n\n# signal from command line\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --workflow_id "hellosignal" --tasklist helloworldtasklist --workflow_type helloworld::sayhello --execution_timeout 3600 --input \\"world\\"\nstarted workflow id: hellosignal, run id: 6fa204cb-f478-469a-9432-78060b83b6cd\n\n\nprogram output:\n\n16:53:56.120 [workflow-root] info c.u.c.samples.hello.gettingstarted - 1: hello world!\n\n\nlet\'s send a using\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "hellosignal" --name "helloworld::updategreeting" --input \\"hi\\"\nsignal workflow succeeded.\n\n\nprogram output:\n\n16:53:56.120 [workflow-root] info c.u.c.samples.hello.gettingstarted - 1: hello world!\n16:54:57.901 [workflow-root] info c.u.c.samples.hello.gettingstarted - 2: hi world!\n\n\ntry sending the same with the same input again. note that the output doesn\'t change. this happens because the await condition doesn\'t unblock when it sees the same value. but a new greeting unblocks it:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "hellosignal" --name "helloworld::updategreeting" --input \\"welcome\\"\nsignal workflow succeeded.\n\n\nprogram output:\n\n16:53:56.120 [workflow-root] info c.u.c.samples.hello.gettingstarted - 1: hello world!\n16:54:57.901 [workflow-root] info c.u.c.samples.hello.gettingstarted - 2: hi world!\n16:56:24.400 [workflow-root] info c.u.c.samples.hello.gettingstarted - 3: welcome world!\n\n\nnow shut down the and send the same again:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "hellosignal" --name "helloworld::updategreeting" --input \\"welcome\\"\nsignal workflow succeeded.\n\n\nnote that sending as well as starting does not need a running. the requests are queued inside the cadence service.\n\nnow bring the back. note that it doesn\'t log anything besides the standard startup messages. this occurs because it ignores the queued that contains the same input as the current value of greeting. note that the restart of the didn\'t affect the . it is still blocked on the same line of code as before the failure. this is the most important feature of cadence. the code doesn\'t need to deal with failures at all. its state is fully recovered to its current state that includes all the local variables and threads.\n\nlet\'s look at the line where the is blocked:\n\n> docker run --network=host --rm ubercadence/cli:master --do test-domain workflow stack --workflow_id "hello2"\nquery result:\n"workflow-root: (blocked on await)\ncom.uber.cadence.internal.sync.syncdecisioncontext.await(syncdecisioncontext.java:546)\ncom.uber.cadence.internal.sync.workflowinternal.await(workflowinternal.java:243)\ncom.uber.cadence.workflow.workflow.await(workflow.java:611)\ncom.uber.cadence.samples.hello.gettingstarted$helloworldimpl.sayhello(gettingstarted.java:32)\nsun.reflect.nativemethodaccessorimpl.invoke0(native method)\nsun.reflect.nativemethodaccessorimpl.invoke(nativemethodaccessorimpl.java:62)"\n\n\nyes, indeed the is blocked on await. this feature works for any open , greatly simplifying troubleshooting in production. 
let\'s complete the by sending a with a "bye" greeting:\n\n16:58:22.962 [workflow-root] info c.u.c.samples.hello.gettingstarted - 4: bye world!\n\n\nnote that the value of the count variable was not lost during the restart.\n\nalso note that while a single instance is used for this walkthrough, any real production deployment has multiple instances running. so any failure or restart does not delay any because it is just migrated to any other available .\n\n\n# signalwithstart from command line\n\nyou may not know if a is running and can accept a . the signalwithstart feature allows you to send a to the current instance if one exists or to create a new run and then send the . signalwithstartworkflow therefore doesn\'t take a as a parameter.\n\nlearn more from the --help manual:\n\ndocker run --network=host --rm ubercadence/cli:master --do test-domain workflow signalwithstart -h\nname:\n cadence workflow signalwithstart - signal the current open workflow if exists, or attempt to start a new run based on idresuepolicy and signals it\n\nusage:\n cadence workflow signalwithstart [command options] [arguments...]\n...\n...\n...\n\n\n\n# signal from user/application code\n\nyou may want to signal workflows without running the command line.\n\nthe workflowclient api allows you to send signal (or signalwithstartworkflow) from outside of the workflow to send a to the current .\n\nnote that when using newworkflowstub to signal a workflow, you must not passing workflowoptions.\n\nthe workflowstub with workflowoptions is only for starting workflows.\n\nthe workflowstub without workflowoptions is for signal or query',charsets:{}},{title:"Versioning",frontmatter:{layout:"default",title:"Versioning",permalink:"/docs/java-client/versioning",readingShow:"top"},regularPath:"/docs/04-java-client/07-versioning.html",relativePath:"docs/04-java-client/07-versioning.md",key:"v-3c541bc2",path:"/docs/java-client/versioning/",codeSwitcherOptions:{},headersStr:null,content:'# Versioning\n\nAs outlined in the Workflow Implementation Constraints section, code has to be deterministic by taking the same code path when replaying history . Any code change that affects the order in which are generated breaks this assumption. The solution that allows updating code of already running is to keep both the old and new code. When replaying, use the code version that the were generated with and when executing a new code path, always take the new code.\n\nUse the Workflow.getVersion function to return a version of the code that should be executed and then use the returned value to pick a correct branch. Let\'s look at an example.\n\npublic void processFile(Arguments args) {\n String localName = null;\n String processedName = null;\n try {\n localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());\n processedName = activities.processFile(localName);\n activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);\n } finally {\n if (localName != null) { // File was downloaded.\n activities.deleteLocalFile(localName);\n }\n if (processedName != null) { // File was processed.\n activities.deleteLocalFile(processedName);\n }\n }\n}\n\n\nNow we decide to calculate the processed file checksum and pass it to upload. 
The correct way to implement this change is:\n\npublic void processFile(Arguments args) {\n String localName = null;\n String processedName = null;\n try {\n localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());\n processedName = activities.processFile(localName);\n int version = Workflow.getVersion("checksumAdded", Workflow.DEFAULT_VERSION, 1);\n if (version == Workflow.DEFAULT_VERSION) {\n activities.upload(args.getTargetBucketName(), args.getTargetFilename(), processedName);\n } else {\n long checksum = activities.calculateChecksum(processedName);\n activities.uploadWithChecksum(\n args.getTargetBucketName(), args.getTargetFilename(), processedName, checksum);\n }\n } finally {\n if (localName != null) { // File was downloaded.\n activities.deleteLocalFile(localName);\n }\n if (processedName != null) { // File was processed.\n activities.deleteLocalFile(processedName);\n }\n }\n}\n\n\nLater, when all that use the old version are completed, the old branch can be removed.\n\npublic void processFile(Arguments args) {\n String localName = null;\n String processedName = null;\n try {\n localName = activities.download(args.getSourceBucketName(), args.getSourceFilename());\n processedName = activities.processFile(localName);\n // getVersion call is left here to ensure that any attempt to replay history\n // for a different version fails. It can be removed later when there is no possibility\n // of this happening.\n Workflow.getVersion("checksumAdded", 1, 1);\n long checksum = activities.calculateChecksum(processedName);\n activities.uploadWithChecksum(\n args.getTargetBucketName(), args.getTargetFilename(), processedName, checksum);\n } finally {\n if (localName != null) { // File was downloaded.\n activities.deleteLocalFile(localName);\n }\n if (processedName != null) { // File was processed.\n activities.deleteLocalFile(processedName);\n }\n }\n}\n\n\nThe ID that is passed to the getVersion call identifies the change. Each change is expected to have its own ID. But if a change spawns multiple places in the code and the new code should be either executed in all of them or in none of them, then they have to share the ID.',normalizedContent:'# versioning\n\nas outlined in the workflow implementation constraints section, code has to be deterministic by taking the same code path when replaying history . any code change that affects the order in which are generated breaks this assumption. the solution that allows updating code of already running is to keep both the old and new code. when replaying, use the code version that the were generated with and when executing a new code path, always take the new code.\n\nuse the workflow.getversion function to return a version of the code that should be executed and then use the returned value to pick a correct branch. let\'s look at an example.\n\npublic void processfile(arguments args) {\n string localname = null;\n string processedname = null;\n try {\n localname = activities.download(args.getsourcebucketname(), args.getsourcefilename());\n processedname = activities.processfile(localname);\n activities.upload(args.gettargetbucketname(), args.gettargetfilename(), processedname);\n } finally {\n if (localname != null) { // file was downloaded.\n activities.deletelocalfile(localname);\n }\n if (processedname != null) { // file was processed.\n activities.deletelocalfile(processedname);\n }\n }\n}\n\n\nnow we decide to calculate the processed file checksum and pass it to upload. 
the correct way to implement this change is:\n\npublic void processfile(arguments args) {\n string localname = null;\n string processedname = null;\n try {\n localname = activities.download(args.getsourcebucketname(), args.getsourcefilename());\n processedname = activities.processfile(localname);\n int version = workflow.getversion("checksumadded", workflow.default_version, 1);\n if (version == workflow.default_version) {\n activities.upload(args.gettargetbucketname(), args.gettargetfilename(), processedname);\n } else {\n long checksum = activities.calculatechecksum(processedname);\n activities.uploadwithchecksum(\n args.gettargetbucketname(), args.gettargetfilename(), processedname, checksum);\n }\n } finally {\n if (localname != null) { // file was downloaded.\n activities.deletelocalfile(localname);\n }\n if (processedname != null) { // file was processed.\n activities.deletelocalfile(processedname);\n }\n }\n}\n\n\nlater, when all that use the old version are completed, the old branch can be removed.\n\npublic void processfile(arguments args) {\n string localname = null;\n string processedname = null;\n try {\n localname = activities.download(args.getsourcebucketname(), args.getsourcefilename());\n processedname = activities.processfile(localname);\n // getversion call is left here to ensure that any attempt to replay history\n // for a different version fails. it can be removed later when there is no possibility\n // of this happening.\n workflow.getversion("checksumadded", 1, 1);\n long checksum = activities.calculatechecksum(processedname);\n activities.uploadwithchecksum(\n args.gettargetbucketname(), args.gettargetfilename(), processedname, checksum);\n } finally {\n if (localname != null) { // file was downloaded.\n activities.deletelocalfile(localname);\n }\n if (processedname != null) { // file was processed.\n activities.deletelocalfile(processedname);\n }\n }\n}\n\n\nthe id that is passed to the getversion call identifies the change. each change is expected to have its own id. 
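\n\nWhen a single logical change touches more than one spot in the workflow code, every spot consults the same change ID, as the next sentence explains. A minimal sketch, with a hypothetical change ID and hypothetical activity names:\n\npublic void processOrder(String orderId) {\n    // Both call sites consult the same "newPaymentFlow" change ID, so a given\n    // execution either takes the new code path at both sites or at neither.\n    int version = Workflow.getVersion("newPaymentFlow", Workflow.DEFAULT_VERSION, 1);\n    if (version == Workflow.DEFAULT_VERSION) {\n        activities.chargeViaOldProvider(orderId);\n    } else {\n        activities.chargeViaNewProvider(orderId);\n    }\n    // ... other steps of the workflow ...\n    version = Workflow.getVersion("newPaymentFlow", Workflow.DEFAULT_VERSION, 1);\n    if (version == Workflow.DEFAULT_VERSION) {\n        activities.refundViaOldProvider(orderId);\n    } else {\n        activities.refundViaNewProvider(orderId);\n    }\n}\n\n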
but if a change spawns multiple places in the code and the new code should be either executed in all of them or in none of them, then they have to share the id.',charsets:{}},{title:"Retries",frontmatter:{layout:"default",title:"Retries",permalink:"/docs/java-client/retries",readingShow:"top"},regularPath:"/docs/04-java-client/12-retries.html",relativePath:"docs/04-java-client/12-retries.md",key:"v-2ef7ad44",path:"/docs/java-client/retries/",headers:[{level:2,title:"RetryOptions",slug:"retryoptions",normalizedTitle:"retryoptions",charIndex:282},{level:3,title:"InitialInterval",slug:"initialinterval",normalizedTitle:"initialinterval",charIndex:339},{level:3,title:"BackoffCoefficient",slug:"backoffcoefficient",normalizedTitle:"backoffcoefficient",charIndex:481},{level:3,title:"MaximumInterval",slug:"maximuminterval",normalizedTitle:"maximuminterval",charIndex:682},{level:3,title:"ExpirationInterval",slug:"expirationinterval",normalizedTitle:"expirationinterval",charIndex:869},{level:3,title:"MaximumAttempts",slug:"maximumattempts",normalizedTitle:"maximumattempts",charIndex:941},{level:3,title:"NonRetriableErrorReasons(via setDoNotRetry)",slug:"nonretriableerrorreasons-via-setdonotretry",normalizedTitle:"nonretriableerrorreasons(via setdonotretry)",charIndex:1466},{level:2,title:"Activity Timeout Usage",slug:"activity-timeout-usage",normalizedTitle:"activity timeout usage",charIndex:2113},{level:2,title:"Activity Timeout Internals",slug:"activity-timeout-internals",normalizedTitle:"activity timeout internals",charIndex:3466},{level:3,title:"Basics without Retry",slug:"basics-without-retry",normalizedTitle:"basics without retry",charIndex:3497},{level:3,title:"Heartbeat timeout",slug:"heartbeat-timeout",normalizedTitle:"heartbeat timeout",charIndex:2519},{level:3,title:"RetryOptions and Activity with Retry",slug:"retryoptions-and-activity-with-retry",normalizedTitle:"retryoptions and activity with retry",charIndex:6151}],codeSwitcherOptions:{},headersStr:"RetryOptions InitialInterval BackoffCoefficient MaximumInterval ExpirationInterval MaximumAttempts NonRetriableErrorReasons(via setDoNotRetry) Activity Timeout Usage Activity Timeout Internals Basics without Retry Heartbeat timeout RetryOptions and Activity with Retry",content:"# Activity and workflow retries\n\nand can fail due to various intermediate conditions. In those cases, we want to retry the failed or child or even the parent . This can be achieved by supplying an optional retry options.\n\n> Note that sometimes it's also referred as RetryPolicy\n\n\n# RetryOptions\n\nA RetryOptions includes the following.\n\n\n# InitialInterval\n\nBackoff interval for the first retry. If coefficient is 1.0 then it is used for all retries. Required, no default value.\n\n\n# BackoffCoefficient\n\nCoefficient used to calculate the next retry backoff interval. The next retry interval is previous interval multiplied by this coefficient. Must be 1 or larger. Default is 2.0.\n\n\n# MaximumInterval\n\nMaximum backoff interval between retries. Exponential backoff leads to interval increase. This value is the cap of the interval. Default is 100x of initial interval.\n\n\n# ExpirationInterval\n\nMaximum time to retry. Either ExpirationInterval or MaximumAttempts is required. When exceeded the retries stop even if maximum retries is not reached yet. 
First (non-retry) attempt is unaffected by this field and is guaranteed to run for the entirety of the workflow timeout duration (ExecutionStartToCloseTimeoutSeconds).\n\n\n# MaximumAttempts\n\nMaximum number of attempts. When exceeded, the retries stop even if the expiration interval is not reached yet. If not set or set to 0, it means unlimited, and relies on ExpirationInterval to stop. Either MaximumAttempts or ExpirationInterval is required.\n\n\n# NonRetriableErrorReasons(via setDoNotRetry)\n\nNon-Retriable errors. This is optional. Cadence server will stop retry if the error reason matches this list. When matching, an exact match is used. So adding RuntimeException.class to this list is going to include only RuntimeException itself, not all of its subclasses. The reason for such behaviour is to be able to support server side retries without knowledge of the Java exception hierarchy. When considering an exception type, the cause of ActivityFailureException and ChildWorkflowFailureException is looked at. Error and CancellationException are never retried and are not even passed to this filter.\n\n\n# Activity Timeout Usage\n\nIt's probably too complicated to learn how to set those timeouts by reading the above. There is an easy way to deal with it.\n\nLocalActivity without retry: Use ScheduleToClose for the overall timeout\n\nRegular Activity without retry:\n\n 1. Use ScheduleToClose for the overall timeout\n 2. Leave ScheduleToStart and StartToClose empty\n 3. If ScheduleToClose is too large (like 10 mins), then set the Heartbeat timeout to a smaller value like 10s. Call the heartbeat API inside the activity regularly.\n\nLocalActivity with retry:\n\n 1. Use ScheduleToClose as the timeout of each attempt.\n 2. Use retryOptions.InitialInterval, retryOptions.BackoffCoefficient, retryOptions.MaximumInterval to control backoff.\n 3. Use retryOptions.ExpirationInterval as the overall timeout of all attempts.\n 4. Leave retryOptions.MaximumAttempts empty.\n\nRegular Activity with retry:\n\n 1. Use ScheduleToClose as the timeout of each attempt\n 2. Leave ScheduleToStart and StartToClose empty\n 3. If ScheduleToClose is too large (like 10 mins), then set the Heartbeat timeout to a smaller value like 10s. Call the heartbeat API inside the activity regularly.\n 4. Use retryOptions.InitialInterval, retryOptions.BackoffCoefficient, retryOptions.MaximumInterval to control backoff.\n 5. Use retryOptions.ExpirationInterval as the overall timeout of all attempts.\n 6. Leave retryOptions.MaximumAttempts empty.\n\n\n# Activity Timeout Internals\n\n\n# Basics without Retry\n\nThings are easier to understand in the world without retry, because that is where Cadence started.\n\n * ScheduleToClose timeout is the overall end-to-end timeout from a workflow's perspective.\n\n * ScheduleToStart timeout is the time allowed for an activity worker to start an activity. If this timeout is exceeded, the activity will return a ScheduleToStart timeout error/exception to the workflow\n\n * StartToClose timeout is the time that an activity is allowed to run. If exceeded, a StartToClose timeout error is returned to the workflow.\n\n * Requirements and defaults:\n \n * Either ScheduleToClose is provided or both of ScheduleToStart and StartToClose are provided.\n * If only ScheduleToClose is provided, then ScheduleToStart and StartToClose default to it.\n * If only ScheduleToStart and StartToClose are provided, then ScheduleToClose = ScheduleToStart + StartToClose.\n * All of them are capped by workflowTimeout. (e.g. 
if workflowTimeout is 1 hour, setting 2 hours for ScheduleToClose will still get 1 hour: ScheduleToClose = Min(ScheduleToClose, workflowTimeout))\n\nSo why do all of them exist?\n\nYou may notice that ScheduleToClose is only useful when ScheduleToClose < ScheduleToStart + StartToClose. Because if ScheduleToClose >= ScheduleToStart + StartToClose, the ScheduleToClose timeout is already enforced by the combination of the other two, and it becomes meaningless.\n\nSo the main use case of ScheduleToClose being less than the sum of the two is that people want to limit the overall timeout of the activity but give more timeout for ScheduleToStart or StartToClose. This is an extremely rare use case.\n\nAlso, the main use case for distinguishing ScheduleToStart and StartToClose is that the workflow may need to do some special handling for a ScheduleToStart timeout error. This is also a very rare use case.\n\nTherefore, you can understand why the TL;DR above recommends only using ScheduleToClose and leaving the other two empty: only in some rare cases may you need them. If you can't think of the use case, then you do not need it.\n\nLocalActivity doesn't have ScheduleToStart/StartToClose because it's started directly inside the workflow worker without server scheduling involved.\n\n\n# Heartbeat timeout\n\nHeartbeating is very important for a long running activity, to prevent it from getting stuck. Not only can bugs cause an activity to get stuck; a regular deployment, host restart, or failure could also cause it. Without heartbeats, the Cadence server couldn't know whether or not the activity is still being worked on. See more details here: https://stackoverflow.com/questions/65118584/solutions-to-stuck-timers-activities-in-cadence-swf-stepfunctions/65118585#65118585\n\n\n# RetryOptions and Activity with Retry\n\nFirst of all, RetryOptions here is for server side backoff retry -- meaning that the retry is managed automatically by Cadence without interacting with workflows. Because the retry is managed by Cadence, the activity has to be specially handled in Cadence history: the started event cannot be written until the activity is closed. Here is some reference: https://stackoverflow.com/questions/65113363/why-an-activity-task-is-scheduled-but-not-started/65113365#65113365\n\nIn fact, workflows can do client side retry on their own. This means the workflow will be managing the retry logic. You can write your own retry function, or there are some helper functions in the SDK, like Workflow.retry in cadence-java-client. Client side retry will show all start events immediately, but there will be many events in the history when retrying a single activity. It's not recommended because of performance issues.\n\nSo what do the options mean:\n\n * ExpirationInterval:\n \n * It replaces the ScheduleToClose timeout to become the actual overall timeout of the activity for all attempts.\n * It's also capped to the workflow timeout like the other three timeout options. ScheduleToClose = Min(ScheduleToClose, workflowTimeout)\n * The timeout of each attempt is StartToClose, but StartToClose defaults to ScheduleToClose as explained above.\n * ScheduleToClose will be extended to ExpirationInterval: ScheduleToClose = Max(ScheduleToClose, ExpirationInterval), and this happens before ScheduleToClose is copied to ScheduleToStart and StartToClose.\n\n * InitialInterval: the interval of the first retry\n\n * BackoffCoefficient: self-explanatory\n\n * MaximumInterval: the maximum interval between retries\n\n * MaximumAttempts: the maximum number of attempts. 
If existing with ExpirationInterval, then retry stops when either one of them is exceeded.\n\n * Requirements and defaults:\n\n * Either MaximumAttempts or ExpirationInterval is required. ExpirationInterval is set to workflowTimeout if not provided.\n\nSince ExpirationInterval is always there, and in fact it's more useful. And I think it's quite confusing to use MaximumAttempts, so I would recommend just use ExpirationInterval. Unless you really need it.",normalizedContent:"# activity and workflow retries\n\nand can fail due to various intermediate conditions. in those cases, we want to retry the failed or child or even the parent . this can be achieved by supplying an optional retry options.\n\n> note that sometimes it's also referred as retrypolicy\n\n\n# retryoptions\n\na retryoptions includes the following.\n\n\n# initialinterval\n\nbackoff interval for the first retry. if coefficient is 1.0 then it is used for all retries. required, no default value.\n\n\n# backoffcoefficient\n\ncoefficient used to calculate the next retry backoff interval. the next retry interval is previous interval multiplied by this coefficient. must be 1 or larger. default is 2.0.\n\n\n# maximuminterval\n\nmaximum backoff interval between retries. exponential backoff leads to interval increase. this value is the cap of the interval. default is 100x of initial interval.\n\n\n# expirationinterval\n\nmaximum time to retry. either expirationinterval or maximumattempts is required. when exceeded the retries stop even if maximum retries is not reached yet. first (non-retry) attempt is unaffected by this field and is guaranteed to run for the entirety of the workflow timeout duration (executionstarttoclosetimeoutseconds).\n\n\n# maximumattempts\n\nmaximum number of attempts. when exceeded the retries stop even if not expired yet. if not set or set to 0, it means unlimited, and relies on expirationinterval to stop. either maximumattempts or expirationinterval is required.\n\n\n# nonretriableerrorreasons(via setdonotretry)\n\nnon-retriable errors. this is optional. cadence server will stop retry if error reason matches this list. when matching an exact match is used. so adding runtimeexception.class to this list is going to include only runtimeexception itself, not all of its subclasses. the reason for such behaviour is to be able to support server side retries without knowledge of java exception hierarchy. when considering an exception type a cause of activityfailureexception and childworkflowfailureexception is looked at. error and cancellationexception are never retried and are not even passed to this filter.\n\n\n# activity timeout usage\n\nit's probably too complicated to learn how to set those timeouts by reading the above. there is an easy way to deal with it.\n\nlocalactivity without retry: use scheduletoclose for overall timeout\n\nregular activity without retry:\n\n 1. use scheduletoclose for overall timeout\n 2. leave scheduletostart and starttoclose empty\n 3. if scheduletoclose is too large(like 10 mins), then set heartbeat timeout to a smaller value like 10s. call heartbeat api inside activity regularly.\n\nlocalactivity with retry:\n\n 1. use scheduletoclose as timeout of each attempt.\n 2. use retryoptions.initialinterval, retryoptions.backoffcoefficient, retryoptions.maximuminterval to control backoff.\n 3. use retryoptions.experiationinterval as overall timeout of all attempts.\n 4. leave retryoptions.maximumattempts empty.\n\nregular activity with retry:\n\n 1. 
use scheduletoclose as timeout of each attempt\n 2. leave scheduletostart and starttoclose empty\n 3. if scheduletoclose is too large(like 10 mins), then set heartbeat timeout to a smaller value like 10s. call heartbeat api inside activity regularly.\n 4. use retryoptions.initialinterval, retryoptions.backoffcoefficient, retryoptions.maximuminterval to control backoff.\n 5. use retryoptions.experiationinterval as overall timeout of all attempts.\n 6. leave retryoptions.maximumattempts empty.\n\n\n# activity timeout internals\n\n\n# basics without retry\n\nthings are easier to understand in the world without retry. because cadence started from it.\n\n * scheduletoclose timeout is the overall end-to-end timeout from a workflow's perspective.\n\n * scheduletostart timeout is the time that activity worker needed to start an activity. exceeding this timeout, activity will return an scheduletostart timeout error/exception to workflow\n\n * starttoclose timeout is the time that an activity needed to run. exceeding this will return starttoclose to workflow.\n\n * requirement and defaults:\n \n * either scheduletoclose is provided or both of scheduletostart and starttoclose are provided.\n * if only scheduletoclose, then scheduletostart and starttoclose are default to it.\n * if only scheduletostart and starttoclose are provided, then scheduletoclose = scheduletostart + starttoclose.\n * all of them are capped by workflowtimeout. (e.g. if workflowtimeout is 1hour, set 2 hour for scheduletoclose will still get 1 hour :scheduletoclose=min(scheduletoclose, workflowtimeout) )\n\nso why are they?\n\nyou may notice that scheduletoclose is only useful when scheduletoclose < scheduletostart + starttoclose. because if scheduletoclose >= scheduletostart+starttoclose the scheduletoclose timeout is already enforced by the combination of the other two, and it become meaningless.\n\nso the main use case of scheduletoclose being less than the sum of two is that people want to limit the overall timeout of the activity but give more timeout for scheduletostart or starttoclose. this is extremely rare use case.\n\nalso the main use case that people want to distinguish scheduletostart and starttoclose is that the workflow may need to do some special handling for scheduletostart timeout error. this is also very rare use case.\n\ntherefore, you can understand why in tl;dr that i recommend only using scheduletoclose but leave the other two empty. because only in some rare cases you may need it. if you can't think of the use case, then you do not need it.\n\nlocalactivity doesn't have scheduletostart/starttoclose because it's started directly inside workflow worker without server scheduling involved.\n\n\n# heartbeat timeout\n\nheartbeat is very important for long running activity, to prevent it from getting stuck. not only bugs can cause activity getting stuck, regular deployment/host restart/failure could also cause it. because without heartbeat, cadence server couldn't know whether or not the activity is still being worked on. see more details about here https://stackoverflow.com/questions/65118584/solutions-to-stuck-timers-activities-in-cadence-swf-stepfunctions/65118585#65118585\n\n\n# retryoptions and activity with retry\n\nfirst of all, here retryoptions is for server side backoff retry -- meaning that the retry is managed automatically by cadence without interacting with workflows. 
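\n\nAs a quick aside, a minimal sketch of wiring the usage recipes above into an activity stub (values are illustrative; GreetingActivities stands in for your activity interface):\n\nGreetingActivities activities =\n    Workflow.newActivityStub(\n        GreetingActivities.class,\n        new ActivityOptions.Builder()\n            .setScheduleToCloseTimeout(Duration.ofMinutes(10)) // timeout of each attempt\n            .setHeartbeatTimeout(Duration.ofSeconds(10)) // because each attempt is long\n            .setRetryOptions(\n                new RetryOptions.Builder()\n                    .setInitialInterval(Duration.ofSeconds(1))\n                    .setBackoffCoefficient(2)\n                    .setMaximumInterval(Duration.ofMinutes(1))\n                    .setExpiration(Duration.ofHours(1)) // overall timeout of all attempts\n                    .build())\n            .build());\n\n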
because retry is managed by cadence, the activity has to be specially handled in cadence history that the started event can not written until the activity is closed. here is some reference: https://stackoverflow.com/questions/65113363/why-an-activity-task-is-scheduled-but-not-started/65113365#65113365\n\nin fact, workflow can do client side retry on their own. this means workflow will be managing the retry logic. you can write your own retry function, or there is some helper function in sdk, like workflow.retry in cadence-java-client. client side retry will show all start events immediately, but there will be many events in the history when retrying for a single activity. it's not recommended because of performance issue.\n\nso what do the options mean:\n\n * expirationinterval:\n \n * it replaces the scheduletoclose timeout to become the actual overall timeout of the activity for all attempts.\n * it's also capped to workflow timeout like other three timeout options. scheduletoclose = min(scheduletoclose, workflowtimeout)\n * the timeout of each attempt is starttoclose, but starttoclose defaults to scheduletoclose like explanation above.\n * scheduletoclose will be extended to expirationinterval: scheduletoclose = max(scheduletoclose, expirationinterval), and this happens before scheduletoclose is copied to scheduletoclose and starttoclose.\n\n * initialinterval: the interval of first retry\n\n * backoffcoefficient: self explained\n\n * maximuminterval: maximum of the interval during retry\n\n * maximumattempts: the maximum attempts. if existing with expirationinterval, then retry stops when either one of them is exceeded.\n\n * requirements and defaults:\n\n * either maximumattempts or expirationinterval is required. expirationinterval is set to workflowtimeout if not provided.\n\nsince expirationinterval is always there, and in fact it's more useful. and i think it's quite confusing to use maximumattempts, so i would recommend just use expirationinterval. unless you really need it.",charsets:{}},{title:"Queries",frontmatter:{layout:"default",title:"Queries",permalink:"/docs/java-client/queries",readingShow:"top"},regularPath:"/docs/04-java-client/11-queries.html",relativePath:"docs/04-java-client/11-queries.md",key:"v-47e211a0",path:"/docs/java-client/queries/",headers:[{level:2,title:"Built-in Query: Stack Trace",slug:"built-in-query-stack-trace",normalizedTitle:"built-in query: stack trace",charIndex:550},{level:2,title:"Customized Query",slug:"customized-query",normalizedTitle:"customized query",charIndex:1055},{level:2,title:"Run Query from Command Line",slug:"run-query-from-command-line",normalizedTitle:"run query from command line",charIndex:2688},{level:2,title:"Run Query from external application code",slug:"run-query-from-external-application-code",normalizedTitle:"run query from external application code",charIndex:4693},{level:2,title:"Consistent Query",slug:"consistent-query",normalizedTitle:"consistent query",charIndex:4803}],codeSwitcherOptions:{},headersStr:"Built-in Query: Stack Trace Customized Query Run Query from Command Line Run Query from external application code Consistent Query",content:'# Queries\n\nQuery is to expose this internal state to the external world Cadence provides a synchronous feature. From the implementer point of view the is exposed as a synchronous callback that is invoked by external entities. 
Multiple such callbacks can be provided per workflow type, exposing different information to different external systems.\n\nQuery callbacks must be read-only, not mutating the workflow state in any way. The other limitation is that the callback cannot contain any blocking code. Both of the above limitations rule out the ability to invoke activities from the query handlers.\n\n\n# Built-in Query: Stack Trace\n\nIf a workflow has been stuck at a state for longer than an expected period of time, you might want to query the current call stack. You can use the Cadence CLI to perform this query. For example:\n\ncadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace\n\nThis command uses __stack_trace, which is a built-in query type supported by the Cadence client library. You can add custom query types to handle queries such as the current state of a workflow, or how many activities the workflow has completed.\n\n\n# Customized Query\n\nCadence provides a query feature that supports synchronously returning any information from a workflow to an external caller.\n\nInterface QueryMethod indicates that the method is a query method. A query method can be used to query workflow state by an external process at any time during its execution. This annotation applies only to workflow interface methods.\n\nSee the example code:\n\npublic interface HelloWorld {\n @WorkflowMethod\n void sayHello(String name);\n\n @SignalMethod\n void updateGreeting(String greeting);\n\n @QueryMethod\n int getCount();\n}\n\npublic static class HelloWorldImpl implements HelloWorld {\n\n private String greeting = "Hello";\n private int count = 0;\n\n @Override\n public void sayHello(String name) {\n while (!"Bye".equals(greeting)) {\n logger.info(++count + ": " + greeting + " " + name + "!");\n String oldGreeting = greeting;\n Workflow.await(() -> !Objects.equals(greeting, oldGreeting));\n }\n logger.info(++count + ": " + greeting + " " + name + "!");\n }\n\n @Override\n public void updateGreeting(String greeting) {\n this.greeting = greeting;\n }\n\n @Override\n public int getCount() {\n return count;\n }\n}\n\n\nThe new getCount method annotated with @QueryMethod was added to the interface definition. It is allowed to have multiple query methods per interface.\n\nThe main restriction on the implementation of the query method is that it is not allowed to modify workflow state in any form. It also is not allowed to block its thread in any way. 
It usually just returns a value derived from the fields of the workflow object.\n\n\n# Run Query from Command Line\n\nLet\'s run the updated worker and send a couple of signals to it:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --workflow_id "HelloQuery" --tasklist HelloWorldTaskList --workflow_type HelloWorld::sayHello --execution_timeout 3600 --input \\"World\\"\nStarted Workflow Id: HelloQuery, run Id: 1925f668-45b5-4405-8cba-74f7c68c3135\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloQuery" --name "HelloWorld::updateGreeting" --input \\"Hi\\"\nSignal workflow succeeded.\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloQuery" --name "HelloWorld::updateGreeting" --input \\"Welcome\\"\nSignal workflow succeeded.\n\n\nThe output:\n\n17:35:50.485 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 1: Hello World!\n17:36:10.483 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 2: Hi World!\n17:36:16.204 [workflow-root] INFO c.u.c.samples.hello.GettingStarted - 3: Welcome World!\n\n\nNow let\'s query the workflow using the CLI:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow query --workflow_id "HelloQuery" --query_type "HelloWorld::getCount"\nQuery result as JSON:\n3\n\n\nOne limitation of the query is that it requires a worker process running because it is executing callback code. An interesting feature of the query is that it works for completed workflows as well. Let\'s complete the workflow by sending "Bye" and query it.\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "HelloQuery" --name "HelloWorld::updateGreeting" --input \\"Bye\\"\nSignal workflow succeeded.\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow query --workflow_id "HelloQuery" --query_type "HelloWorld::getCount"\nQuery result as JSON:\n4\n\n\nThe query method can accept parameters. This might be useful if only part of the workflow state should be returned.\n\n\n# Run Query from external application code\n\nYou may want to query workflows without using the command line. The WorkflowStub without WorkflowOptions is for signal or query; see the sketch further below.\n\n\n# Consistent Query\n\nQuery has two consistency levels, eventual and strong. Consider if you were to signal a workflow and then immediately query the workflow:\n\ncadence-cli --domain samples-domain workflow signal -w my_workflow_id -r my_run_id -n signal_name -if ./input.json\n\ncadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state\n\nIn this example, if the signal were to change the workflow state, the query may or may not see that state update reflected in the query result. This is what it means for a query to be eventually consistent.\n\nQuery has another consistency level called strong consistency. A strongly consistent query is guaranteed to be based on workflow state which includes all events that came before the query was issued. An event is considered to have come before a query if the call creating the external event returned success before the query was issued. 
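\n\nReturning to "Run Query from external application code" above: a minimal sketch, assuming the HelloWorld interface and the "HelloQuery" workflow from this page, and a Cadence service reachable with default client options:\n\nWorkflowClient workflowClient =\n    WorkflowClient.newInstance(\n        new WorkflowServiceTChannel(ClientOptions.defaultInstance()),\n        WorkflowClientOptions.newBuilder().setDomain("test-domain").build());\n// A stub created without WorkflowOptions; it can only signal or query the existing workflow.\nHelloWorld workflow = workflowClient.newWorkflowStub(HelloWorld.class, "HelloQuery");\nint count = workflow.getCount(); // invokes the @QueryMethod\n\n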
External events which are created while the query is outstanding may or may not be reflected in the workflow state the query result is based on.\n\nIn order to run a consistent query through the CLI, do the following:\n\ncadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state --qcl strong\n\nIn order to run a strongly consistent query using application code, you need to use the service client.\n\nWhen using strongly consistent queries you should expect higher latency than eventually consistent queries.',normalizedContent:'# queries\n\nto expose this internal state to the external world, cadence provides a synchronous query feature. from the implementer point of view, the query is exposed as a synchronous callback that is invoked by external entities. multiple such callbacks can be provided per workflow type, exposing different information to different external systems.\n\nquery callbacks must be read-only, not mutating the workflow state in any way. the other limitation is that the callback cannot contain any blocking code. both of the above limitations rule out the ability to invoke activities from the query handlers.\n\n\n# built-in query: stack trace\n\nif a workflow has been stuck at a state for longer than an expected period of time, you might want to query the current call stack. you can use the cadence cli to perform this query. for example:\n\ncadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace\n\nthis command uses __stack_trace, which is a built-in query type supported by the cadence client library. you can add custom query types to handle queries such as the current state of a workflow, or how many activities the workflow has completed.\n\n\n# customized query\n\ncadence provides a query feature that supports synchronously returning any information from a workflow to an external caller.\n\ninterface querymethod indicates that the method is a query method. a query method can be used to query workflow state by an external process at any time during its execution. this annotation applies only to workflow interface methods.\n\nsee the example code:\n\npublic interface helloworld {\n @workflowmethod\n void sayhello(string name);\n\n @signalmethod\n void updategreeting(string greeting);\n\n @querymethod\n int getcount();\n}\n\npublic static class helloworldimpl implements helloworld {\n\n private string greeting = "hello";\n private int count = 0;\n\n @override\n public void sayhello(string name) {\n while (!"bye".equals(greeting)) {\n logger.info(++count + ": " + greeting + " " + name + "!");\n string oldgreeting = greeting;\n workflow.await(() -> !objects.equals(greeting, oldgreeting));\n }\n logger.info(++count + ": " + greeting + " " + name + "!");\n }\n\n @override\n public void updategreeting(string greeting) {\n this.greeting = greeting;\n }\n\n @override\n public int getcount() {\n return count;\n }\n}\n\n\nthe new getcount method annotated with @querymethod was added to the interface definition. it is allowed to have multiple query methods per interface.\n\nthe main restriction on the implementation of the query method is that it is not allowed to modify workflow state in any form. it also is not allowed to block its thread in any way. 
it usually just returns a value derived from the fields of the object.\n\n\n# run query from command line\n\nlet\'s run the updated and send a couple to it:\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow start --workflow_id "helloquery" --tasklist helloworldtasklist --workflow_type helloworld::sayhello --execution_timeout 3600 --input \\"world\\"\nstarted workflow id: helloquery, run id: 1925f668-45b5-4405-8cba-74f7c68c3135\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "helloquery" --name "helloworld::updategreeting" --input \\"hi\\"\nsignal workflow succeeded.\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "helloquery" --name "helloworld::updategreeting" --input \\"welcome\\"\nsignal workflow succeeded.\n\n\nthe output:\n\n17:35:50.485 [workflow-root] info c.u.c.samples.hello.gettingstarted - 1: hello world!\n17:36:10.483 [workflow-root] info c.u.c.samples.hello.gettingstarted - 2: hi world!\n17:36:16.204 [workflow-root] info c.u.c.samples.hello.gettingstarted - 3: welcome world!\n\n\nnow let\'s the using the\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow query --workflow_id "helloquery" --query_type "helloworld::getcount"\n:query:query: result as json:\n3\n\n\none limitation of the is that it requires a process running because it is executing callback code. an interesting feature of the is that it works for completed as well. let\'s complete the by sending "bye" and it.\n\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow signal --workflow_id "helloquery" --name "helloworld::updategreeting" --input \\"bye\\"\nsignal workflow succeeded.\ncadence: docker run --network=host --rm ubercadence/cli:master --do test-domain workflow query --workflow_id "helloquery" --query_type "helloworld::getcount"\n:query:query: result as json:\n4\n\n\nthe method can accept parameters. this might be useful if only part of the state should be returned.\n\n\n# run query from external application code\n\nthe workflowstub without workflowoptions is for signal or query\n\n\n# consistent query\n\nhas two consistency levels, eventual and strong. consider if you were to a and then immediately the\n\ncadence-cli --domain samples-domain workflow signal -w my_workflow_id -r my_run_id -n signal_name -if ./input.json\n\ncadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state\n\nin this example if were to change state, may or may not see that state update reflected in the result. this is what it means for to be eventually consistent.\n\nhas another consistency level called strong consistency. a strongly consistent is guaranteed to be based on state which includes all that came before the was issued. an is considered to have come before a if the call creating the external returned success before the was issued. 
external events which are created while the query is outstanding may or may not be reflected in the workflow state the query result is based on.\n\nin order to run a consistent query through the cli, do the following:\n\ncadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state --qcl strong\n\nin order to run a strongly consistent query using application code, you need to use the service client.\n\nwhen using strongly consistent queries you should expect higher latency than eventually consistent queries.',charsets:{}},{title:"Child workflows",frontmatter:{layout:"default",title:"Child workflows",permalink:"/docs/java-client/child-workflows",readingShow:"top"},regularPath:"/docs/04-java-client/13-child-workflows.html",relativePath:"docs/04-java-client/13-child-workflows.md",key:"v-272408a2",path:"/docs/java-client/child-workflows/",codeSwitcherOptions:{},headersStr:null,content:'# Child workflows\n\nBesides activities, a workflow can also orchestrate other workflows.\n\nworkflow.ExecuteChildWorkflow enables the scheduling of other workflows from within a workflow\'s implementation. The parent workflow has the ability to monitor and impact the lifecycle of the child workflow, similar to the way it does for an activity that it invoked.\n\npublic static class GreetingWorkflowImpl implements GreetingWorkflow {\n\n @Override\n public String getGreeting(String name) {\n // Workflows are stateful. So a new stub must be created for each new child.\n GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);\n\n // This is a non-blocking call that returns immediately.\n // Use child.composeGreeting("Hello", name) to call synchronously.\n Promise<String> greeting = Async.function(child::composeGreeting, "Hello", name);\n // Do something else here.\n return greeting.get(); // blocks waiting for the child to complete.\n }\n\n // This example shows how a parent workflow can return right after starting a child workflow,\n // and let the child run by itself.\n private String demoAsyncChildRun(String name) {\n GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);\n // non-blocking call that initiates the child workflow\n Async.function(child::composeGreeting, "Hello", name);\n // instead of using greeting.get() to block till the child completes,\n // sometimes we just want to return the parent immediately and keep the child running\n Promise<WorkflowExecution> childPromise = Workflow.getWorkflowExecution(child);\n childPromise.get(); // block until the child has started,\n // otherwise the child may not start because the parent completes first.\n return "let child run, parent just return";\n }\n}\n\n\nWorkflow.newChildWorkflowStub returns a client-side stub that implements a child workflow interface. It takes a child workflow type and optional child workflow options as arguments. Workflow options may be needed to override the timeouts and task list if they differ from the ones defined in the @WorkflowMethod annotation or parent workflow.\n\nThe first call to the child stub must always be to a method annotated with @WorkflowMethod. Similar to activities, a call can be made synchronous or asynchronous by using Async#function or Async#procedure. The synchronous call blocks until a child workflow completes. The asynchronous call returns a Promise that can be used to wait for the completion. After an async call returns the stub, it can be used to send signals to the child by calling methods annotated with @SignalMethod. Querying a child workflow by calling methods annotated with @QueryMethod from within workflow code is not supported. 
However, queries can be done from activities using the provided WorkflowClient stub.\n\nRunning two children in parallel:\n\npublic static class GreetingWorkflowImpl implements GreetingWorkflow {\n\n @Override\n public String getGreeting(String name) {\n\n // Workflows are stateful, so a new stub must be created for each new child.\n GreetingChild child1 = Workflow.newChildWorkflowStub(GreetingChild.class);\n Promise<String> greeting1 = Async.function(child1::composeGreeting, "Hello", name);\n\n // Both children will run concurrently.\n GreetingChild child2 = Workflow.newChildWorkflowStub(GreetingChild.class);\n Promise<String> greeting2 = Async.function(child2::composeGreeting, "Bye", name);\n\n // Do something else here.\n ...\n return "First: " + greeting1.get() + ", second: " + greeting2.get();\n }\n}\n\n\nTo send a signal to a child, call a method annotated with @SignalMethod:\n\npublic interface GreetingChild {\n @WorkflowMethod\n String composeGreeting(String greeting, String name);\n\n @SignalMethod\n void updateName(String name);\n}\n\npublic static class GreetingWorkflowImpl implements GreetingWorkflow {\n\n @Override\n public String getGreeting(String name) {\n GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);\n Promise<String> greeting = Async.function(child::composeGreeting, "Hello", name);\n child.updateName("Cadence");\n return greeting.get();\n }\n}\n\n\nCalling methods annotated with @QueryMethod is not allowed from within workflow code.',normalizedContent:'# child workflows\n\nbesides activities, a workflow can also orchestrate other workflows.\n\nworkflow.executechildworkflow enables the scheduling of other workflows from within a workflow\'s implementation. the parent workflow has the ability to monitor and impact the lifecycle of the child workflow, similar to the way it does for an activity that it invoked.\n\npublic static class greetingworkflowimpl implements greetingworkflow {\n\n @override\n public string getgreeting(string name) {\n // workflows are stateful. so a new stub must be created for each new child.\n greetingchild child = workflow.newchildworkflowstub(greetingchild.class);\n\n // this is a non-blocking call that returns immediately.\n // use child.composegreeting("hello", name) to call synchronously.\n promise<string> greeting = async.function(child::composegreeting, "hello", name);\n // do something else here.\n return greeting.get(); // blocks waiting for the child to complete.\n }\n\n // this example shows how a parent workflow can return right after starting a child workflow,\n // and let the child run by itself.\n private string demoasyncchildrun(string name) {\n greetingchild child = workflow.newchildworkflowstub(greetingchild.class);\n // non-blocking call that initiates the child workflow\n async.function(child::composegreeting, "hello", name);\n // instead of using greeting.get() to block till the child completes,\n // sometimes we just want to return the parent immediately and keep the child running\n promise<workflowexecution> childpromise = workflow.getworkflowexecution(child);\n childpromise.get(); // block until the child has started,\n // otherwise the child may not start because the parent completes first.\n return "let child run, parent just return";\n }\n}\n\n\nworkflow.newchildworkflowstub returns a client-side stub that implements a child workflow interface. it takes a child workflow type and optional child workflow options as arguments. workflow options may be needed to override the timeouts and task list if they differ from the ones defined in the @workflowmethod annotation or parent workflow.\n\nthe first call to the child stub must always be to a method annotated with @workflowmethod. 
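\n\nAs mentioned above, optional child workflow options can be passed when creating the child stub. A minimal sketch, with illustrative timeout and task list values and the GreetingChild interface from the examples above:\n\nGreetingChild child =\n    Workflow.newChildWorkflowStub(\n        GreetingChild.class,\n        new ChildWorkflowOptions.Builder()\n            .setExecutionStartToCloseTimeout(Duration.ofMinutes(5))\n            .setTaskList("HelloWorldTaskList")\n            .build());\n\n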
similar to , a call can be made synchronous or asynchronous by using async#function or async#procedure. the synchronous call blocks until a child completes. the asynchronous call returns a promise that can be used to wait for the completion. after an async call returns the stub, it can be used to send to the child by calling methods annotated with @signalmethod. a child by calling methods annotated with @querymethod from within code is not supported. however, can be done from using the provided workflowclient stub.\n\nrunning two children in parallel:\n\npublic static class greetingworkflowimpl implements greetingworkflow {\n\n @override\n public string getgreeting(string name) {\n\n // workflows are stateful, so a new stub must be created for each new child.\n greetingchild child1 = workflow.newchildworkflowstub(greetingchild.class);\n promise greeting1 = async.function(child1::composegreeting, "hello", name);\n\n // both children will run concurrently.\n greetingchild child2 = workflow.newchildworkflowstub(greetingchild.class);\n promise greeting2 = async.function(child2::composegreeting, "bye", name);\n\n // do something else here.\n ...\n return "first: " + greeting1.get() + ", second: " + greeting2.get();\n }\n}\n\n\nto send a to a child, call a method annotated with @signalmethod:\n\npublic interface greetingchild {\n @workflowmethod\n string composegreeting(string greeting, string name);\n\n @signalmethod\n void updatename(string name);\n}\n\npublic static class greetingworkflowimpl implements greetingworkflow {\n\n @override\n public string getgreeting(string name) {\n greetingchild child = workflow.newchildworkflowstub(greetingchild.class);\n promise greeting = async.function(child::composegreeting, "hello", name);\n child.updatename("cadence");\n return greeting.get();\n }\n}\n\n\ncalling methods annotated with @querymethod is not allowed from within code.',charsets:{}},{title:"Exception Handling",frontmatter:{layout:"default",title:"Exception Handling",permalink:"/docs/java-client/exception-handling",readingShow:"top"},regularPath:"/docs/04-java-client/14-exception-handling.html",relativePath:"docs/04-java-client/14-exception-handling.md",key:"v-d965e2bc",path:"/docs/java-client/exception-handling/",codeSwitcherOptions:{},headersStr:null,content:'# Exception Handling\n\nBy default, Exceptions thrown by an activity are received by the workflow wrapped into an com.uber.cadence.workflow.ActivityFailureException,\n\nExceptions thrown by a child workflow are received by a parent workflow wrapped into a com.uber.cadence.workflow.ChildWorkflowFailureException\n\nExceptions thrown by a workflow are received by a workflow client wrapped into com.uber.cadence.client.WorkflowFailureException.\n\nIn this example a Workflow Client executes a workflow which executes a child workflow which executes an activity which throws an IOException. 
The resulting exception stack trace is:\n\n com.uber.cadence.client.WorkflowFailureException: WorkflowType="GreetingWorkflow::getGreeting", WorkflowID="38b9ce7a-e370-4cd8-a9f3-35e7295f7b3d", RunID="37ceb58c-9271-4fca-b5aa-ba06c5495214\n at com.uber.cadence.internal.dispatcher.UntypedWorkflowStubImpl.getResult(UntypedWorkflowStubImpl.java:139)\n at com.uber.cadence.internal.dispatcher.UntypedWorkflowStubImpl.getResult(UntypedWorkflowStubImpl.java:111)\n at com.uber.cadence.internal.dispatcher.WorkflowExternalInvocationHandler.startWorkflow(WorkflowExternalInvocationHandler.java:187)\n at com.uber.cadence.internal.dispatcher.WorkflowExternalInvocationHandler.invoke(WorkflowExternalInvocationHandler.java:113)\n at com.sun.proxy.$Proxy2.getGreeting(Unknown Source)\n at com.uber.cadence.samples.hello.HelloException.main(HelloException.java:117)\n Caused by: com.uber.cadence.workflow.ChildWorkflowFailureException: WorkflowType="GreetingChild::composeGreeting", ID="37ceb58c-9271-4fca-b5aa-ba06c5495214:1", RunID="47859b47-da4c-4225-876a-462421c98c72, EventID=10\n at java.lang.Thread.getStackTrace(Thread.java:1559)\n at com.uber.cadence.internal.dispatcher.ChildWorkflowInvocationHandler.executeChildWorkflow(ChildWorkflowInvocationHandler.java:114)\n at com.uber.cadence.internal.dispatcher.ChildWorkflowInvocationHandler.invoke(ChildWorkflowInvocationHandler.java:71)\n at com.sun.proxy.$Proxy5.composeGreeting(Unknown Source:0)\n at com.uber.cadence.samples.hello.HelloException$GreetingWorkflowImpl.getGreeting(HelloException.java:70)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method:0)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:498)\n at com.uber.cadence.internal.worker.POJOWorkflowImplementationFactory$POJOWorkflowImplementation.execute(POJOWorkflowImplementationFactory.java:160)\n Caused by: com.uber.cadence.workflow.ActivityFailureException: ActivityType="GreetingActivities::composeGreeting" ActivityID="1", EventID=7\n at java.lang.Thread.getStackTrace(Thread.java:1559)\n at com.uber.cadence.internal.dispatcher.ActivityInvocationHandler.invoke(ActivityInvocationHandler.java:75)\n at com.sun.proxy.$Proxy6.composeGreeting(Unknown Source:0)\n at com.uber.cadence.samples.hello.HelloException$GreetingChildImpl.composeGreeting(HelloException.java:85)\n ... 5 more\n Caused by: java.io.IOException: Hello World!\n at com.uber.cadence.samples.hello.HelloException$GreetingActivitiesImpl.composeGreeting(HelloException.java:93)\n at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method:0)\n at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.lang.reflect.Method.invoke(Method.java:498)\n at com.uber.cadence.internal.worker.POJOActivityImplementationFactory$POJOActivityImplementation.execute(POJOActivityImplementationFactory.java:162)\n\n\nNote that IOException is a checked exception. The standard Java way of adding throws IOException to method signature of activity, child and workflow interfaces is not going to help. It is because at all levels it is never received directly, but in wrapped form. Propagating it without wrapping would not allow adding additional context information like activity, child workflow and parent workflow types and IDs. 
The Cadence library solution is to provide a special wrapper method, Workflow.wrap(Exception), which wraps a checked exception in a special runtime exception. It is special because the framework strips it when chaining exceptions across logical process boundaries. In this example, the IOException is attached directly as the cause of the ActivityFailureException, even though it was wrapped when rethrown.

public class HelloException {

  static final String TASK_LIST = "HelloException";

  public interface GreetingWorkflow {
    @WorkflowMethod
    String getGreeting(String name);
  }

  public interface GreetingChild {
    @WorkflowMethod
    String composeGreeting(String greeting, String name);
  }

  public interface GreetingActivities {
    String composeGreeting(String greeting, String name);
  }

  /** Parent implementation that calls GreetingChild#composeGreeting. */
  public static class GreetingWorkflowImpl implements GreetingWorkflow {

    @Override
    public String getGreeting(String name) {
      GreetingChild child = Workflow.newChildWorkflowStub(GreetingChild.class);
      return child.composeGreeting("Hello", name);
    }
  }

  /** Child workflow implementation. */
  public static class GreetingChildImpl implements GreetingChild {
    private final GreetingActivities activities =
        Workflow.newActivityStub(
            GreetingActivities.class,
            new ActivityOptions.Builder()
                .setScheduleToCloseTimeout(Duration.ofSeconds(10))
                .build());

    @Override
    public String composeGreeting(String greeting, String name) {
      return activities.composeGreeting(greeting, name);
    }
  }

  static class GreetingActivitiesImpl implements GreetingActivities {
    @Override
    public String composeGreeting(String greeting, String name) {
      try {
        throw new IOException(greeting + " " + name + "!");
      } catch (IOException e) {
        // Wrap the exception, as checked exceptions in activity and workflow
        // interface methods are prohibited. It will be unwrapped and attached
        // as a cause to the ActivityFailureException.
        throw Workflow.wrap(e);
      }
    }
  }

  public static void main(String[] args) {
    // Get a new client.
    // NOTE: to set different options, you can do it like this:
    // ClientOptions.newBuilder().setRpcTimeout(5 * 1000).build();
    WorkflowClient workflowClient =
        WorkflowClient.newInstance(
            new WorkflowServiceTChannel(ClientOptions.defaultInstance()),
            WorkflowClientOptions.newBuilder().setDomain(DOMAIN).build());
    // Get a worker to poll the task list.
    WorkerFactory factory = WorkerFactory.newInstance(workflowClient);
    Worker worker = factory.newWorker(TASK_LIST);
    worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class, GreetingChildImpl.class);
    worker.registerActivitiesImplementations(new GreetingActivitiesImpl());
    factory.start();

    WorkflowOptions workflowOptions =
        new WorkflowOptions.Builder()
            .setTaskList(TASK_LIST)
            .setExecutionStartToCloseTimeout(Duration.ofSeconds(30))
            .build();
    GreetingWorkflow workflow =
        workflowClient.newWorkflowStub(GreetingWorkflow.class, workflowOptions);
    try {
      workflow.getGreeting("World");
      throw new IllegalStateException("unreachable");
    } catch (WorkflowException e) {
      Throwable cause = Throwables.getRootCause(e);
      // prints "Hello World!"
      System.out.println(cause.getMessage());
      System.out.println("\nStack Trace:\n" + Throwables.getStackTraceAsString(e));
    }
    System.exit(0);
  }
}

The code is slightly different if you are using a client version prior to 3.0.0:

public static void main(String[] args) {
  Worker.Factory factory = new Worker.Factory(DOMAIN);
  Worker worker = factory.newWorker(TASK_LIST);
  worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class, GreetingChildImpl.class);
  worker.registerActivitiesImplementations(new GreetingActivitiesImpl());
  factory.start();

  WorkflowClient workflowClient = WorkflowClient.newInstance(DOMAIN);
  WorkflowOptions workflowOptions =
      new WorkflowOptions.Builder()
          .setTaskList(TASK_LIST)
          .setExecutionStartToCloseTimeout(Duration.ofSeconds(30))
          .build();
  GreetingWorkflow workflow =
      workflowClient.newWorkflowStub(GreetingWorkflow.class, workflowOptions);
  try {
    workflow.getGreeting("World");
    throw new IllegalStateException("unreachable");
  } catch (WorkflowException e) {
    Throwable cause = Throwables.getRootCause(e);
    // prints "Hello World!"
    System.out.println(cause.getMessage());
    System.out.println("\nStack Trace:\n" + Throwables.getStackTraceAsString(e));
  }
  System.exit(0);
}
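A workflow does not have to let the wrapped failure propagate; it can catch the ActivityFailureException and branch on the attached cause. The following is a minimal sketch, not part of the sample above, that reuses the child workflow from the example; the fallback string is illustrative only:

/** Variant of GreetingChildImpl that handles the activity failure itself. */
public static class RecoveringGreetingChildImpl implements GreetingChild {
  private final GreetingActivities activities =
      Workflow.newActivityStub(
          GreetingActivities.class,
          new ActivityOptions.Builder()
              .setScheduleToCloseTimeout(Duration.ofSeconds(10))
              .build());

  @Override
  public String composeGreeting(String greeting, String name) {
    try {
      return activities.composeGreeting(greeting, name);
    } catch (ActivityFailureException e) {
      // The original IOException is available in the cause chain.
      if (Throwables.getRootCause(e) instanceof IOException) {
        return greeting + " " + name + " (recovered)"; // illustrative fallback
      }
      throw e; // unknown failure: let the child workflow fail
    }
  }
}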
# Worker service

A worker (or worker service) is a service that hosts the workflow and activity implementations. The worker polls the Cadence service for tasks, performs those tasks, and communicates task execution results back to the Cadence service. Worker services are developed, deployed, and operated by Cadence customers.

You can run a Cadence worker in a new or an existing service. Use the framework APIs to start the Cadence worker and link in all activity and workflow implementations that you require the service to execute.

WorkerFactory factory = WorkerFactory.newInstance(workflowClient,
    WorkerFactoryOptions.newBuilder()
        .setMaxWorkflowThreadCount(1000)
        .setStickyCacheSize(100)
        .setDisableStickyExecution(false)
        .build());
Worker worker = factory.newWorker(TASK_LIST,
    WorkerOptions.newBuilder()
        .setMaxConcurrentActivityExecutionSize(100)
        .setMaxConcurrentWorkflowExecutionSize(100)
        .build());

// Workflows are stateful, so you need a type to create instances.
worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class);
// Activities are stateless and thread safe, so a shared instance is used.
worker.registerActivitiesImplementations(new GreetingActivitiesImpl());
// Start listening to the workflow and activity task lists.
factory.start();

The code is slightly different if you are using a client version prior to 3.0.0:

Worker.Factory factory = new Worker.Factory(DOMAIN,
    new Worker.FactoryOptions.Builder()
        .setMaxWorkflowThreadCount(1000)
        .setCacheMaximumSize(100)
        .setDisableStickyExecution(false)
        .build());
Worker worker = factory.newWorker(TASK_LIST,
    new WorkerOptions.Builder()
        .setMaxConcurrentActivityExecutionSize(100)
        .setMaxConcurrentWorkflowExecutionSize(100)
        .build());
// Workflows are stateful, so you need a type to create instances.
worker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class);
// Activities are stateless and thread safe, so a shared instance is used.
worker.registerActivitiesImplementations(new GreetingActivitiesImpl());
// Start listening to the workflow and activity task lists.
factory.start();

WorkerFactoryOptions holds the settings that are shared across all workers on the host, such as the thread pool and the sticky cache. In WorkerOptions you can customize per-worker settings such as poller options and the maximum activities per second.
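A single factory can host workers for more than one task list, each with its own registrations. A minimal sketch using only the calls shown above; the second task list name and ReportWorkflowImpl are hypothetical:

// One factory shares the thread pool and sticky cache across its workers.
WorkerFactory factory = WorkerFactory.newInstance(workflowClient);

Worker greetingWorker = factory.newWorker("HelloActivity");
greetingWorker.registerWorkflowImplementationTypes(GreetingWorkflowImpl.class);
greetingWorker.registerActivitiesImplementations(new GreetingActivitiesImpl());

// Hypothetical second task list hosted by the same process.
Worker reportWorker = factory.newWorker("ReportTaskList");
reportWorker.registerWorkflowImplementationTypes(ReportWorkflowImpl.class);

// One start() call begins polling both task lists.
factory.start();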
# Continue as new

Workflows that need to rerun periodically could naively be implemented as a big for loop with a sleep, where the entire logic of the workflow is inside the body of the loop. The problem with this approach is that the history for that workflow will keep growing until it reaches the maximum size enforced by the service.

ContinueAsNew is the low-level construct that enables implementing such workflows without the risk of failures down the road. The operation atomically completes the current execution and starts a new execution of the workflow with the same workflow ID. The new execution will not carry over any history from the old execution.

@Override
public void greet(String name) {
  activities.greet("Hello " + name + "!");
  Workflow.continueAsNew(name);
}
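A common shape for a periodic workflow is to do a bounded number of iterations per run and then continue as new, so no single run accumulates a long history. A sketch assuming the same activities stub as above; the iteration count and sleep interval are arbitrary:

@Override
public void greet(String name) {
  // Bounded work per run keeps each execution's history small.
  for (int i = 0; i < 10; i++) {
    activities.greet("Hello " + name + "!");
    Workflow.sleep(Duration.ofHours(1));
  }
  // Atomically completes this run and starts a fresh one with the same
  // workflow ID and an empty history.
  Workflow.continueAsNew(name);
}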
# Workflow Replay and Shadowing

In the Versioning section, we mentioned that incompatible changes to workflow definition code can cause non-deterministic issues when processing workflow tasks if versioning is not done correctly. However, it may be hard to tell whether a particular change is incompatible and whether versioning logic is needed. To help you identify incompatible changes and catch them before production traffic is impacted, we implemented Workflow Replayer and Workflow Shadower.

# Workflow Replayer

Workflow Replayer is a testing component for replaying existing workflow histories against a workflow definition. The replaying logic is the same as the one used for processing workflow tasks, so if there is any incompatible change in the workflow definition, the replay test will fail.

# Write a Replay Test

# Step 1: Prepare workflow histories

The replayer can read workflow history from a local JSON file or fetch it directly from the Cadence server. If you would like to use the first method, you can use the following CLI command; otherwise you can skip to the next step.

cadence --do <domain> workflow show --wid <workflowID> --rid <runID> --of <output file name>

The dumped workflow history will be stored, in JSON format, in the file at the path you specified.

# Step 2: Call the replay method

Once you have the workflow history, or have a connection to the Cadence server for fetching history, call one of the replay methods to start the replay test.

// if workflow history has been loaded into memory
WorkflowReplayer.replayWorkflowExecution(history, MyWorkflowImpl.class);

// if workflow history is stored in a json file
WorkflowReplayer.replayWorkflowExecutionFromResource("workflowHistory.json", MyWorkflowImpl.class);

// if workflow history is read from a File
WorkflowReplayer.replayWorkflowExecution(historyFileObject, MyWorkflowImpl.class);

# Step 3: Catch the returned exception

If an exception is thrown by the replay method, it means there is an incompatible change in the workflow definition, and the error message will contain more information about where the non-deterministic error happens.

# Sample Replay Test

This sample is also available in our samples repo.

public class HelloActivityReplayTest {
  @Test
  public void testReplay() throws Exception {
    WorkflowReplayer.replayWorkflowExecutionFromResource(
        "HelloActivity.json", HelloActivity.GreetingWorkflowImpl.class);
  }
}

# Workflow Shadower

Workflow Replayer works well for verifying compatibility against a small number of workflow histories. If there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. Directly fetching histories from the Cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.

Workflow Shadower is built on top of Workflow Replayer to address this problem. The basic idea of shadowing is: scan workflows based on the filters you defined, fetch the history for each workflow in the scan result from the Cadence server, and run the replay test. It can be run either as a test, to serve local development purposes, or as a workflow in your worker, to continuously replay production workflows.

# Shadow Options

Complete documentation on shadow options, including default and accepted values, can be found here. The following sections give a brief description of each option.

# Scan Filters

 * WorkflowQuery: If you are familiar with our advanced visibility query syntax, you can specify a query directly. If specified, all other scan filters must be left empty.
 * WorkflowTypes: A list of workflow type names.
 * WorkflowStatuses: A list of workflow statuses.
 * WorkflowStartTimeFilter: Min and max timestamps for workflow start time.
 * WorkflowSamplingRate: Sampling rate applied to the scan result before executing the replay test.

# Shadow Exit Condition

 * ExpirationInterval: Shadowing will exit when the specified interval has passed.
 * ShadowCount: Shadowing will exit after this number of workflows has been replayed. Note: a replay may be skipped due to errors such as a failure to fetch history or a history that is too short. Skipped workflows are not counted toward ShadowCount.

# Shadow Mode

 * Normal: Shadowing completes after all workflows matching WorkflowQuery (after sampling) have been replayed, or when the exit condition is met.
 * Continuous: A new round of shadowing starts after all workflows matching WorkflowQuery have been replayed. There is a 5-minute wait period between rounds, and currently this wait period is not configurable. Shadowing completes only when the ExitCondition is met, so an ExitCondition must be specified when using this mode.

# Shadow Concurrency

 * Concurrency: Workflow replay concurrency. If not specified, it defaults to 1. For local shadowing, an error is returned if a value higher than 1 is specified.

# Local Shadowing Test

A local shadowing test is similar to the replay test. First create a workflow shadower with optional shadow and replay options, then register the workflows that need to be shadowed. Finally, call the run method to start the shadowing. The method returns when shadowing has finished or when a non-deterministic error is found.

Here's a simple example. The example is also available here.

public void testShadowing() throws Throwable {
  IWorkflowService service = new WorkflowServiceTChannel(ClientOptions.defaultInstance());

  ShadowingOptions options = ShadowingOptions
      .newBuilder()
      .setDomain(DOMAIN)
      .setShadowMode(Mode.Normal)
      .setWorkflowTypes(Lists.newArrayList("GreetingWorkflow::getGreeting"))
      .setWorkflowStatuses(Lists.newArrayList(WorkflowStatus.OPEN, WorkflowStatus.CLOSED))
      .setExitCondition(new ExitCondition().setExpirationIntervalInSeconds(60))
      .build();
  WorkflowShadower shadower = new WorkflowShadower(service, options, TASK_LIST);
  shadower.registerWorkflowImplementationTypes(HelloActivity.GreetingWorkflowImpl.class);

  shadower.run();
}

# Shadowing Worker

NOTE:

 * All shadow workflows run in one Cadence system domain, and right now every user domain can have only one shadow workflow at a time.
 * The Cadence server used for scanning and getting workflow history will also be the Cadence server running your shadow workflow. Currently, there is no way to specify different Cadence servers for hosting the shadowing workflow and for scanning/fetching workflows.

Your worker can also be configured to run in shadow mode, running shadow tests as a workflow. This is useful when there are many workflows that need to be replayed. Using a workflow makes sure the shadowing won't accidentally fail in the middle, and the replay load can be distributed by deploying more shadow-mode workers. It can also be incorporated into your deployment process to make sure there are no failed replay checks before deploying your change to production workers.

When running in shadow mode, the normal decision worker is disabled so that it won't update any production workflows. A special shadow activity worker is started to execute activities for scanning and replaying workflows. The actual shadow workflow logic is controlled by the Cadence server, and your worker is only responsible for scanning and replaying workflows.

Replay succeeded, skipped, and failed metrics are emitted by your worker when executing the shadow workflow, and you can monitor those metrics to see whether there are any incompatible changes.

To enable shadow mode, initialize a shadowing worker and pass in the shadowing options. Here is an example.
The example is also available here:

WorkflowClient workflowClient =
    WorkflowClient.newInstance(
        new WorkflowServiceTChannel(ClientOptions.defaultInstance()),
        WorkflowClientOptions.newBuilder().setDomain(DOMAIN).build());
ShadowingOptions options = ShadowingOptions
    .newBuilder()
    .setDomain(DOMAIN)
    .setShadowMode(Mode.Normal)
    .setWorkflowTypes(Lists.newArrayList("GreetingWorkflow::getGreeting"))
    .setWorkflowStatuses(Lists.newArrayList(WorkflowStatus.OPEN, WorkflowStatus.CLOSED))
    .setExitCondition(new ExitCondition().setExpirationIntervalInSeconds(60))
    .build();

ShadowingWorker shadowingWorker = new ShadowingWorker(
    workflowClient,
    "HelloActivity",
    WorkerOptions.defaultInstance(),
    options);
shadowingWorker.registerWorkflowImplementationTypes(HelloActivity.GreetingWorkflowImpl.class);
shadowingWorker.start();

Registered workflows are forwarded to the underlying WorkflowReplayer. The DataConverter, WorkflowInterceptorChainFactories, ContextPropagators, and Tracer specified in the worker options are also used as ReplayOptions. Since all shadow workflows run in one system domain, to avoid conflicts the actual task list name used is domain-tasklist.
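Circling back to the replayer: a common CI pattern is to dump a history once with the CLI shown earlier, check the JSON file into test resources, and replay it on every build. A minimal sketch; the domain, workflow ID, file name, and MyWorkflowImpl class are hypothetical:

// Dump a history once with the CLI (values here are hypothetical):
//   cadence --do samples-domain workflow show \
//       --wid my-workflow-id --of src/test/resources/MyWorkflow.json
// Then replay it on every build to catch incompatible changes early.
public class MyWorkflowReplayTest {
  @Test
  public void replayDumpedHistory() throws Exception {
    WorkflowReplayer.replayWorkflowExecutionFromResource(
        "MyWorkflow.json", MyWorkflowImpl.class);
  }
}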
# Activity Test Environment

TestActivityEnvironment is the helper class for unit testing activity implementations. It supports calls to Activity methods from the tested activities. An example test:

See the full example here.

public interface TestActivity {
  String activity1(String input);
}

private static class ActivityImpl implements TestActivity {
  @Override
  public String activity1(String input) {
    return Activity.getTask().getActivityType().getName() + "-" + input;
  }
}

@Test
public void testSuccess() {
  testEnvironment.registerActivitiesImplementations(new ActivityImpl());
  TestActivity activity = testEnvironment.newActivityStub(TestActivity.class);
  String result = activity.activity1("input1");
  assertEquals("TestActivity::activity1-input1", result);
}

# Workflow Test Environment

TestWorkflowEnvironment provides workflow unit-testing capabilities.

Testing workflow code is hard because a workflow can potentially run for a very long time. The included in-memory implementation of the Cadence service supports automatic time skipping. Any time the workflow under test, or the unit test code itself, is waiting on a timer (or sleep), the internal service time is automatically advanced to the nearest time that unblocks one of the waiting threads. This way, a workflow that runs in production for months is unit tested in milliseconds.
Here is an example of a test that executes in a few milliseconds instead of the two-plus hours the workflow needs to complete.

See the full example here.

public class SignaledWorkflowImpl implements SignaledWorkflow {
  private String signalInput;

  @Override
  public String workflow1(String input) {
    Workflow.sleep(Duration.ofHours(1));
    Workflow.await(() -> signalInput != null);
    Workflow.sleep(Duration.ofHours(1));
    return signalInput + "-" + input;
  }

  @Override
  public void processSignal(String input) {
    signalInput = input;
  }
}

@Test
public void testSignal() throws ExecutionException, InterruptedException {
  // Get a workflow stub using the same task list the worker uses.
  WorkflowOptions workflowOptions =
      new WorkflowOptions.Builder()
          .setTaskList(HelloSignal.TASK_LIST)
          .setExecutionStartToCloseTimeout(Duration.ofDays(30))
          .build();
  GreetingWorkflow workflow =
      workflowClient.newWorkflowStub(GreetingWorkflow.class, workflowOptions);

  // Start the workflow asynchronously so another thread isn't needed to signal it.
  WorkflowClient.start(workflow::getGreetings);

  // After start returns, the workflow is guaranteed to be started, so we can
  // send signals to it using the workflow stub immediately. Just to
  // demonstrate unit testing of a long-running workflow, add a long sleep here.
  testEnv.sleep(Duration.ofDays(1));
  // This workflow keeps receiving signals until exit is called.
  workflow.waitForName("World");
  workflow.waitForName("Universe");
  workflow.exit();
  // Calling the synchronous getGreetings after the workflow has started
  // reconnects to the existing workflow and blocks until the result is
  // available. Note that this behavior assumes that WorkflowOptions are not
  // configured with WorkflowIdReusePolicy.AllowDuplicate. In that case the
  // call would fail with WorkflowExecutionAlreadyStartedException.
  List<String> greetings = workflow.getGreetings();
  assertEquals(2, greetings.size());
  assertEquals("Hello World!", greetings.get(0));
  assertEquals("Hello Universe!", greetings.get(1));
}
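The test above assumes testEnv and workflowClient fixtures created elsewhere. A typical JUnit 4 setup and teardown, sketched after the pattern used in the samples repo (it assumes the TestWorkflowEnvironment methods newInstance, newWorker, newWorkflowClient, start, and close):

private TestWorkflowEnvironment testEnv;
private Worker worker;
private WorkflowClient workflowClient;

@Before
public void setUp() {
  // In-memory Cadence service with automatic time skipping.
  testEnv = TestWorkflowEnvironment.newInstance();
  worker = testEnv.newWorker(HelloSignal.TASK_LIST);
  worker.registerWorkflowImplementationTypes(SignaledWorkflowImpl.class);
  testEnv.start();
  workflowClient = testEnv.newWorkflowClient();
}

@After
public void tearDown() {
  testEnv.close();
}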
# Side Effect

A side effect allows a workflow to execute the provided function once and records its result into the workflow history. During replay, the recorded result from the history is returned without executing the provided function again. This guarantees the deterministic requirement for workflows, as exactly the same result is returned on replay. A common use case is running short non-deterministic code in a workflow, such as getting a random number. The only way to fail a sideEffect is for the provided function to throw, which causes a decision task failure. The decision task is rescheduled and re-executed after a timeout, giving the sideEffect another chance to succeed.

!!Caution: do not use the sideEffect function to modify any workflow state. Only use the sideEffect's return value.
For example, this code is BROKEN:

Bad example:

AtomicInteger random = new AtomicInteger();
Workflow.sideEffect(Void.class, () -> {
  random.set(new Random().nextInt(100));
  return null;
});
// random will always be 0 in replay, thus this code is non-deterministic
if (random.get() < 50) {
  ....
} else {
  ....
}

On replay the provided function is not executed, random will always be 0, and the workflow can take a different path, breaking determinism.

Here is the correct way to use sideEffect:

Good example:

int randomInt = Workflow.sideEffect(Integer.class, () -> new Random().nextInt(100));
if (randomInt < 50) {
  ....
} else {
  ....
}

If the function throws any exception, it is not delivered to the workflow code. It is wrapped in an Error, causing the current decision to fail.

# Mutable Side Effect

mutableSideEffect is similar to sideEffect in allowing calls to non-deterministic functions from workflow code. The difference is that every sideEffect call in non-replay mode results in a new marker event recorded into the history, whereas mutableSideEffect only records a new marker if the value has changed. During replay, mutableSideEffect does not execute the function again; it returns exactly the same value as it returned during the non-replay run.

A good use case for mutableSideEffect is accessing a dynamically changing config without breaking determinism. Even if it is called very frequently, the config value is recorded only when it changes, so a large history does not cause any performance degradation.

!!Caution: do not use the mutableSideEffect function to modify any workflow state. Only use the mutableSideEffect's return value.

If the function throws any exception, it is not delivered to the workflow code. It is wrapped in an Error, causing the current decision to fail.
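As a sketch of the shape of a mutableSideEffect call, assuming the overload that takes an id, a result class, an "updated" predicate, and a supplier (check the Workflow JavaDoc for the exact signature); getMaxConcurrencyConfig() is a hypothetical dynamic config lookup:

// Records a marker only when the value changes; on replay the recorded
// value is returned and the supplier is not executed.
int maxConcurrency =
    Workflow.mutableSideEffect(
        "maxConcurrency", // id distinguishing this mutable side effect
        Integer.class,
        (oldValue, newValue) -> !oldValue.equals(newValue), // has it changed?
        () -> getMaxConcurrencyConfig()); // hypothetical config read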
# Java client

The following are important links for the Cadence Java client:

 * GitHub project: https://github.com/uber/cadence-java-client
 * Samples: https://github.com/uber/cadence-java-samples
 * JavaDoc documentation: https://www.javadoc.io/doc/com.uber.cadence/cadence-client

Add cadence-client as a dependency to your pom.xml:

<dependency>
  <groupId>com.uber.cadence</groupId>
  <artifactId>cadence-client</artifactId>
  <version>LATEST.RELEASE.VERSION</version>
</dependency>

or to build.gradle:

dependencies {
    implementation group: 'com.uber.cadence', name: 'cadence-client', version: 'LATEST.RELEASE.VERSION'
}

If you are using Gradle 6.9 or older, you can use the compile configuration:

dependencies {
    compile group: 'com.uber.cadence', name: 'cadence-client', version: 'LATEST.RELEASE.VERSION'
}

Release versions are available on the release page.
workflows",permalink:"/docs/go-client/create-workflows",readingShow:"top"},regularPath:"/docs/05-go-client/02-create-workflows.html",relativePath:"docs/05-go-client/02-create-workflows.md",key:"v-861efabc",path:"/docs/go-client/create-workflows/",headers:[{level:2,title:"Overview",slug:"overview",normalizedTitle:"overview",charIndex:968},{level:2,title:"Declaration",slug:"declaration",normalizedTitle:"declaration",charIndex:1991},{level:2,title:"Implementation",slug:"implementation",normalizedTitle:"implementation",charIndex:934},{level:3,title:"Special Cadence client library functions and types",slug:"special-cadence-client-library-functions-and-types",normalizedTitle:"special cadence client library functions and types",charIndex:4738},{level:3,title:"Failing a workflow",slug:"failing-a-workflow",normalizedTitle:"failing a workflow",charIndex:5529},{level:2,title:"Registration",slug:"registration",normalizedTitle:"registration",charIndex:5664}],codeSwitcherOptions:{},headersStr:"Overview Declaration Implementation Special Cadence client library functions and types Failing a workflow Registration",content:'# Creating workflows\n\nThe is the implementation of the coordination logic. The Cadence programming framework (aka client library) allows you to write the coordination logic as simple procedural code that uses standard Go data modeling. The client library takes care of the communication between the service and the Cadence service, and ensures state persistence between even in case of failures. Furthermore, any particular execution is not tied to a particular machine. Different steps of the coordination logic can end up executing on different instances, with the framework ensuring that the necessary state is recreated on the executing the step.\n\nHowever, in order to facilitate this operational model, both the Cadence programming framework and the managed service impose some requirements and restrictions on the implementation of the coordination logic. The details of these requirements and restrictions are described in the Implementation section below.\n\n\n# Overview\n\nThe sample code below shows a simple implementation of a that executes one . The also passes the sole parameter it receives as part of its initialization as a parameter to the .\n\npackage sample\n\nimport (\n "time"\n\n "go.uber.org/cadence/workflow"\n)\n\nfunc init() {\n workflow.Register(SimpleWorkflow)\n}\n\nfunc SimpleWorkflow(ctx workflow.Context, value string) error {\n ao := workflow.ActivityOptions{\n TaskList: "sampleTaskList",\n ScheduleToCloseTimeout: time.Second * 60,\n ScheduleToStartTimeout: time.Second * 60,\n StartToCloseTimeout: time.Second * 60,\n HeartbeatTimeout: time.Second * 10,\n WaitForCancellation: false,\n }\n ctx = workflow.WithActivityOptions(ctx, ao)\n\n future := workflow.ExecuteActivity(ctx, SimpleActivity, value)\n var result string\n if err := future.Get(ctx, &result); err != nil {\n return err\n }\n workflow.GetLogger(ctx).Info("Done", zap.String("result", result))\n return nil\n}\n\n\n\n# Declaration\n\nIn the Cadence programing model, a is implemented with a function. The function declaration specifies the parameters the accepts as well as any values it might return.\n\nfunc SimpleWorkflow(ctx workflow.Context, value string) error\n\n\nLet’s deconstruct the declaration above:\n\n * The first parameter to the function is ctx workflow.Context. This is a required parameter for all functions and is used by the Cadence client library to pass execution context. 
   Virtually all the client library functions that are callable from workflow functions require this ctx parameter. This context parameter is the same concept as the standard context.Context provided by Go. The only difference between workflow.Context and context.Context is that the Done() function in workflow.Context returns workflow.Channel instead of the standard Go chan.
 * The second parameter, string, is a custom parameter that can be used to pass data into the workflow on start. A workflow can have one or more such parameters. All parameters to a workflow function must be serializable, which essentially means that params can't be channels, functions, variadic, or unsafe pointers.
 * Since the function only declares error as its return value, the workflow does not return a value. The error return value is used to indicate that an error was encountered during execution and the workflow should be terminated.

# Implementation

In order to support the synchronous and sequential programming model for the workflow implementation, there are certain restrictions and requirements on how the workflow implementation must behave in order to guarantee correctness. The requirements are that:

 * Execution must be deterministic
 * Execution must be idempotent

A straightforward way to think about these requirements is that the workflow code is restricted as follows:

 * Workflow code can only read and manipulate local state or state received as return values from Cadence client library functions.
 * Workflow code should not affect changes in external systems other than through the invocation of activities.
 * Workflow code should interact with time only through the functions provided by the Cadence client library (i.e., workflow.Now(), workflow.Sleep()).
 * Workflow code should not create and interact with goroutines directly; it should instead use the functions provided by the Cadence client library (i.e., workflow.Go() instead of go, workflow.Channel instead of chan, workflow.Selector instead of select).
 * Workflow code should do all logging via the logger provided by the Cadence client library (i.e., workflow.GetLogger()).
 * Workflow code should not iterate over maps using range, because the order of map iteration is randomized.

Now that we have laid the ground rules, we can take a look at some of the special functions and types used for writing Cadence workflows and how to implement some common patterns.

# Special Cadence client library functions and types

The Cadence client library provides a number of functions and types as alternatives to some native Go functions and types. Usage of these replacement functions/types is necessary in order to ensure that the workflow code execution is deterministic and repeatable within an execution context.

Coroutine related constructs:

 * workflow.Go : This is a replacement for the go statement.
 * workflow.Channel : This is a replacement for the native chan type. Cadence provides support for both buffered and unbuffered channels.
 * workflow.Selector : This is a replacement for the select statement.

Time related functions:

 * workflow.Now() : This is a replacement for time.Now().
 * workflow.Sleep() : This is a replacement for time.Sleep().

# Failing a workflow

To mark a workflow as failed, all that needs to happen is for the workflow function to return an error via the err return value.

# Registration

For some client code to be able to invoke a workflow type, the worker process needs to be aware of all the workflow implementations it has access to.
A is registered with the following call:\n\nworkflow.Register(SimpleWorkflow)\n\n\nThis call essentially creates an in-memory mapping inside the process between the fully qualified function name and the implementation. It is safe to call this registration method from an init() function. If the receives for a type it does not know, it will fail that . However, the failure of the will not cause the entire to fail.',normalizedContent:'# creating workflows\n\nthe is the implementation of the coordination logic. the cadence programming framework (aka client library) allows you to write the coordination logic as simple procedural code that uses standard go data modeling. the client library takes care of the communication between the service and the cadence service, and ensures state persistence between even in case of failures. furthermore, any particular execution is not tied to a particular machine. different steps of the coordination logic can end up executing on different instances, with the framework ensuring that the necessary state is recreated on the executing the step.\n\nhowever, in order to facilitate this operational model, both the cadence programming framework and the managed service impose some requirements and restrictions on the implementation of the coordination logic. the details of these requirements and restrictions are described in the implementation section below.\n\n\n# overview\n\nthe sample code below shows a simple implementation of a that executes one . the also passes the sole parameter it receives as part of its initialization as a parameter to the .\n\npackage sample\n\nimport (\n "time"\n\n "go.uber.org/cadence/workflow"\n)\n\nfunc init() {\n workflow.register(simpleworkflow)\n}\n\nfunc simpleworkflow(ctx workflow.context, value string) error {\n ao := workflow.activityoptions{\n tasklist: "sampletasklist",\n scheduletoclosetimeout: time.second * 60,\n scheduletostarttimeout: time.second * 60,\n starttoclosetimeout: time.second * 60,\n heartbeattimeout: time.second * 10,\n waitforcancellation: false,\n }\n ctx = workflow.withactivityoptions(ctx, ao)\n\n future := workflow.executeactivity(ctx, simpleactivity, value)\n var result string\n if err := future.get(ctx, &result); err != nil {\n return err\n }\n workflow.getlogger(ctx).info("done", zap.string("result", result))\n return nil\n}\n\n\n\n# declaration\n\nin the cadence programing model, a is implemented with a function. the function declaration specifies the parameters the accepts as well as any values it might return.\n\nfunc simpleworkflow(ctx workflow.context, value string) error\n\n\nlet’s deconstruct the declaration above:\n\n * the first parameter to the function is ctx workflow.context. this is a required parameter for all functions and is used by the cadence client library to pass execution context. virtually all the client library functions that are callable from the functions require this ctx parameter. this context parameter is the same concept as the standard context.context provided by go. the only difference between workflow.context and context.context is that the done() function in workflow.context returns workflow.channel instead the standard go chan.\n * the second parameter, string, is a custom parameter that can be used to pass data into the on start. a can have one or more such parameters. 
all parameters to a function must be serializable, which essentially means that params can’t be channels, functions, variadic, or unsafe pointers.\n * since it only declares error as the return value, this means that the does not return a value. the error return value is used to indicate an error was encountered during execution and the should be terminated.\n\n\n# implementation\n\nin order to support the synchronous and sequential programming model for the implementation, there are certain restrictions and requirements on how the implementation must behave in order to guarantee correctness. the requirements are that:\n\n * execution must be deterministic\n * execution must be idempotent\n\na straightforward way to think about these requirements is that the code is as follows:\n\n * code can only read and manipulate local state or state received as return values from cadence client library functions.\n * code should not affect changes in external systems other than through invocation of .\n * code should interact with time only through the functions provided by the cadence client library (i.e. workflow.now(), workflow.sleep()).\n * code should not create and interact with goroutines directly, it should instead use the functions provided by the cadence client library (i.e., workflow.go() instead of go, workflow.channel instead of chan, workflow.selector instead of select).\n * code should do all logging via the logger provided by the cadence client library (i.e., workflow.getlogger()).\n * code should not iterate over maps using range because the order of map iteration is randomized.\n\nnow that we have laid the ground rules, we can take a look at some of the special functions and types used for writing cadence and how to implement some common patterns.\n\n\n# special cadence client library functions and types\n\nthe cadence client library provides a number of functions and types as alternatives to some native go functions and types. usage of these replacement functions/types is necessary in order to ensure that the code execution is deterministic and repeatable within an execution context.\n\ncoroutine related constructs:\n\n * workflow.go : this is a replacement for the the go statement.\n * workflow.channel : this is a replacement for the native chan type. cadence provides support for both buffered and unbuffered channels.\n * workflow.selector : this is a replacement for the select statement.\n\ntime related functions:\n\n * workflow.now() : this is a replacement for time.now().\n * workflow.sleep() : this is a replacement for time.sleep().\n\n\n# failing a workflow\n\nto mark a as failed, all that needs to happen is for the function to return an error via the err return value.\n\n\n# registration\n\nfor some client code to be able to invoke a type, the process needs to be aware of all the implementations it has access to. a is registered with the following call:\n\nworkflow.register(simpleworkflow)\n\n\nthis call essentially creates an in-memory mapping inside the process between the fully qualified function name and the implementation. it is safe to call this registration method from an init() function. if the receives for a type it does not know, it will fail that . 
however, the failure of the will not cause the entire to fail.',charsets:{}},{title:"Worker service",frontmatter:{layout:"default",title:"Worker service",permalink:"/docs/go-client/workers",readingShow:"top"},regularPath:"/docs/05-go-client/01-workers.html",relativePath:"docs/05-go-client/01-workers.md",key:"v-e5936714",path:"/docs/go-client/workers/",codeSwitcherOptions:{},headersStr:null,content:'# Worker service\n\nA or service is a service that hosts the and implementations. The polls the Cadence service for , performs those , and communicates execution results back to the Cadence service. services are developed, deployed, and operated by Cadence customers.\n\nYou can run a Cadence in a new or an existing service. Use the framework APIs to start the Cadence and link in all and implementations that you require the service to execute.\n\nThe following is an example worker service utilising tchannel, one of the two transport protocols supported by Cadence.\n\npackage main\n\nimport (\n\n "go.uber.org/cadence/.gen/go/cadence"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/worker"\n\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/api/transport"\n "go.uber.org/yarpc/transport/tchannel"\n)\n\nvar HostPort = "127.0.0.1:7933"\nvar Domain = "SimpleDomain"\nvar TaskListName = "SimpleWorker"\nvar ClientName = "SimpleWorker"\nvar CadenceService = "cadence-frontend"\n\nfunc main() {\n startWorker(buildLogger(), buildCadenceClient())\n}\n\nfunc buildLogger() *zap.Logger {\n config := zap.NewDevelopmentConfig()\n config.Level.SetLevel(zapcore.InfoLevel)\n\n var err error\n logger, err := config.Build()\n if err != nil {\n panic("Failed to setup logger")\n }\n\n return logger\n}\n\nfunc buildCadenceClient() workflowserviceclient.Interface {\n ch, err := tchannel.NewChannelTransport(tchannel.ServiceName(ClientName))\n if err != nil {\n panic("Failed to setup tchannel")\n }\n dispatcher := yarpc.NewDispatcher(yarpc.Config{\n Name: ClientName,\n Outbounds: yarpc.Outbounds{\n CadenceService: {Unary: ch.NewSingleOutbound(HostPort)},\n },\n })\n if err := dispatcher.Start(); err != nil {\n panic("Failed to start dispatcher")\n }\n\n return workflowserviceclient.New(dispatcher.ClientConfig(CadenceService))\n}\n\nfunc startWorker(logger *zap.Logger, service workflowserviceclient.Interface) {\n // TaskListName identifies set of client workflows, activities, and workers.\n // It could be your group or client or application name.\n workerOptions := worker.Options{\n Logger: logger,\n MetricsScope: tally.NewTestScope(TaskListName, map[string]string{}),\n }\n\n worker := worker.New(\n service,\n Domain,\n TaskListName,\n workerOptions)\n err := worker.Start()\n if err != nil {\n panic("Failed to start worker")\n }\n\n logger.Info("Started Worker.", zap.String("worker", TaskListName))\n}\n\n\nThe other supported transport protocol is gRPC. 
A worker service using gRPC can be set up in a similar fashion, but the buildCadenceClient function needs the following alterations, and some of the imported packages need to change.\n\n\nimport (\n\n    "go.uber.org/cadence/.gen/go/cadence"\n    "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n    "go.uber.org/cadence/compatibility"\n    "go.uber.org/cadence/worker"\n\n    apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n    "github.com/uber-go/tally"\n    "go.uber.org/zap"\n    "go.uber.org/zap/zapcore"\n    "go.uber.org/yarpc"\n    "go.uber.org/yarpc/transport/grpc"\n)\n\n.\n.\n.\n\nfunc buildCadenceClient() workflowserviceclient.Interface {\n  dispatcher := yarpc.NewDispatcher(yarpc.Config{\n    Name: ClientName,\n    Outbounds: yarpc.Outbounds{\n      CadenceService: {Unary: grpc.NewTransport().NewSingleOutbound(HostPort)},\n    },\n  })\n  if err := dispatcher.Start(); err != nil {\n    panic("Failed to start dispatcher")\n  }\n\n  clientConfig := dispatcher.ClientConfig(CadenceService)\n\n  // The gRPC API is proto based, so the Thrift-style client is wrapped in a compatibility adapter.\n  return compatibility.NewThrift2ProtoAdapter(\n    apiv1.NewDomainAPIYARPCClient(clientConfig),\n    apiv1.NewWorkflowAPIYARPCClient(clientConfig),\n    apiv1.NewWorkerAPIYARPCClient(clientConfig),\n    apiv1.NewVisibilityAPIYARPCClient(clientConfig),\n  )\n}\n\n\nNote also that the HostPort variable must be changed to target the gRPC listener port of the Cadence cluster (typically 7833).\n\nFinally, gRPC also supports TLS connections between Go clients and the Cadence server. This requires the following alterations to the imported packages and the buildCadenceClient function. Note that you must also replace "/path/to/cert/file" in the function with the path to a valid certificate file matching the TLS configuration of the Cadence server.\n\n\nimport (\n\n    "fmt"\n\n    "go.uber.org/cadence/.gen/go/cadence"\n    "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n    "go.uber.org/cadence/compatibility"\n    "go.uber.org/cadence/worker"\n\n    apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n    "github.com/uber-go/tally"\n    "go.uber.org/zap"\n    "go.uber.org/zap/zapcore"\n    "go.uber.org/yarpc"\n    "go.uber.org/yarpc/transport/grpc"\n    "go.uber.org/yarpc/peer"\n    "go.uber.org/yarpc/peer/hostport"\n\n    "crypto/tls"\n    "crypto/x509"\n    "io/ioutil"\n\n    "google.golang.org/grpc/credentials"\n)\n\n.\n.\n.\n\nfunc buildCadenceClient() workflowserviceclient.Interface {\n  grpcTransport := grpc.NewTransport()\n  var dialOptions []grpc.DialOption\n\n  // Load the server CA certificate and fail fast if it cannot be read.\n  caCert, err := ioutil.ReadFile("/path/to/cert/file")\n  if err != nil {\n    panic(fmt.Sprintf("Failed to load server CA certificate: %v", err))\n  }\n\n  caCertPool := x509.NewCertPool()\n  if !caCertPool.AppendCertsFromPEM(caCert) {\n    panic("Failed to add server CA\'s certificate")\n  }\n\n  tlsConfig := tls.Config{\n    RootCAs: caCertPool,\n  }\n\n  // Attach the TLS credentials to the gRPC dialer.\n  creds := credentials.NewTLS(&tlsConfig)\n  dialOptions = append(dialOptions, grpc.DialerCredentials(creds))\n\n  dialer := grpcTransport.NewDialer(dialOptions...)\n  outbound := grpcTransport.NewOutbound(\n    peer.NewSingle(hostport.PeerIdentifier(HostPort), dialer),\n  )\n\n  dispatcher := yarpc.NewDispatcher(yarpc.Config{\n    Name: ClientName,\n    Outbounds: yarpc.Outbounds{\n      CadenceService: {Unary: outbound},\n    },\n  })\n  if err := dispatcher.Start(); err != nil {\n    panic("Failed to start dispatcher")\n  }\n\n  clientConfig := dispatcher.ClientConfig(CadenceService)\n\n  return compatibility.NewThrift2ProtoAdapter(\n    apiv1.NewDomainAPIYARPCClient(clientConfig),\n    apiv1.NewWorkflowAPIYARPCClient(clientConfig),\n    apiv1.NewWorkerAPIYARPCClient(clientConfig),\n 
apiv1.NewVisibilityAPIYARPCClient(clientConfig),\n )\n}\n',normalizedContent:'# worker service\n\na or service is a service that hosts the and implementations. the polls the cadence service for , performs those , and communicates execution results back to the cadence service. services are developed, deployed, and operated by cadence customers.\n\nyou can run a cadence in a new or an existing service. use the framework apis to start the cadence and link in all and implementations that you require the service to execute.\n\nthe following is an example worker service utilising tchannel, one of the two transport protocols supported by cadence.\n\npackage main\n\nimport (\n\n "go.uber.org/cadence/.gen/go/cadence"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/worker"\n\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/api/transport"\n "go.uber.org/yarpc/transport/tchannel"\n)\n\nvar hostport = "127.0.0.1:7933"\nvar domain = "simpledomain"\nvar tasklistname = "simpleworker"\nvar clientname = "simpleworker"\nvar cadenceservice = "cadence-frontend"\n\nfunc main() {\n startworker(buildlogger(), buildcadenceclient())\n}\n\nfunc buildlogger() *zap.logger {\n config := zap.newdevelopmentconfig()\n config.level.setlevel(zapcore.infolevel)\n\n var err error\n logger, err := config.build()\n if err != nil {\n panic("failed to setup logger")\n }\n\n return logger\n}\n\nfunc buildcadenceclient() workflowserviceclient.interface {\n ch, err := tchannel.newchanneltransport(tchannel.servicename(clientname))\n if err != nil {\n panic("failed to setup tchannel")\n }\n dispatcher := yarpc.newdispatcher(yarpc.config{\n name: clientname,\n outbounds: yarpc.outbounds{\n cadenceservice: {unary: ch.newsingleoutbound(hostport)},\n },\n })\n if err := dispatcher.start(); err != nil {\n panic("failed to start dispatcher")\n }\n\n return workflowserviceclient.new(dispatcher.clientconfig(cadenceservice))\n}\n\nfunc startworker(logger *zap.logger, service workflowserviceclient.interface) {\n // tasklistname identifies set of client workflows, activities, and workers.\n // it could be your group or client or application name.\n workeroptions := worker.options{\n logger: logger,\n metricsscope: tally.newtestscope(tasklistname, map[string]string{}),\n }\n\n worker := worker.new(\n service,\n domain,\n tasklistname,\n workeroptions)\n err := worker.start()\n if err != nil {\n panic("failed to start worker")\n }\n\n logger.info("started worker.", zap.string("worker", tasklistname))\n}\n\n\nthe other supported transport protocol is grpc. 
a worker service using grpc can be set up in similar fashion, but the buildcadenceclient function will need the following alterations, and some of the imported packages need to change.\n\n\nimport (\n\n "go.uber.org/cadence/.gen/go/cadence"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n)\n\n.\n.\n.\n\nfunc buildcadenceclient() workflowserviceclient.interface {\n\n dispatcher := yarpc.newdispatcher(yarpc.config{\n name: clientname,\n outbounds: yarpc.outbounds{\n cadenceservice: {unary: grpc.newtransport().newsingleoutbound(hostport)},\n },\n })\n if err := dispatcher.start(); err != nil {\n panic("failed to start dispatcher")\n }\n\n clientconfig := dispatcher.clientconfig(cadenceservice)\n\n return compatibility.newthrift2protoadapter(\n apiv1.newdomainapiyarpcclient(clientconfig),\n apiv1.newworkflowapiyarpcclient(clientconfig),\n apiv1.newworkerapiyarpcclient(clientconfig),\n apiv1.newvisibilityapiyarpcclient(clientconfig),\n )\n}\n\n\nnote also that the hostport variable must be changed to target the grpc listener port of the cadence cluster (typically, 7833).\n\nfinally, grpc can also support tls connections between go clients and the cadence server. this requires the following alterations to the imported packages, and the buildcadenceclient function. note that this also requires you replace "path/to/cert/file" in the function with a path to a valid certificate file matching the tls configuration of the cadence server.\n\n\nimport (\n\n "fmt"\n\n "go.uber.org/cadence/.gen/go/cadence"\n "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"\n "go.uber.org/cadence/compatibility"\n "go.uber.org/cadence/worker"\n\n apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"\n "github.com/uber-go/tally"\n "go.uber.org/zap"\n "go.uber.org/zap/zapcore"\n "go.uber.org/yarpc"\n "go.uber.org/yarpc/transport/grpc"\n "go.uber.org/yarpc/peer"\n "go.uber.org/yarpc/peer/hostport"\n\n "crypto/tls"\n "crypto/x509"\n "io/ioutil"\n\n "google.golang.org/grpc/credentials"\n)\n\n.\n.\n.\n\nfunc buildcadenceclient() workflowserviceclient.interface {\n grpctransport := grpc.newtransport()\n var dialoptions []grpc.dialoption\n \n cacert, err := ioutil.readfile("/path/to/cert/file")\n if err != nil {\n fmt.printf("failed to load server ca certificate: %v", zap.error(err))\n }\n \n cacertpool := x509.newcertpool()\n if !cacertpool.appendcertsfrompem(cacert) {\n fmt.errorf("failed to add server ca\'s certificate")\n }\n \n tlsconfig := tls.config{\n rootcas: cacertpool,\n }\n \n creds := credentials.newtls(&tlsconfig)\n dialoptions = append(dialoptions, grpc.dialercredentials(creds))\n \n dialer := grpctransport.newdialer(dialoptions...)\n outbound := grpctransport.newoutbound(\n peer.newsingle(hostport.peeridentifier(hostport), dialer)\n )\n \n dispatcher := yarpc.newdispatcher(yarpc.config{\n name: clientname,\n outbounds: yarpc.outbounds{\n cadenceservice: {unary: outbound},\n },\n })\n if err := dispatcher.start(); err != nil {\n panic("failed to start dispatcher")\n }\n \n clientconfig := dispatcher.clientconfig(cadenceservice)\n \n return compatibility.newthrift2protoadapter(\n apiv1.newdomainapiyarpcclient(clientconfig),\n apiv1.newworkflowapiyarpcclient(clientconfig),\n apiv1.newworkerapiyarpcclient(clientconfig),\n 
apiv1.newvisibilityapiyarpcclient(clientconfig),\n )\n}\n',charsets:{}},{title:"Activity overview",frontmatter:{layout:"default",title:"Activity overview",permalink:"/docs/go-client/activities",readingShow:"top"},regularPath:"/docs/05-go-client/03-activities.html",relativePath:"docs/05-go-client/03-activities.md",key:"v-43760982",path:"/docs/go-client/activities/",headers:[{level:2,title:"Overview",slug:"overview",normalizedTitle:"overview",charIndex:1160},{level:3,title:"Declaration",slug:"declaration",normalizedTitle:"declaration",charIndex:1849},{level:3,title:"Implementation",slug:"implementation",normalizedTitle:"implementation",charIndex:2975},{level:3,title:"Registration",slug:"registration",normalizedTitle:"registration",charIndex:5198},{level:2,title:"Failing an Activity",slug:"failing-an-activity",normalizedTitle:"failing an activity",charIndex:5603}],codeSwitcherOptions:{},headersStr:"Overview Declaration Implementation Registration Failing an Activity",content:'# Activity overview\n\nAn is the implementation of a particular in the business logic.\n\nare implemented as functions. Data can be passed directly to an via function parameters. The parameters can be either basic types or structs, with the only requirement being that the parameters must be serializable. Though it is not required, we recommend that the first parameter of an function is of type context.Context, in order to allow the to interact with other framework methods. The function must return an error value, and can optionally return a result value. The result value can be either a basic type or a struct with the only requirement being that it is serializable.\n\nThe values passed to through invocation parameters or returned through the result value are recorded in the execution history. The entire execution history is transferred from the Cadence service to with every that the logic needs to process. A large execution history can thus adversely impact the performance of your . Therefore, be mindful of the amount of data you transfer via invocation parameters or return values. Otherwise, no additional limitations exist on implementations.\n\n\n# Overview\n\nThe following example demonstrates a simple that accepts a string parameter, appends a word to it, and then returns a result.\n\npackage simple\n\nimport (\n "context"\n\n "go.uber.org/cadence/activity"\n "go.uber.org/zap"\n)\n\nfunc init() {\n activity.Register(SimpleActivity)\n}\n\n// SimpleActivity is a sample Cadence activity function that takes one parameter and\n// returns a string containing the parameter value.\nfunc SimpleActivity(ctx context.Context, value string) (string, error) {\n activity.GetLogger(ctx).Info("SimpleActivity called.", zap.String("Value", value))\n return "Processed: " + value, nil\n}\n\n\nLet\'s take a look at each component of this activity.\n\n\n# Declaration\n\nIn the Cadence programing model, an is implemented with a function. The function declaration specifies the parameters the accepts as well as any values it might return. An function can take zero or many specific parameters and can return one or two values. It must always at least return an error value. The function can accept as parameters and return as results any serializable type.\n\nfunc SimpleActivity(ctx context.Context, value string) (string, error)\n\nThe first parameter to the function is context.Context. This is an optional parameter and can be omitted. This parameter is the standard Go context. 
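For illustration, here is a minimal sketch of how an activity function can use this context; ProcessValue is a hypothetical activity, not part of the samples on this page:\n\n// Illustrative sketch only: the ctx parameter exposes activity metadata,\n// the activity logger, and cancellation.\nfunc ProcessValue(ctx context.Context, value string) (string, error) {\n info := activity.GetInfo(ctx) // metadata about the current activity task\n activity.GetLogger(ctx).Info("Processing", zap.String("workflowID", info.WorkflowExecution.ID))\n select {\n case <-ctx.Done():\n // the activity was cancelled or timed out\n return "", ctx.Err()\n default:\n }\n return "Processed: " + value, nil\n}\n\n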
The second string parameter is a custom specific parameter that can be used to pass data into the on start. An can have one or more such parameters. All parameters to an function must be serializable, which essentially means that params can’t be channels, functions, variadic, or unsafe pointers. The declares two return values: string and error. The string return value is used to return the result of the . The error return value is used to indicate that an error was encountered during execution.\n\n\n# Implementation\n\nYou can write implementation code in the same way that you would any other Go service code. Additionally, you can use the usual loggers and metrics controllers, and the standard Go concurrency constructs.\n\n# Heart Beating\n\nFor long-running , Cadence provides an API for the code to report both liveness and progress back to the Cadence managed service.\n\nprogress := 0\nfor hasWork {\n // Send heartbeat message to the server.\n cadence.RecordActivityHeartbeat(ctx, progress)\n // Do some work.\n ...\n progress++\n}\n\n\nWhen an times out due to a missed heartbeat, the last value of the details (progress in the above sample) is returned from the cadence.ExecuteActivity function as the details field of TimeoutError with TimeoutType_HEARTBEAT.\n\nNew auto heartbeat option in Cadence Go Client 0.17.0 release: In case you don\'t need to report progress, but still want to report liveness of your worker through heartbeating for your long running activities, there\'s a new auto-heartbeat option that you can enable when you register your activity. When this option is enabled Cadence library will do the heartbeat for you in the background.\n\n\tRegisterActivityOptions struct {\n\t\t...\n\t\t// Automatically send heartbeats for this activity at an interval that is less than the HeartbeatTimeout.\n\t\t// This option has no effect if the activity is executed with a HeartbeatTimeout of 0.\n\t\t// Default: false\n\t\tEnableAutoHeartbeat bool\n\t}\n\n\nYou can also heartbeat an from an external source:\n\n// Instantiate a Cadence service client.\ncadence.Client client = cadence.NewClient(...)\n\n// Record heartbeat.\nerr := client.RecordActivityHeartbeat(taskToken, details)\n\n\nThe parameters of the RecordActivityHeartbeat function are:\n\n * taskToken: The value of the binary TaskToken field of the ActivityInfo struct retrieved inside the .\n * details: The serializable payload containing progress information.\n\n# Cancellation\n\nWhen an is cancelled, or its has completed or failed, the context passed into its function is cancelled, which sets its channel’s closed state to Done. An can use that to perform any necessary cleanup and abort its execution. Cancellation is only delivered to that call RecordActivityHeartbeat.\n\n\n# Registration\n\nTo make the visible to the process hosting it, the must be registered via a call to activity.Register.\n\nfunc init() {\n activity.Register(SimpleActivity)\n}\n\n\nThis call creates an in-memory mapping inside the process between the fully qualified function name and the implementation. If a receives a request to start an execution for an type it does not know, it will fail that request.\n\n\n# Failing an Activity\n\nTo mark an as failed, the function must return an error via the error return value.',normalizedContent:'# activity overview\n\nan is the implementation of a particular in the business logic.\n\nare implemented as functions. data can be passed directly to an via function parameters. 
the parameters can be either basic types or structs, with the only requirement being that the parameters must be serializable. though it is not required, we recommend that the first parameter of an function is of type context.context, in order to allow the to interact with other framework methods. the function must return an error value, and can optionally return a result value. the result value can be either a basic type or a struct with the only requirement being that it is serializable.\n\nthe values passed to through invocation parameters or returned through the result value are recorded in the execution history. the entire execution history is transferred from the cadence service to with every that the logic needs to process. a large execution history can thus adversely impact the performance of your . therefore, be mindful of the amount of data you transfer via invocation parameters or return values. otherwise, no additional limitations exist on implementations.\n\n\n# overview\n\nthe following example demonstrates a simple that accepts a string parameter, appends a word to it, and then returns a result.\n\npackage simple\n\nimport (\n "context"\n\n "go.uber.org/cadence/activity"\n "go.uber.org/zap"\n)\n\nfunc init() {\n activity.register(simpleactivity)\n}\n\n// simpleactivity is a sample cadence activity function that takes one parameter and\n// returns a string containing the parameter value.\nfunc simpleactivity(ctx context.context, value string) (string, error) {\n activity.getlogger(ctx).info("simpleactivity called.", zap.string("value", value))\n return "processed: " + value, nil\n}\n\n\nlet\'s take a look at each component of this activity.\n\n\n# declaration\n\nin the cadence programing model, an is implemented with a function. the function declaration specifies the parameters the accepts as well as any values it might return. an function can take zero or many specific parameters and can return one or two values. it must always at least return an error value. the function can accept as parameters and return as results any serializable type.\n\nfunc simpleactivity(ctx context.context, value string) (string, error)\n\nthe first parameter to the function is context.context. this is an optional parameter and can be omitted. this parameter is the standard go context. the second string parameter is a custom specific parameter that can be used to pass data into the on start. an can have one or more such parameters. all parameters to an function must be serializable, which essentially means that params can’t be channels, functions, variadic, or unsafe pointers. the declares two return values: string and error. the string return value is used to return the result of the . the error return value is used to indicate that an error was encountered during execution.\n\n\n# implementation\n\nyou can write implementation code in the same way that you would any other go service code. 
additionally, you can use the usual loggers and metrics controllers, and the standard go concurrency constructs.\n\n# heart beating\n\nfor long-running , cadence provides an api for the code to report both liveness and progress back to the cadence managed service.\n\nprogress := 0\nfor haswork {\n // send heartbeat message to the server.\n cadence.recordactivityheartbeat(ctx, progress)\n // do some work.\n ...\n progress++\n}\n\n\nwhen an times out due to a missed heartbeat, the last value of the details (progress in the above sample) is returned from the cadence.executeactivity function as the details field of timeouterror with timeouttype_heartbeat.\n\nnew auto heartbeat option in cadence go client 0.17.0 release: in case you don\'t need to report progress, but still want to report liveness of your worker through heartbeating for your long running activities, there\'s a new auto-heartbeat option that you can enable when you register your activity. when this option is enabled cadence library will do the heartbeat for you in the background.\n\n\tregisteractivityoptions struct {\n\t\t...\n\t\t// automatically send heartbeats for this activity at an interval that is less than the heartbeattimeout.\n\t\t// this option has no effect if the activity is executed with a heartbeattimeout of 0.\n\t\t// default: false\n\t\tenableautoheartbeat bool\n\t}\n\n\nyou can also heartbeat an from an external source:\n\n// instantiate a cadence service client.\ncadence.client client = cadence.newclient(...)\n\n// record heartbeat.\nerr := client.recordactivityheartbeat(tasktoken, details)\n\n\nthe parameters of the recordactivityheartbeat function are:\n\n * tasktoken: the value of the binary tasktoken field of the activityinfo struct retrieved inside the .\n * details: the serializable payload containing progress information.\n\n# cancellation\n\nwhen an is cancelled, or its has completed or failed, the context passed into its function is cancelled, which sets its channel’s closed state to done. an can use that to perform any necessary cleanup and abort its execution. cancellation is only delivered to that call recordactivityheartbeat.\n\n\n# registration\n\nto make the visible to the process hosting it, the must be registered via a call to activity.register.\n\nfunc init() {\n activity.register(simpleactivity)\n}\n\n\nthis call creates an in-memory mapping inside the process between the fully qualified function name and the implementation. 
if a receives a request to start an execution for an type it does not know, it will fail that request.\n\n\n# failing an activity\n\nto mark an as failed, the function must return an error via the error return value.',charsets:{}},{title:"Executing activities",frontmatter:{layout:"default",title:"Executing activities",permalink:"/docs/go-client/execute-activity",readingShow:"top"},regularPath:"/docs/05-go-client/04-execute-activity.html",relativePath:"docs/05-go-client/04-execute-activity.md",key:"v-caeda73c",path:"/docs/go-client/execute-activity/",headers:[{level:2,title:"Activity options",slug:"activity-options",normalizedTitle:"activity options",charIndex:796},{level:2,title:"Activity timeouts",slug:"activity-timeouts",normalizedTitle:"activity timeouts",charIndex:1282},{level:2,title:"ExecuteActivity call",slug:"executeactivity-call",normalizedTitle:"executeactivity call",charIndex:2346}],codeSwitcherOptions:{},headersStr:"Activity options Activity timeouts ExecuteActivity call",content:'# Executing activities\n\nThe primary responsibility of a implementation is to schedule for execution. The most straightforward way to do this is via the library method workflow.ExecuteActivity. The following sample code demonstrates making this call:\n\nao := cadence.ActivityOptions{\n TaskList: "sampleTaskList",\n ScheduleToCloseTimeout: time.Second * 60,\n ScheduleToStartTimeout: time.Second * 60,\n StartToCloseTimeout: time.Second * 60,\n HeartbeatTimeout: time.Second * 10,\n WaitForCancellation: false,\n}\nctx = cadence.WithActivityOptions(ctx, ao)\n\nfuture := workflow.ExecuteActivity(ctx, SimpleActivity, value)\nvar result string\nif err := future.Get(ctx, &result); err != nil {\n return err\n}\n\n\nLet\'s take a look at each component of this call.\n\n\n# Activity options\n\nBefore calling workflow.ExecuteActivity(), you must configure ActivityOptions for the invocation. These options customize various execution timeouts, and are passed in by creating a child context from the initial context and overwriting the desired values. The child context is then passed into the workflow.ExecuteActivity() call. If multiple are sharing the same option values, then the same context instance can be used when calling workflow.ExecuteActivity().\n\n\n# Activity timeouts\n\nThere can be various kinds of timeouts associated with an . Cadence guarantees that are executed at most once, so an either succeeds or fails with one of the following timeouts:\n\nTIMEOUT DESCRIPTION\nStartToCloseTimeout Maximum time that a worker can take to process a task after\n it has received the task.\nScheduleToStartTimeout Time a task can wait to be picked up by an after a schedules\n it. If there are no workers available to process this task\n for the specified duration, the task will time out.\nScheduleToCloseTimeout Time a task can take to complete after it is scheduled by a\n . This is usually greater than the sum of StartToClose and\n ScheduleToStart timeouts.\nHeartbeatTimeout If a task doesn\'t heartbeat to the Cadence service for this\n duration, it will be considered to have failed. This is\n useful for long-running tasks.\n\n\n# ExecuteActivity call\n\nThe first parameter in the call is the required cadence.Context object. This type is a copy of context.Context with the Done() method returning cadence.Channel instead of the native Go chan.\n\nThe second parameter is the function that we registered as an function. This parameter can also be a string representing the fully qualified name of the function. 
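For example, both of the following calls schedule the same activity (a sketch; the exact name string depends on how the activity was registered and is shown here as an assumption):\n\nfuture := workflow.ExecuteActivity(ctx, SimpleActivity, value) // by function reference\n\nfuture = workflow.ExecuteActivity(ctx, "SimpleActivity", value) // by registered name\n\n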
The benefit of passing in the actual function object is that the framework can validate parameters.\n\nThe remaining parameters are passed to the as part of the call. In our example, we have a single parameter: value. This list of parameters must match the list of parameters declared by the function. The Cadence client library will validate this.\n\nThe method call returns immediately and returns a cadence.Future. This allows you to execute more code without having to wait for the scheduled to complete.\n\nWhen you are ready to process the results of the , call the Get() method on the future object returned. The parameters to this method are the ctx object we passed to the workflow.ExecuteActivity() call and an output parameter that will receive the output of the . The type of the output parameter must match the type of the return value declared by the function. The Get() method will block until the completes and results are available.\n\nYou can retrieve the result value returned by workflow.ExecuteActivity() from the future and use it like any normal result from a synchronous function call. The following sample code demonstrates how you can use the result if it is a string value:\n\nvar result string\nif err := future.Get(ctx1, &result); err != nil {\n return err\n}\n\nswitch result {\ncase "apple":\n // Do something.\ncase "banana":\n // Do something.\ndefault:\n return err\n}\n\n\nIn this example, we called the Get() method on the returned future immediately after workflow.ExecuteActivity(). However, this is not necessary. If you want to execute multiple in parallel, you can repeatedly call workflow.ExecuteActivity(), store the returned futures, and then wait for all to complete by calling the Get() methods of the future at a later time.\n\nTo implement more complex wait conditions on returned future objects, use the cadence.Selector class.',normalizedContent:'# executing activities\n\nthe primary responsibility of a implementation is to schedule for execution. the most straightforward way to do this is via the library method workflow.executeactivity. the following sample code demonstrates making this call:\n\nao := cadence.activityoptions{\n tasklist: "sampletasklist",\n scheduletoclosetimeout: time.second * 60,\n scheduletostarttimeout: time.second * 60,\n starttoclosetimeout: time.second * 60,\n heartbeattimeout: time.second * 10,\n waitforcancellation: false,\n}\nctx = cadence.withactivityoptions(ctx, ao)\n\nfuture := workflow.executeactivity(ctx, simpleactivity, value)\nvar result string\nif err := future.get(ctx, &result); err != nil {\n return err\n}\n\n\nlet\'s take a look at each component of this call.\n\n\n# activity options\n\nbefore calling workflow.executeactivity(), you must configure activityoptions for the invocation. these options customize various execution timeouts, and are passed in by creating a child context from the initial context and overwriting the desired values. the child context is then passed into the workflow.executeactivity() call. if multiple are sharing the same option values, then the same context instance can be used when calling workflow.executeactivity().\n\n\n# activity timeouts\n\nthere can be various kinds of timeouts associated with an . 
cadence guarantees that are executed at most once, so an either succeeds or fails with one of the following timeouts:\n\ntimeout description\nstarttoclosetimeout maximum time that a worker can take to process a task after\n it has received the task.\nscheduletostarttimeout time a task can wait to be picked up by an after a schedules\n it. if there are no workers available to process this task\n for the specified duration, the task will time out.\nscheduletoclosetimeout time a task can take to complete after it is scheduled by a\n . this is usually greater than the sum of starttoclose and\n scheduletostart timeouts.\nheartbeattimeout if a task doesn\'t heartbeat to the cadence service for this\n duration, it will be considered to have failed. this is\n useful for long-running tasks.\n\n\n# executeactivity call\n\nthe first parameter in the call is the required cadence.context object. this type is a copy of context.context with the done() method returning cadence.channel instead of the native go chan.\n\nthe second parameter is the function that we registered as an function. this parameter can also be a string representing the fully qualified name of the function. the benefit of passing in the actual function object is that the framework can validate parameters.\n\nthe remaining parameters are passed to the as part of the call. in our example, we have a single parameter: value. this list of parameters must match the list of parameters declared by the function. the cadence client library will validate this.\n\nthe method call returns immediately and returns a cadence.future. this allows you to execute more code without having to wait for the scheduled to complete.\n\nwhen you are ready to process the results of the , call the get() method on the future object returned. the parameters to this method are the ctx object we passed to the workflow.executeactivity() call and an output parameter that will receive the output of the . the type of the output parameter must match the type of the return value declared by the function. the get() method will block until the completes and results are available.\n\nyou can retrieve the result value returned by workflow.executeactivity() from the future and use it like any normal result from a synchronous function call. the following sample code demonstrates how you can use the result if it is a string value:\n\nvar result string\nif err := future.get(ctx1, &result); err != nil {\n return err\n}\n\nswitch result {\ncase "apple":\n // do something.\ncase "banana":\n // do something.\ndefault:\n return err\n}\n\n\nin this example, we called the get() method on the returned future immediately after workflow.executeactivity(). however, this is not necessary. 
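One such pattern, executing several activities in parallel, can be sketched as follows (illustrative only; the items slice is assumed):\n\n// Schedule all activities first so they run in parallel, then collect results.\nvar futures []workflow.Future\nfor _, item := range items {\n futures = append(futures, workflow.ExecuteActivity(ctx, SimpleActivity, item))\n}\nfor _, f := range futures {\n var result string\n if err := f.Get(ctx, &result); err != nil {\n return err\n }\n}\n\n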
if you want to execute multiple in parallel, you can repeatedly call workflow.executeactivity(), store the returned futures, and then wait for all to complete by calling the get() methods of the futures at a later time.\n\nto implement more complex wait conditions on returned future objects, use the cadence.selector class.',charsets:{}},{title:"Starting workflows",frontmatter:{layout:"default",title:"Starting workflows",permalink:"/docs/go-client/start-workflows",readingShow:"top"},regularPath:"/docs/05-go-client/02.5-starting-workflows.html",relativePath:"docs/05-go-client/02.5-starting-workflows.md",key:"v-76c4aa02",path:"/docs/go-client/start-workflows/",headers:[{level:2,title:"Starting a workflow",slug:"starting-a-workflow",normalizedTitle:"starting a workflow",charIndex:408},{level:2,title:"Jitter Start and Batches of Workflows",slug:"jitter-start-and-batches-of-workflows",normalizedTitle:"jitter start and batches of workflows",charIndex:1321},{level:2,title:"StartWorkflowOptions",slug:"startworkflowoptions",normalizedTitle:"startworkflowoptions",charIndex:791}],codeSwitcherOptions:{},headersStr:"Starting a workflow Jitter Start and Batches of Workflows StartWorkflowOptions",content:'# Starting workflows\n\nStarting workflows can be done from any service that can send requests to the Cadence server. There is no requirement for workflows to be started from the worker services.\n\nGenerally, workflows can either be started using a direct reference to the workflow code or by referring to the registered name of the function. In Workflow Registration we show how to register the workflows.\n\n\n# Starting a workflow\n\nAfter creating a workflow, we can start it. This can be done from the CLI, but typically we want to start workflows programmatically, e.g. from an HTTP handler. We can do this using the client.StartWorkflow function:\n\nimport "go.uber.org/cadence/client"\n\nvar cadenceClient client.Client \n// Initialize cadenceClient\n\ncadenceClient.StartWorkflow(\n    ctx,\n    client.StartWorkflowOptions{\n        TaskList:                     "workflow-task-list",\n        ExecutionStartToCloseTimeout: 10 * time.Second,\n    },\n    WorkflowFunc,\n    workflowArg1,\n    workflowArg2,\n    workflowArg3,\n    ...\n)\n\n\nThis call will start the workflow defined by the function WorkflowFunc. Note that for named workflows, WorkflowFunc can be replaced by its registered name, e.g. "WorkflowFuncName".\n\nworkflowArg1, workflowArg2, workflowArg3 are arguments to the workflow, as specified by WorkflowFunc. Note that the arguments need to be serializable.\n\n\n# Jitter Start and Batches of Workflows\n\nBelow we list all the StartWorkflowOptions; a particularly useful option is JitterStart.\n\nStarting many workflows at the same time will have Cadence trying to schedule all of them immediately. This can result in overloading Cadence and the database backing it, as well as the workers processing the workflows.\n\nThis is especially bad when workflow starts come in batches, such as an end-of-month load. These sudden loads can lead to both Cadence and the workers needing to scale up immediately. Scaling up often takes some time, causing queues in Cadence and delaying the execution of all workflows, potentially causing workflows to time out.\n\nTo solve this we can start our workflows with JitterStart. JitterStart will start the workflow at a random point between now and now + JitterStart, so if we, for example, 
start 1000 workflows at 12:00 AM with a JitterStart of 6 hours, the workflows will be randomly started between 12:00 AM and 6:00 PM.\n\nThis makes the sudden load of 1000 workflows much more manageable.\n\nFor many batch-like workloads a random delay is completely acceptable as the batch just needs to be processed e.g. before the end of the day.\n\nAdding a JitterStart of 6 hours in the example above is as simple as adding\n\nJitterStart: 6 * time.Hour,\n\n\nto the options like so,\n\nimport "go.uber.org/cadence/client"\n\nvar cadenceClient client.Client\n# Initialize cadenceClient\n\ncadenceClient.StartWorkflow(\n ctx,\n client.StartWorkflowOptions{\n TaskList: "workflow-task-list",\n ExecutionStartToCloseTimeout: 10 * time.Second,\n JitterStart: 6 * time.Hour, // Added JitterStart\n },\n WorkflowFunc,\n workflowArg1,\n workflowArg2,\n workflowArg3,\n ...\n)\n\n\nnow the workflow will start at a random point between now and six hours from now.\n\n\n# StartWorkflowOptions\n\nThe client.StartWorkflowOptions specifies the behavior of this particular workflow. The invocation above only specifies the two mandatory options; TaskList and ExecutionStartToCloseTimeout, all the options are described in the inline documentation:\n\ntype StartWorkflowOptions struct {\n\t// ID - The business identifier of the workflow execution.\n\t// Optional: defaulted to a uuid.\n\tID string\n\n\t// TaskList - The decisions of the workflow are scheduled on this queue.\n\t// This is also the default task list on which activities are scheduled. The workflow author can choose\n\t// to override this using activity options.\n\t// Mandatory: No default.\n\tTaskList string\n\n\t// ExecutionStartToCloseTimeout - The timeout for duration of workflow execution.\n\t// The resolution is seconds.\n\t// Mandatory: No default.\n\tExecutionStartToCloseTimeout time.Duration\n\n\t// DecisionTaskStartToCloseTimeout - The timeout for processing decision task from the time the worker\n\t// pulled this task. If a decision task is lost, it is retried after this timeout.\n\t// The resolution is seconds.\n\t// Optional: defaulted to 10 secs.\n\tDecisionTaskStartToCloseTimeout time.Duration\n\n\t// WorkflowIDReusePolicy - Whether server allow reuse of workflow ID, can be useful\n\t// for dedup logic if set to WorkflowIdReusePolicyRejectDuplicate.\n\t// Optional: defaulted to WorkflowIDReusePolicyAllowDuplicateFailedOnly.\n\tWorkflowIDReusePolicy WorkflowIDReusePolicy\n\n\t// RetryPolicy - Optional retry policy for workflow. If a retry policy is specified, in case of workflow failure\n\t// server will start new workflow execution if needed based on the retry policy.\n\tRetryPolicy *RetryPolicy\n\n\t// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run\n\t// as a cron based on the schedule. The scheduling will be based on UTC time. Schedule for next run only happen\n\t// after the current run is completed/failed/timeout. If a RetryPolicy is also supplied, and the workflow failed\n\t// or timeout, the workflow will be retried based on the retry policy. While the workflow is retrying, it won\'t\n\t// schedule its next run. If next schedule is due while workflow is running (or retrying), then it will skip that\n\t// schedule. 
Cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).\n\t// The cron spec is as following:\n\t// ┌───────────── minute (0 - 59)\n\t// │ ┌───────────── hour (0 - 23)\n\t// │ │ ┌───────────── day of the month (1 - 31)\n\t// │ │ │ ┌───────────── month (1 - 12)\n\t// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n\t// │ │ │ │ │\n\t// │ │ │ │ │\n\t// * * * * *\n\tCronSchedule string\n\n\t// Memo - Optional non-indexed info that will be shown in list workflow.\n\tMemo map[string]interface{}\n\n\t// SearchAttributes - Optional indexed info that can be used in query of List/Scan/Count workflow APIs (only\n\t// supported when Cadence server is using ElasticSearch). The key and value type must be registered on Cadence server side.\n\t// Use GetSearchAttributes API to get valid key and corresponding value type.\n\tSearchAttributes map[string]interface{}\n\n\t// DelayStartSeconds - Seconds to delay the workflow start\n\t// The resolution is seconds.\n\t// Optional: defaulted to 0 seconds\n\tDelayStart time.Duration\n\n\t// JitterStart - Seconds to jitter the workflow start. For example, if set to 10, the workflow will start some time between 0-10 seconds.\n\t// This works with CronSchedule and with DelayStart.\n\t// Optional: defaulted to 0 seconds\n\tJitterStart time.Duration\n}\n',normalizedContent:'# starting workflows\n\nstarting workflows can be done from any service that can send requests to the cadence server. there is no requirement for workflows to be started from the worker services.\n\ngenerally workflows can either be started using a direct reference to the workflow code, or by referring to the registered name of the function. in workflow registration we show how to register the workflows.\n\n\n# starting a workflow\n\nafter creating a workflow we can start it. this can be done from the cli, but typically we want to start workflow programmatically e.g. from an http handler. we can do this using the client.startworkflow function:\n\nimport "go.uber.org/cadence/client"\n\nvar cadenceclient client.client \n# initialize cadenceclient\n\ncadenceclient.startworkflow(\n ctx,\n client.startworkflowoptions{\n tasklist: "workflow-task-list",\n executionstarttoclosetimeout: 10 * time.second,\n },\n workflowfunc,\n workflowarg1,\n workflowarg2,\n workflowarg3,\n ...\n)\n\n\nthe will start the workflow defined in the function workflowfunc, note that for named workflows workflowfunc could be replaced by the name e.g. "workflowfuncname".\n\nworkflowarg1, workflowarg2, workflowarg3 are arguments to the workflow, as specified in workflowfunc, note that the arguments needs to be serializable.\n\n\n# jitter start and batches of workflows\n\nbelow we list all the startworkflowoptions, however a particularly useful option is jitterstart.\n\nstarting many workflows at the same time will have cadence trying to schedule all the workflows immediately. this can result in overloading cadence and the database backing cadence, as well as the workers processing the workflows.\n\nthis is especially bad when the workflow starts comes in batches, such as an end of month load. these sudden loads can lead to both cadence and the workers needing to immediately scale up. scaling up often takes some time, causing queues in cadence, delaying the execution of all workflows, potentially causing workflows to timeout.\n\nto solve this we can start our workflows with jitterstart. 
jitterstart will start the workflow at a random point between now and now + jitterstart, so if we e.g. start 1000 workflows at 12:00 am with a jitterstart of 6 hours, the workflows will be randomly started between 12:00 am and 6:00 pm.\n\nthis makes the sudden load of 1000 workflows much more manageable.\n\nfor many batch-like workloads a random delay is completely acceptable as the batch just needs to be processed e.g. before the end of the day.\n\nadding a jitterstart of 6 hours in the example above is as simple as adding\n\njitterstart: 6 * time.hour,\n\n\nto the options like so,\n\nimport "go.uber.org/cadence/client"\n\nvar cadenceclient client.client\n# initialize cadenceclient\n\ncadenceclient.startworkflow(\n ctx,\n client.startworkflowoptions{\n tasklist: "workflow-task-list",\n executionstarttoclosetimeout: 10 * time.second,\n jitterstart: 6 * time.hour, // added jitterstart\n },\n workflowfunc,\n workflowarg1,\n workflowarg2,\n workflowarg3,\n ...\n)\n\n\nnow the workflow will start at a random point between now and six hours from now.\n\n\n# startworkflowoptions\n\nthe client.startworkflowoptions specifies the behavior of this particular workflow. the invocation above only specifies the two mandatory options; tasklist and executionstarttoclosetimeout, all the options are described in the inline documentation:\n\ntype startworkflowoptions struct {\n\t// id - the business identifier of the workflow execution.\n\t// optional: defaulted to a uuid.\n\tid string\n\n\t// tasklist - the decisions of the workflow are scheduled on this queue.\n\t// this is also the default task list on which activities are scheduled. the workflow author can choose\n\t// to override this using activity options.\n\t// mandatory: no default.\n\ttasklist string\n\n\t// executionstarttoclosetimeout - the timeout for duration of workflow execution.\n\t// the resolution is seconds.\n\t// mandatory: no default.\n\texecutionstarttoclosetimeout time.duration\n\n\t// decisiontaskstarttoclosetimeout - the timeout for processing decision task from the time the worker\n\t// pulled this task. if a decision task is lost, it is retried after this timeout.\n\t// the resolution is seconds.\n\t// optional: defaulted to 10 secs.\n\tdecisiontaskstarttoclosetimeout time.duration\n\n\t// workflowidreusepolicy - whether server allow reuse of workflow id, can be useful\n\t// for dedup logic if set to workflowidreusepolicyrejectduplicate.\n\t// optional: defaulted to workflowidreusepolicyallowduplicatefailedonly.\n\tworkflowidreusepolicy workflowidreusepolicy\n\n\t// retrypolicy - optional retry policy for workflow. if a retry policy is specified, in case of workflow failure\n\t// server will start new workflow execution if needed based on the retry policy.\n\tretrypolicy *retrypolicy\n\n\t// cronschedule - optional cron schedule for workflow. if a cron schedule is specified, the workflow will run\n\t// as a cron based on the schedule. the scheduling will be based on utc time. schedule for next run only happen\n\t// after the current run is completed/failed/timeout. if a retrypolicy is also supplied, and the workflow failed\n\t// or timeout, the workflow will be retried based on the retry policy. while the workflow is retrying, it won\'t\n\t// schedule its next run. if next schedule is due while workflow is running (or retrying), then it will skip that\n\t// schedule. 
cron workflow will not stop until it is terminated or cancelled (by returning cadence.cancelederror).\n\t// the cron spec is as following:\n\t// ┌───────────── minute (0 - 59)\n\t// │ ┌───────────── hour (0 - 23)\n\t// │ │ ┌───────────── day of the month (1 - 31)\n\t// │ │ │ ┌───────────── month (1 - 12)\n\t// │ │ │ │ ┌───────────── day of the week (0 - 6) (sunday to saturday)\n\t// │ │ │ │ │\n\t// │ │ │ │ │\n\t// * * * * *\n\tcronschedule string\n\n\t// memo - optional non-indexed info that will be shown in list workflow.\n\tmemo map[string]interface{}\n\n\t// searchattributes - optional indexed info that can be used in query of list/scan/count workflow apis (only\n\t// supported when cadence server is using elasticsearch). the key and value type must be registered on cadence server side.\n\t// use getsearchattributes api to get valid key and corresponding value type.\n\tsearchattributes map[string]interface{}\n\n\t// delaystartseconds - seconds to delay the workflow start\n\t// the resolution is seconds.\n\t// optional: defaulted to 0 seconds\n\tdelaystart time.duration\n\n\t// jitterstart - seconds to jitter the workflow start. for example, if set to 10, the workflow will start some time between 0-10 seconds.\n\t// this works with cronschedule and with delaystart.\n\t// optional: defaulted to 0 seconds\n\tjitterstart time.duration\n}\n',charsets:{}},{title:"Child workflows",frontmatter:{layout:"default",title:"Child workflows",permalink:"/docs/go-client/child-workflows",readingShow:"top"},regularPath:"/docs/05-go-client/05-child-workflows.html",relativePath:"docs/05-go-client/05-child-workflows.md",key:"v-0327ca12",path:"/docs/go-client/child-workflows/",codeSwitcherOptions:{},headersStr:null,content:'# Child workflows\n\nworkflow.ExecuteChildWorkflow enables the scheduling of other from within a \'s implementation. The parent has the ability to monitor and impact the lifecycle of the child , similar to the way it does for an that it invoked.\n\ncwo := workflow.ChildWorkflowOptions{\n // Do not specify WorkflowID if you want Cadence to generate a unique ID for the child execution.\n WorkflowID: "BID-SIMPLE-CHILD-WORKFLOW",\n ExecutionStartToCloseTimeout: time.Minute * 30,\n}\nctx = workflow.WithChildWorkflowOptions(ctx, cwo)\n\nvar result string\nfuture := workflow.ExecuteChildWorkflow(ctx, SimpleChildWorkflow, value)\nif err := future.Get(ctx, &result); err != nil {\n workflow.GetLogger(ctx).Error("SimpleChildWorkflow failed.", zap.Error(err))\n return err\n}\n\n\nLet\'s take a look at each component of this call.\n\nBefore calling workflow.ExecuteChildworkflow(), you must configure ChildWorkflowOptions for the invocation. These options customize various execution timeouts, and are passed in by creating a child context from the initial context and overwriting the desired values. The child context is then passed into the workflow.ExecuteChildWorkflow() call. If multiple child are sharing the same option values, then the same context instance can be used when calling workflow.ExecuteChildworkflow().\n\nThe first parameter in the call is the required cadence.Context object. This type is a copy of context.Context with the Done() method returning cadence.Channel instead of the native Go chan.\n\nThe second parameter is the function that we registered as a function. This parameter can also be a string representing the fully qualified name of the function. 
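For instance (a sketch; the registered name string is an assumption and depends on how the child workflow was registered):\n\nfuture := workflow.ExecuteChildWorkflow(ctx, SimpleChildWorkflow, value) // by function reference\n\nfuture = workflow.ExecuteChildWorkflow(ctx, "SimpleChildWorkflow", value) // by registered name\n\n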
The benefit of passing in the actual function object is that the framework can validate the parameters.

The remaining parameters are passed to the workflow as part of the call. In our example, we have a single parameter: value. This list of parameters must match the list of parameters declared by the workflow function.

The method call returns immediately and returns a cadence.Future. This allows you to execute more code without having to wait for the scheduled workflow to complete.

When you are ready to process the results of the workflow, call the Get() method on the returned future object. The parameters to this method are the ctx object we passed to the workflow.ExecuteChildWorkflow() call and an output parameter that will receive the output of the workflow. The type of the output parameter must match the type of the return value declared by the workflow function. The Get() method will block until the workflow completes and results are available.

The workflow.ExecuteChildWorkflow() function is similar to workflow.ExecuteActivity(). All of the patterns described for using workflow.ExecuteActivity() apply to the workflow.ExecuteChildWorkflow() function as well.

When a parent workflow is cancelled by the user, the child workflow can be cancelled or abandoned based on a configurable child policy.
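Because the call returns a future immediately, several child workflows can run in parallel and be joined afterwards. A minimal sketch, assuming two hypothetical child workflow functions ChildWorkflowA and ChildWorkflowB registered with the worker:

futureA := workflow.ExecuteChildWorkflow(ctx, ChildWorkflowA, value)
futureB := workflow.ExecuteChildWorkflow(ctx, ChildWorkflowB, value)

// Both children are now running concurrently; block on their results afterwards.
var resultA, resultB string
if err := futureA.Get(ctx, &resultA); err != nil {
    return err
}
if err := futureB.Get(ctx, &resultB); err != nil {
    return err
}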
# Activity and workflow retries

Activities and workflows can fail due to various intermediate conditions. In those cases, we want to retry the failed activity or child workflow, or even the parent workflow. This can be achieved by supplying an optional retry policy. A retry policy looks like the following:

// RetryPolicy defines the retry policy.
RetryPolicy struct {
    // Backoff interval for the first retry. If BackoffCoefficient is 1.0 then it is used for all retries.
    // Required, no default value.
    InitialInterval time.Duration

    // Coefficient used to calculate the next retry backoff interval.
    // The next retry interval is the previous interval multiplied by this coefficient.
    // Must be 1 or larger. Default is 2.0.
    BackoffCoefficient float64

    // Maximum backoff interval between retries. Exponential backoff leads to interval increase.
    // This value is the cap of the interval. Default is 100x of the initial interval.
    MaximumInterval time.Duration

    // Maximum time to retry. Either ExpirationInterval or MaximumAttempts is required.
    // When exceeded, the retries stop even if the maximum number of attempts is not reached yet.
    // The first (non-retry) attempt is unaffected by this field and is guaranteed to run
    // for the entirety of the workflow timeout duration (ExecutionStartToCloseTimeoutSeconds).
    ExpirationInterval time.Duration

    // Maximum number of attempts. When exceeded, the retries stop even if not expired yet.
    // If not set or set to 0, it means unlimited, and relies on ExpirationInterval to stop.
    // Either MaximumAttempts or ExpirationInterval is required.
    MaximumAttempts int32

    // Non-retriable errors. This is optional. The Cadence server will stop retrying if the error reason matches this list.
    // The error reason for a custom error is specified when your activity/workflow returns cadence.NewCustomError(reason).
    // The error reason for a panic error is "cadenceInternal:Panic".
    // The error reason for any other error is "cadenceInternal:Generic".
    // The error reason for timeouts is "cadenceInternal:Timeout TIMEOUT_TYPE", where TIMEOUT_TYPE could be
    // START_TO_CLOSE or HEARTBEAT.
    // Note that cancellation is not a failure, so it won't be retried.
    NonRetriableErrorReasons []string
}

To enable retries, supply a custom retry policy to ActivityOptions or ChildWorkflowOptions when you execute them.

expiration := time.Minute * 10
retryPolicy := &cadence.RetryPolicy{
    InitialInterval:    time.Second,
    BackoffCoefficient: 2,
    MaximumInterval:    expiration,
    ExpirationInterval: time.Minute * 10,
    MaximumAttempts:    5,
}
ao := workflow.ActivityOptions{
    ScheduleToStartTimeout: expiration,
    StartToCloseTimeout:    expiration,
    HeartbeatTimeout:       time.Second * 30,
    RetryPolicy:            retryPolicy, // Enable retry.
}
ctx = workflow.WithActivityOptions(ctx, ao)
activityFuture := workflow.ExecuteActivity(ctx, SampleActivity, params)
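The same policy shape can also be applied to a whole workflow through the RetryPolicy field of StartWorkflowOptions (see the options reference above); in that case the server starts a new workflow execution on failure. A minimal sketch, assuming an already initialized cadenceClient and that the cadence.RetryPolicy type shown above is accepted by that field:

retryPolicy := &cadence.RetryPolicy{
    InitialInterval:    time.Second,
    BackoffCoefficient: 2,
    MaximumInterval:    time.Minute,
    ExpirationInterval: time.Hour,
    MaximumAttempts:    10,
}

cadenceClient.StartWorkflow(
    ctx,
    client.StartWorkflowOptions{
        TaskList:                     "workflow-task-list",
        ExecutionStartToCloseTimeout: 10 * time.Minute,
        RetryPolicy:                  retryPolicy, // Retry the whole workflow on failure.
    },
    workflowFunc,
)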
If an activity heartbeats its progress before it fails, the retry attempt will contain that progress, so the activity implementation can resume from the failed progress, like so:

func SampleActivity(ctx context.Context, inputArg InputParams) error {
    startIdx := inputArg.StartIndex
    if activity.HasHeartbeatDetails(ctx) {
        // Recover from finished progress.
        var finishedIndex int
        if err := activity.GetHeartbeatDetails(ctx, &finishedIndex); err == nil {
            startIdx = finishedIndex + 1 // Start from the next one.
        }
    }

    // Normal activity logic: process the remaining items, recording progress as we go.
    // (The Items field is an assumption for illustration; the original loop body was lost.)
    for i := startIdx; i < len(inputArg.Items); i++ {
        // ... process inputArg.Items[i] ...
        activity.RecordHeartbeat(ctx, i)
    }
    return nil
}

# Signals

Signals provide a mechanism to send data directly to a running workflow. Previously, you had two options for passing data to the workflow implementation:

 * Via start parameters
 * As return values from activities

With start parameters, we could only pass in values before the workflow began.

Return values from activities allowed us to pass information to a running workflow, but this approach comes with its own complications. One major drawback is the reliance on polling. This means that the data needs to be stored in a third-party location until it's ready to be picked up by the activity. Further, the lifecycle of this activity requires management, and the activity requires manual restart if it fails before acquiring the data.

Signals, on the other hand, provide a fully asynchronous and durable mechanism for providing data to a running workflow. When a signal is received for a running workflow, Cadence persists the event and the payload in the workflow history. The workflow can then process the signal at any time afterwards without the risk of losing the information. The workflow also has the option to stop execution by blocking on a signal channel.

var signalVal string
signalChan := workflow.GetSignalChannel(ctx, signalName)

s := workflow.NewSelector(ctx)
s.AddReceive(signalChan, func(c workflow.Channel, more bool) {
    c.Receive(ctx, &signalVal)
    workflow.GetLogger(ctx).Info("Received signal!", zap.String("signal", signalName), zap.String("value", signalVal))
})
s.Select(ctx)

if len(signalVal) > 0 && signalVal != "SOME_VALUE" {
    return errors.New("signalVal")
}

In the example above, the code uses workflow.GetSignalChannel to open a workflow.Channel for the named signal. We then use a workflow.Selector to wait on this channel and process the payload received with the signal.


# SignalWithStart

You may not know if a workflow is running and can accept a signal. The client.SignalWithStartWorkflow API allows you to send a signal to the current workflow instance if one exists, or to create a new run and then send the signal. SignalWithStartWorkflow therefore doesn't take a run ID as a parameter.
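A minimal sketch of that call, assuming an initialized cadenceClient and hypothetical workflow ID, signal payload, and workflow argument names; the options argument mirrors client.StartWorkflowOptions:

execution, err := cadenceClient.SignalWithStartWorkflow(
    ctx,
    "my_workflow_id", // Workflow ID; no run ID is needed.
    signalName,       // Name of the signal to deliver.
    signalData,       // Signal payload.
    client.StartWorkflowOptions{
        TaskList:                     "workflow-task-list",
        ExecutionStartToCloseTimeout: 10 * time.Minute,
    },
    workflowFunc, // Started with the arguments below only if no run is in progress.
    workflowArg1,
)
if err != nil {
    // Handle the error.
}
_ = execution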
# Side effect

workflow.SideEffect is useful for short, nondeterministic code snippets, such as getting a random value or generating a UUID. It executes the provided function once and records its result in the workflow history. workflow.SideEffect does not re-execute upon replay, but instead returns the recorded result. It can be seen as an "inline" activity. Something to note about workflow.SideEffect is that, unlike the Cadence guarantee of at-most-once execution for activities, there is no such guarantee with workflow.SideEffect. Under certain failure conditions, workflow.SideEffect can end up executing a function more than once.

The only way to fail SideEffect is to panic, which causes a decision task failure. After the timeout, Cadence reschedules and then re-executes the decision task, giving SideEffect another chance to succeed. Do not return any data from SideEffect other than through its recorded return value.

The following sample demonstrates how to use SideEffect:

encodedRandom := workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
    return rand.Intn(100)
})

var random int
encodedRandom.Get(&random)
if random < 50 {
    ...
} else {
    ...
}
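Because only the recorded return value is replay-safe, assigning to a captured variable inside the side-effect function is a subtle bug. A hedged illustration of the anti-pattern that the rule above guards against:

// Incorrect: the closure writes to `random` directly. On replay the function is
// not re-executed, so `random` would keep its zero value instead of the recorded result.
var random int
workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
    random = rand.Intn(100)
    return random
})

// Correct: always read the value back from the recorded result, as in the sample above.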
# Queries

If a workflow has been stuck at a state for longer than an expected period of time, you might want to query the current call stack. You can use the Cadence CLI to perform this query. For example:

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt __stack_trace

This command uses __stack_trace, which is a built-in query type supported by the Cadence client library. You can add custom query types to handle queries such as the current state of a workflow, or how many activities the workflow has completed. To do this, you need to set up a query handler using workflow.SetQueryHandler.

The handler must be a function that returns two values:

 1. A serializable result
 2. An error

The handler function can receive any number of input parameters, but all input parameters must be serializable. The following sample code sets up a handler for the query type current_state:

func MyWorkflow(ctx workflow.Context, input string) error {
    currentState := "started" // This could be any serializable struct.
    err := workflow.SetQueryHandler(ctx, "current_state", func() (string, error) {
        return currentState, nil
    })
    if err != nil {
        currentState = "failed to register query handler"
        return err
    }
    // Your normal workflow code begins here, and you update the currentState as the code makes progress.
    currentState = "waiting timer"
    err = workflow.NewTimer(ctx, time.Hour).Get(ctx, nil)
    if err != nil {
        currentState = "timer failed"
        return err
    }

    currentState = "waiting activity"
    ctx = workflow.WithActivityOptions(ctx, myActivityOptions)
    err = workflow.ExecuteActivity(ctx, MyActivity, "my_input").Get(ctx, nil)
    if err != nil {
        currentState = "activity failed"
        return err
    }
    currentState = "done"
    return nil
}

You can now query current_state by using the CLI:

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state

You can also issue a query from code using the QueryWorkflow() API on a Cadence client object.
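A minimal sketch of issuing the same query from Go, assuming an initialized cadenceClient; QueryWorkflow returns an encoded value that is decoded into a typed variable:

resp, err := cadenceClient.QueryWorkflow(ctx, "my_workflow_id", "my_run_id", "current_state")
if err != nil {
    // Handle the query error.
}
var state string
if err := resp.Get(&state); err != nil {
    // Handle the decoding error.
}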
# Consistent Query

Query has two consistency levels: eventual and strong. Consider if you were to signal a workflow and then immediately query it:

cadence-cli --domain samples-domain workflow signal -w my_workflow_id -r my_run_id -n signal_name -if ./input.json

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state

In this example, if the signal were to change the workflow's state, the query may or may not see that state update reflected in the query result. This is what it means for a query to be eventually consistent.

Query has another consistency level called strong consistency. A strongly consistent query is guaranteed to be based on workflow state which includes all events that came before the query was issued. An event is considered to have come before a query if the call creating the external event returned success before the query was issued. External events which are created while the query is outstanding may or may not be reflected in the workflow state the query result is based on.

In order to run a consistent query through the CLI, do the following:

cadence-cli --domain samples-domain workflow query -w my_workflow_id -r my_run_id -qt current_state --qcl strong

In order to run a strongly consistent query using the Go client, do the following:

resp, err := cadenceClient.QueryWorkflowWithOptions(ctx, &client.QueryWorkflowWithOptionsRequest{
    WorkflowID:            workflowID,
    RunID:                 runID,
    QueryType:             queryType,
    QueryConsistencyLevel: shared.QueryConsistencyLevelStrong.Ptr(),
})

When using strongly consistent queries, you should expect higher latency than with eventually consistent queries.
# Asynchronous activity completion

There are certain scenarios when completing an activity upon completion of its function is not possible or desirable. For example, you might have an application that requires user input in order to complete the activity. You could implement the activity with a polling mechanism, but a simpler and less resource-intensive implementation is to asynchronously complete a Cadence activity.

There are two parts to implementing an asynchronously completed activity:

 1. The activity provides the information necessary for completion from an external system and notifies the Cadence service that it is waiting for that outside callback.
 2. The external service calls the Cadence service to complete the activity.

The following example demonstrates the first part:

// Retrieve the activity information needed to asynchronously complete the activity.
activityInfo := cadence.GetActivityInfo(ctx)
taskToken := activityInfo.TaskToken

// Send the taskToken to the external service that will complete the activity.
...

// Return from the activity a special error indicating that Cadence should wait for an async completion
// message.
return "", activity.ErrResultPending

The following code demonstrates how to complete the activity successfully:

// Instantiate a Cadence service client.
// The same client can be used to complete or fail any number of activities.
client := cadence.NewClient(...)

// Complete the activity.
client.CompleteActivity(taskToken, result, nil)

To fail the activity, you would do the following:

// Fail the activity.
client.CompleteActivity(taskToken, nil, err)

Following are the parameters of the CompleteActivity function:

 * taskToken: The value of the binary TaskToken field of the ActivityInfo struct retrieved inside the activity.
 * result: The return value to record for the activity. The type of this value must match the type of the return value declared by the activity function.
 * err: The error code to return if the activity terminates with an error.

If err is not nil, the value of the result field is ignored.
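If holding the binary task token in the external system is inconvenient, the Go client also exposes a by-ID variant. A minimal sketch, assuming the CompleteActivityByID method on an initialized cadenceClient and hypothetical identifiers:

// Complete the activity by identifying it with domain, workflow ID, run ID, and
// activity ID instead of the task token.
err := cadenceClient.CompleteActivityByID(
    ctx,
    "samples-domain",
    "my_workflow_id",
    "my_run_id",
    "my_activity_id",
    result, // Recorded as the activity result.
    nil,    // Pass a non-nil error here to fail the activity instead.
)
if err != nil {
    // Handle the completion error.
}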
# Testing

The Cadence Go client library provides a test framework to facilitate testing workflow implementations.
The framework is suited for implementing unit tests as well as functional tests of the workflow logic.

The following code implements unit tests for the SimpleWorkflow sample:

package sample

import (
    "context"
    "errors"
    "testing"

    "github.com/stretchr/testify/mock"
    "github.com/stretchr/testify/suite"

    "go.uber.org/cadence"
    "go.uber.org/cadence/testsuite"
)

type UnitTestSuite struct {
    suite.Suite
    testsuite.WorkflowTestSuite

    env *testsuite.TestWorkflowEnvironment
}

func (s *UnitTestSuite) SetupTest() {
    s.env = s.NewTestWorkflowEnvironment()
}

func (s *UnitTestSuite) AfterTest(suiteName, testName string) {
    s.env.AssertExpectations(s.T())
}

func (s *UnitTestSuite) Test_SimpleWorkflow_Success() {
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityParamCorrect() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        func(ctx context.Context, value string) (string, error) {
            s.Equal("test_success", value)
            return value, nil
        },
    )
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityFails() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        "", errors.New("SimpleActivityFailure"))
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_failure")

    s.True(s.env.IsWorkflowCompleted())

    s.NotNil(s.env.GetWorkflowError())
    s.True(cadence.IsGenericError(s.env.GetWorkflowError()))
    s.Equal("SimpleActivityFailure", s.env.GetWorkflowError().Error())
}

func TestUnitTestSuite(t *testing.T) {
    suite.Run(t, new(UnitTestSuite))
}


# Setup

To run unit tests, we first define a "test suite" struct that absorbs both the basic suite functionality from testify via suite.Suite and the suite functionality from the Cadence test framework via cadence.WorkflowTestSuite. Because every test in this test suite will test our workflow, we add a property to our struct to hold an instance of the test environment. This allows us to initialize the test environment in a setup method. For testing workflows, we use a cadence.TestWorkflowEnvironment.

Next, we implement a SetupTest method to set up a new test environment before each test. Doing so ensures that each test runs in its own isolated sandbox. We also implement an AfterTest function where we assert that all mocks we set up were indeed called, by invoking s.env.AssertExpectations(s.T()).

Finally, we create a regular test function recognized by "go test" and pass the struct to suite.Run.


# A Simple Test

The simplest test case we can write is to have the test environment execute the workflow and then evaluate the results.

func (s *UnitTestSuite) Test_SimpleWorkflow_Success() {
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

Calling s.env.ExecuteWorkflow(...) executes the workflow logic and any activities invoked inside the test process. The first parameter of s.env.ExecuteWorkflow(...)
contains the workflow function, and any subsequent parameters contain values for custom input parameters declared by the workflow function.

> Note that unless the activity invocations are mocked or the activity implementation is replaced (see Activity mocking and overriding), the test environment will execute the actual activity code, including any calls to outside services.

After executing the workflow in the above example, we assert that the workflow ran through completion via the call to s.env.IsWorkflowCompleted(). We also assert that no errors were returned, by asserting on the return value of s.env.GetWorkflowError(). If our workflow returned a value, we could have retrieved that value via a call to s.env.GetWorkflowResult(&value) and had additional asserts on that value.


# Activity mocking and overriding

When running unit tests on workflows, we want to test the workflow logic in isolation. Additionally, we want to inject activity errors during our test runs. The test framework provides two mechanisms that support these scenarios: activity mocking and activity overriding. Both of these mechanisms allow you to change the behavior of activities invoked by your workflow without the need to modify the actual activity code.

Let's take a look at a test that simulates an activity failure via the "activity mocking" mechanism.

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityFails() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        "", errors.New("SimpleActivityFailure"))
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_failure")

    s.True(s.env.IsWorkflowCompleted())

    s.NotNil(s.env.GetWorkflowError())
    _, ok := s.env.GetWorkflowError().(*cadence.GenericError)
    s.True(ok)
    s.Equal("SimpleActivityFailure", s.env.GetWorkflowError().Error())
}

This test simulates the SimpleActivity that is invoked by our SimpleWorkflow returning an error. We accomplish this by setting up a mock on the test environment for the SimpleActivity that returns an error.

s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
    "", errors.New("SimpleActivityFailure"))

With the mock set up, we can now execute the workflow via the s.env.ExecuteWorkflow(...) method and assert that the workflow completed and returned the expected error.

Simply mocking the activity execution to return a desired value or error is a pretty powerful mechanism to isolate workflow logic. However, sometimes we want to replace the activity with an alternate implementation to support a more complex test scenario. Let's assume we want to validate that the activity gets called with the correct parameters.

func (s *UnitTestSuite) Test_SimpleWorkflow_ActivityParamCorrect() {
    s.env.OnActivity(SimpleActivity, mock.Anything, mock.Anything).Return(
        func(ctx context.Context, value string) (string, error) {
            s.Equal("test_success", value)
            return value, nil
        },
    )
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

In this example, we provide a function implementation as the parameter to Return. This allows us to provide an alternate implementation for the SimpleActivity. The framework will execute this function whenever the activity is invoked and pass on the return value from the function as the result of the activity invocation. Additionally, the framework will validate that the signature of the "mock" function matches the signature of the original activity function.

Since this can be an entire function, there is no limitation as to what we can do here.
In this example, we assert that the "value" param has the same content as the value param we passed to the workflow.


# Testing signals

To test signals we can use the functions s.env.SignalWorkflow and s.env.SignalWorkflowByID. These functions need to be called inside s.env.RegisterDelayedCallback, as the signal should be sent while the workflow is running. It is important to register the signal before calling s.env.ExecuteWorkflow; otherwise, the signal will not be sent.

If our workflow is waiting for a signal with the name signalName, we can register to send this signal before the workflow is executed like this:

func (s *UnitTestSuite) Test_SimpleWorkflow_Signal() {
    // Send the signal.
    s.env.RegisterDelayedCallback(func() {
        s.env.SignalWorkflow(signalName, signalData)
    }, time.Minute*10)

    // Execute the workflow.
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())
}

Note that the s.env.RegisterDelayedCallback function does not actually wait 10 minutes in the unit test. Instead, the Cadence test framework uses an internal clock which knows which event is next, and executes it immediately.
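As mentioned under A Simple Test, a workflow's return value can also be asserted on via s.env.GetWorkflowResult. A minimal sketch, assuming (hypothetically) that SimpleWorkflow returns its input string:

func (s *UnitTestSuite) Test_SimpleWorkflow_Result() {
    s.env.ExecuteWorkflow(SimpleWorkflow, "test_success")

    s.True(s.env.IsWorkflowCompleted())
    s.NoError(s.env.GetWorkflowError())

    // Decode the workflow result into a typed variable and assert on it.
    var result string
    s.NoError(s.env.GetWorkflowResult(&result))
    s.Equal("test_success", result)
}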
# Versioning

The definition code of a Cadence workflow must be deterministic because Cadence uses event sourcing to reconstruct the workflow state by replaying the saved history data on the workflow definition code.
This means that any incompatible update to the workflow definition code could cause a non-deterministic issue if not handled correctly.


# workflow.GetVersion()

Consider the following workflow definition:

func MyWorkflow(ctx workflow.Context, data string) (string, error) {
    ao := workflow.ActivityOptions{
        ScheduleToStartTimeout: time.Minute,
        StartToCloseTimeout:    time.Minute,
    }
    ctx = workflow.WithActivityOptions(ctx, ao)
    var result1 string
    err := workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
    if err != nil {
        return "", err
    }
    var result2 string
    err = workflow.ExecuteActivity(ctx, ActivityB, result1).Get(ctx, &result2)
    return result2, err
}

Now let's say we have replaced ActivityA with ActivityC and deployed the updated code. If there is an existing workflow execution that was started by the original version of the code, where ActivityA had already completed and the result was recorded to history, the new version of the code will pick up that workflow execution and try to resume from there. However, the workflow will fail because the new code expects a result for ActivityC from the history data, but instead it gets the result for ActivityA. This causes the workflow to fail on a non-deterministic error.

Thus we use workflow.GetVersion().

var err error
v := workflow.GetVersion(ctx, "Step1", workflow.DefaultVersion, 1)
if v == workflow.DefaultVersion {
    err = workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
} else {
    err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
}
if err != nil {
    return "", err
}

var result2 string
err = workflow.ExecuteActivity(ctx, ActivityB, result1).Get(ctx, &result2)
return result2, err

When workflow.GetVersion() is run for the new workflow execution, it records a marker in the workflow history so that all future calls to GetVersion for this change ID--Step1 in the example--on this workflow execution will always return the given version number, which is 1 in the example.

If you make an additional change, such as replacing ActivityC with ActivityD, you need to add some additional code:

v := workflow.GetVersion(ctx, "Step1", workflow.DefaultVersion, 2)
if v == workflow.DefaultVersion {
    err = workflow.ExecuteActivity(ctx, ActivityA, data).Get(ctx, &result1)
} else if v == 1 {
    err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
} else {
    err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
}

Note that we have changed maxSupported from 1 to 2. A workflow that had already passed this GetVersion() call before it was introduced will return DefaultVersion. A workflow that was run with maxSupported set to 1 will return 1. New workflows will return 2.

After you are sure that all of the workflow executions prior to version 1 have completed, you can remove the code for that version. It should now look like the following:

v := workflow.GetVersion(ctx, "Step1", 1, 2)
if v == 1 {
    err = workflow.ExecuteActivity(ctx, ActivityC, data).Get(ctx, &result1)
} else {
    err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
}

You'll note that minSupported has changed from DefaultVersion to 1. If an older version of the workflow history is replayed on this code, it will fail because the minimum expected version is 1. After you are sure that all of the workflow executions for version 1 have completed, you can remove version 1 so that your code looks like the following:

_ = workflow.GetVersion(ctx, "Step1", 2, 2)
err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)

Note that we have preserved the call to GetVersion().
There are two reasons to preserve this call:

 1. This ensures that if there is a workflow execution still running for an older version, it will fail here and not proceed.
 2. If you need to make additional changes for Step1, such as changing ActivityD to ActivityE, you only need to update maxVersion from 2 to 3 and branch from there.

You only need to preserve the first call to GetVersion() for each changeID. All subsequent calls to GetVersion() with the same change ID are safe to remove. If necessary, you can remove the first GetVersion() call, but you need to ensure the following:

 * All executions with an older version are completed.
 * You can no longer use Step1 for the changeID. If you need to make changes to that same part in the future, such as changing ActivityD to ActivityE, you would need to use a different changeID like Step1-fix2, and start minVersion from DefaultVersion again. The code would look like the following:

v := workflow.GetVersion(ctx, "Step1-fix2", workflow.DefaultVersion, 1)
if v == workflow.DefaultVersion {
    err = workflow.ExecuteActivity(ctx, ActivityD, data).Get(ctx, &result1)
} else {
    err = workflow.ExecuteActivity(ctx, ActivityE, data).Get(ctx, &result1)
}

Upgrading a workflow is straightforward if you don't need to preserve your currently running workflow executions. You can simply terminate all of the currently running workflow executions and suspend new ones from being created while you deploy the new version of your workflow code, which does not use GetVersion(), and then resume workflow creation. However, that is often not the case, and you need to take care of the currently running workflow executions, so using GetVersion() to update your code is the method to use.

However, if you want your currently running workflows to proceed based on the current workflow logic, but you want to ensure that new workflows are running on new logic, you can define your workflow as a new WorkflowType and change your start path (calls to StartWorkflow()) to start the new workflow type.


# Sanity checking

The Cadence client SDK performs a sanity check to help prevent obvious incompatible changes. The sanity check verifies whether a decision made in replay matches the event recorded in history, in the same order. The decision is generated by calling any of the following methods:

 * workflow.ExecuteActivity()
 * workflow.ExecuteChildWorkflow()
 * workflow.NewTimer()
 * workflow.Sleep()
 * workflow.SideEffect()
 * workflow.RequestCancelWorkflow()
 * workflow.SignalExternalWorkflow()
 * workflow.UpsertSearchAttributes()

Adding, removing, or reordering any of the above methods triggers the sanity check and results in a non-deterministic error.

The sanity check does not perform a thorough check. For example, it does not check on the activity's input arguments or the timer duration. If the check were enforced on every property, it would become too restrictive and make the code harder to maintain. For example, if you move your activity code from one package to another package, that changes the ActivityType, which technically becomes a different activity. But we don't want to fail on that change, so we only check the function name part of the ActivityType.
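This replay-versus-history check can also be exercised in a unit test before deploying a change. A hedged sketch, assuming the workflow replayer from go.uber.org/cadence/worker and a history exported to a hypothetical my_workflow_history.json file:

import (
    "testing"

    "go.uber.org/cadence/worker"
    "go.uber.org/zap"
)

func TestMyWorkflowReplay(t *testing.T) {
    replayer := worker.NewWorkflowReplayer()
    replayer.RegisterWorkflow(MyWorkflow)

    logger, _ := zap.NewDevelopment()
    // Fails with a non-deterministic error if the new code no longer matches the recorded history.
    if err := replayer.ReplayWorkflowHistoryFromJSONFile(logger, "my_workflow_history.json"); err != nil {
        t.Fatal(err)
    }
}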
# Sessions

The session framework provides a straightforward interface for scheduling multiple activities on a single worker without requiring you to manually specify the task list name. It also includes features like concurrent session limitation and worker failure detection.


# Use Cases

 * File Processing: You may want to implement a workflow that can download a file, process it, and then upload the modified version. If these three steps are implemented as three different activities, all of them should be executed by the same worker.

 * Machine Learning Model Training: Training a machine learning model typically involves three stages: download the data set, optimize the model, and upload the trained parameters. Since the models may consume a large amount of resources (GPU memory, for example), the number of models processed on a host needs to be limited.


# Basic Usage

Before using the session framework to write your workflow code, you need to configure your worker to process sessions. To do that, set the EnableSessionWorker field of worker.Options to true when starting your worker.
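A minimal sketch of that worker configuration, assuming a worker created via worker.New with an already constructed service client; the domain and task list names here are placeholders:

import (
    "go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
    "go.uber.org/cadence/worker"
)

// startSessionWorker is a hypothetical helper showing the relevant option.
func startSessionWorker(service workflowserviceclient.Interface) (worker.Worker, error) {
    workerOptions := worker.Options{
        // Allow this worker to host sessions.
        EnableSessionWorker: true,
    }
    w := worker.New(service, "samples-domain", "workflow-task-list", workerOptions)
    return w, w.Start()
}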
The most important APIs provided by the session framework are workflow.CreateSession() and workflow.CompleteSession(). The basic idea is that all the activities executed within a session will be processed by the same worker, and these two APIs allow you to create new sessions and close them after all activities finish executing.\n\nHere's a more detailed description of these two APIs:\n\ntype SessionOptions struct {\n    // ExecutionTimeout: required, no default.\n    // Specifies the maximum amount of time the session can run.\n    ExecutionTimeout time.Duration\n\n    // CreationTimeout: required, no default.\n    // Specifies how long session creation can take before returning an error.\n    CreationTimeout time.Duration\n}\n\nfunc CreateSession(ctx Context, sessionOptions *SessionOptions) (Context, error)\n\n\nCreateSession() takes in a workflow.Context and sessionOptions and returns a new context which contains metadata information about the created session (referred to as the session context below). When it's called, it will check the task list name specified in the ActivityOptions (or in the StartWorkflowOptions if the task list name is not specified in ActivityOptions), and create the session on one of the workers which is polling that task list.\n\nThe returned session context should be used to execute all activities belonging to the session. The context will be cancelled if the worker executing this session dies or CompleteSession() is called. When using the returned session context to execute activities, a workflow.ErrSessionFailed error may be returned if the session framework detects that the worker executing this session has died. The failure of your activities won't affect the state of the session, so you still need to handle the errors returned from your activities and call CompleteSession() if necessary.\n\nCreateSession() will return an error if the context passed in already contains an open session. If all the workers are currently busy and unable to handle new sessions, the framework will keep retrying until the CreationTimeout you specified in SessionOptions has passed before returning an error (check the Concurrent Session Limitation section for more details).\n\nfunc CompleteSession(ctx Context)\n\n\nCompleteSession() releases the resources reserved on the worker, so it's important to call it as soon as you no longer need the session. It will cancel the session context and therefore all the activities using that session context. Note that it's safe to call CompleteSession() on a failed session, meaning that you can call it from a defer function after the session is successfully created.\n\n\n# Sample Code\n\nfunc FileProcessingWorkflow(ctx workflow.Context, fileID string) (err error) {\n    ao := workflow.ActivityOptions{\n        ScheduleToStartTimeout: time.Second * 5,\n        StartToCloseTimeout: time.Minute,\n    }\n    ctx = workflow.WithActivityOptions(ctx, ao)\n\n    so := &workflow.SessionOptions{\n        CreationTimeout: time.Minute,\n        ExecutionTimeout: time.Minute,\n    }\n    sessionCtx, err := workflow.CreateSession(ctx, so)\n    if err != nil {\n        return err\n    }\n    defer workflow.CompleteSession(sessionCtx)\n\n    var fInfo *fileInfo\n    err = workflow.ExecuteActivity(sessionCtx, downloadFileActivityName, fileID).Get(sessionCtx, &fInfo)\n    if err != nil {\n        return err\n    }\n\n    var fInfoProcessed *fileInfo\n    err = workflow.ExecuteActivity(sessionCtx, processFileActivityName, *fInfo).Get(sessionCtx, &fInfoProcessed)\n    if err != nil {\n        return err\n    }\n\n    return workflow.ExecuteActivity(sessionCtx, uploadFileActivityName, *fInfoProcessed).Get(sessionCtx, nil)\n}\n\n\n\n# Session Metadata\n\ntype SessionInfo struct {\n    // A unique ID for the session\n    SessionID string\n\n    // The hostname of the worker that is executing the session\n    HostName string\n\n    // ... other unexported fields\n}\n\nfunc GetSessionInfo(ctx Context) *SessionInfo\n\n\nThe session context also stores some session metadata, which can be retrieved by the GetSessionInfo() API. If the context passed in doesn't contain any session metadata, this API will return a nil pointer.\n\n\n# Concurrent Session Limitation\n\nTo limit the number of concurrent sessions running on a worker, set the MaxConcurrentSessionExecutionSize field of worker.Options to the desired value. By default this field is set to a very large value, so there's no need to set it manually if no limitation is needed.\n\nIf a worker hits this limitation, it won't accept any new CreateSession() requests until one of the existing sessions is completed. CreateSession() will return an error if the session can't be created within CreationTimeout.\n\n\n# Recreate Session\n\nFor long-running sessions, you may want to use the ContinueAsNew feature to split the workflow into multiple runs when all activities need to be executed by the same worker. The RecreateSession() API is designed for such a use case.\n\nfunc RecreateSession(ctx Context, recreateToken []byte, sessionOptions *SessionOptions) (Context, error)\n\n\nIts usage is the same as CreateSession() except that it also takes in a recreateToken, which is needed to create the new session on the same worker as the previous one. You can get the token by calling the GetRecreateToken() method of the SessionInfo object.\n\ntoken := workflow.GetSessionInfo(sessionCtx).GetRecreateToken()\n\n
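For example, a rough sketch of a long-running session workflow that passes the token to its next run through ContinueAsNew (the workflow function and its parameter are illustrative, not part of the client API):\n\nfunc LongRunningSessionWorkflow(ctx workflow.Context, recreateToken []byte) error {\n    so := &workflow.SessionOptions{\n        CreationTimeout: time.Minute,\n        ExecutionTimeout: time.Hour,\n    }\n    var sessionCtx workflow.Context\n    var err error\n    if recreateToken == nil {\n        // First run: create a brand new session.\n        sessionCtx, err = workflow.CreateSession(ctx, so)\n    } else {\n        // Subsequent runs: recreate the session on the same worker as before.\n        sessionCtx, err = workflow.RecreateSession(ctx, recreateToken, so)\n    }\n    if err != nil {\n        return err\n    }\n    defer workflow.CompleteSession(sessionCtx)\n\n    // ... execute activities using sessionCtx ...\n\n    // Hand the token to the next run so it lands on the same worker.\n    token := workflow.GetSessionInfo(sessionCtx).GetRecreateToken()\n    return workflow.NewContinueAsNewError(ctx, LongRunningSessionWorkflow, token)\n}\n\n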
\n# Q & A\n\n\n# Is there a complete example?\n\nYes, the file processing example in the cadence-sample repo has been updated to use the session framework.\n\n\n# What happens to my activity if the worker dies?\n\nIf your activity has already been scheduled, it will be cancelled. If not, you will get a workflow.ErrSessionFailed error when you call workflow.ExecuteActivity().\n\n\n# Is the concurrent session limitation per process or per host?\n\nIt's per process, so make sure there's only one worker process running on the host if you plan to use that feature.\n\n\n# Future Work\n\n * Support automatic session re-establishing: Right now a session is considered failed if the worker process dies. However, for some use cases, you may only care whether the host is alive or not. For these use cases, the session should be automatically re-established if the process is restarted.\n\n * Support fine-grained concurrent session limitation: The current implementation assumes that all sessions are consuming the same type of resource and there's only one global limitation. Our plan is to allow you to specify what type of resource your session will consume and enforce different limitations on different types of resources.",charsets:{}}
,{title:"Distributed CRON",frontmatter:{layout:"default",title:"Distributed CRON",permalink:"/docs/go-client/distributed-cron",readingShow:"top"},regularPath:"/docs/05-go-client/16-distributed-cron.html",relativePath:"docs/05-go-client/16-distributed-cron.md",key:"v-35913a62",path:"/docs/go-client/distributed-cron/",headers:[{level:2,title:"Convert existing cron workflow",slug:"convert-existing-cron-workflow",normalizedTitle:"convert existing cron workflow",charIndex:2151},{level:2,title:"Retrieve last successful result",slug:"retrieve-last-successful-result",normalizedTitle:"retrieve last successful result",charIndex:2614}],codeSwitcherOptions:{},headersStr:"Convert existing cron workflow Retrieve last successful result",content:'# Distributed CRON\n\nIt is relatively straightforward to turn any Cadence workflow into a Cron workflow. All you need is to supply a cron schedule when starting the workflow using the CronSchedule parameter of StartWorkflowOptions.\n\nYou can also start a workflow using the Cadence CLI with an optional cron schedule via the --cron argument.\n\nFor workflows with CronSchedule:\n\n * The cron schedule is based on UTC time. For example, cron schedule "15 8 * * *" will run daily at 8:15am UTC, and "*/2 * * * 5-6" will schedule a workflow every two minutes on Fridays and Saturdays.\n * If a workflow failed and a RetryPolicy is supplied to the StartWorkflowOptions as well, the workflow will retry based on the RetryPolicy. While the workflow is retrying, the server will not schedule the next cron run.\n * The Cadence server only schedules the next cron run after the current run is completed. If the next schedule is due while a workflow is running (or retrying), then it will skip that schedule.\n * Cron workflows will not stop until they are terminated or cancelled.\n\n
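As an illustration, a sketch of starting a cron workflow through the Go client (the client construction, domain, and the ReportWorkflow function are illustrative; see the Worker Service page for building a client):\n\nimport (\n    "context"\n    "time"\n\n    "go.uber.org/cadence/client"\n)\n\nfunc startDailyReport(c client.Client) error {\n    opts := client.StartWorkflowOptions{\n        ID: "daily-report-cron",\n        TaskList: "report-tasklist",\n        ExecutionStartToCloseTimeout: time.Hour,\n        CronSchedule: "15 8 * * *", // daily at 8:15am UTC\n    }\n    // ReportWorkflow is an illustrative workflow function registered on the worker.\n    _, err := c.StartWorkflow(context.Background(), opts, ReportWorkflow)\n    return err\n}\n\n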
Cadence supports the standard cron spec:\n\n// CronSchedule - Optional cron schedule for workflow. If a cron schedule is specified, the workflow will run\n// as a cron based on the schedule. The scheduling will be based on UTC time. The schedule for next run only happen\n// after the current run is completed/failed/timeout. If a RetryPolicy is also supplied, and the workflow failed\n// or timed out, the workflow will be retried based on the retry policy. While the workflow is retrying, it won\'t\n// schedule its next run. If next schedule is due while the workflow is running (or retrying), then it will skip that\n// schedule. Cron workflow will not stop until it is terminated or cancelled (by returning cadence.CanceledError).\n// The cron spec is as following:\n// ┌───────────── minute (0 - 59)\n// │ ┌───────────── hour (0 - 23)\n// │ │ ┌───────────── day of the month (1 - 31)\n// │ │ │ ┌───────────── month (1 - 12)\n// │ │ │ │ ┌───────────── day of the week (0 - 6) (Sunday to Saturday)\n// │ │ │ │ │\n// │ │ │ │ │\n// * * * * *\nCronSchedule string\n\n\nCadence also supports more advanced cron expressions.\n\nThe crontab guru site is useful for testing your cron expressions.\n\n\n# Convert existing cron workflow\n\nBefore CronSchedule was available, the previous approach to implementing cron was to use a delay timer as the last step and then return ContinueAsNew. One problem with that implementation is that if the workflow fails or times out, the cron would stop.\n\nTo convert those workflows to make use of Cadence CronSchedule, all you need to do is remove the delay timer and return without using ContinueAsNew. Then start the workflow with the desired CronSchedule.\n\n\n# Retrieve last successful result\n\nSometimes it is useful to obtain the progress of previous successful runs. This is supported by two new APIs in the client library: HasLastCompletionResult and GetLastCompletionResult. Below is an example of how to use this in Go:\n\nfunc CronWorkflow(ctx workflow.Context) (CronResult, error) {\n    startTimestamp := time.Time{} // By default start from 0 time.\n    if workflow.HasLastCompletionResult(ctx) {\n        var progress CronResult\n        if err := workflow.GetLastCompletionResult(ctx, &progress); err == nil {\n            startTimestamp = progress.LastSyncTimestamp\n        }\n    }\n    endTimestamp := workflow.Now(ctx)\n\n    // Process work between startTimestamp (exclusive), endTimestamp (inclusive).\n    // Business logic implementation goes here.\n\n    result := CronResult{LastSyncTimestamp: endTimestamp}\n    return result, nil\n}\n\n\nNote that this works even if one of the cron schedule runs failed. The next schedule will still get the last successful result if the workflow ever successfully completed at least once. For example, for a daily cron workflow, if the first day\'s run succeeds and the second day\'s fails, then the third day\'s run will still get the result from the first day\'s run using these APIs.'
,charsets:{}},{title:"Tracing and context propagation",frontmatter:{layout:"default",title:"Tracing and context propagation",permalink:"/docs/go-client/tracing",readingShow:"top"},regularPath:"/docs/05-go-client/17-tracing.html",relativePath:"docs/05-go-client/17-tracing.md",key:"v-9d2716dc",path:"/docs/go-client/tracing/",headers:[{level:2,title:"Tracing",slug:"tracing",normalizedTitle:"tracing",charIndex:2},{level:2,title:"Context Propagation",slug:"context-propagation",normalizedTitle:"context propagation",charIndex:651},{level:3,title:"Server-Side Headers Support",slug:"server-side-headers-support",normalizedTitle:"server-side headers support",charIndex:1158},{level:3,title:"Context Propagators",slug:"context-propagators",normalizedTitle:"context propagators",charIndex:2070},{level:2,title:"Q & A",slug:"q-a",normalizedTitle:"q & a",charIndex:null},{level:3,title:"Is there a complete example?",slug:"is-there-a-complete-example",normalizedTitle:"is there a complete example?",charIndex:3015},{level:3,title:"Can I configure multiple context propagators?",slug:"can-i-configure-multiple-context-propagators",normalizedTitle:"can i configure multiple context propagators?",charIndex:3182}],codeSwitcherOptions:{},headersStr:"Tracing Context Propagation Server-Side Headers Support Context Propagators Q & A Is there a complete example? Can I configure multiple context propagators?",content:"# Tracing and context propagation\n\n\n# Tracing\n\nThe Go client provides distributed tracing support through OpenTracing. Tracing can be configured by providing an opentracing.Tracer implementation in ClientOptions and WorkerOptions during client and worker instantiation, respectively. Tracing allows you to view the call graph of a workflow along with its activities, child workflows, etc. For more details on how to configure and leverage tracing, see the OpenTracing documentation. The OpenTracing support has been validated using Jaeger, but other implementations mentioned here should also work. Tracing support utilizes the generic context propagation support provided by the client.\n\n\n# Context Propagation\n\nWe provide a standard way to propagate custom context across a workflow. ClientOptions and WorkerOptions allow configuring a context propagator. The context propagator extracts and passes on information present in the context.Context and workflow.Context objects across the workflow. Once a context propagator is configured, you should be able to access the required values in the context objects as you would normally do in Go. For a sample, the Go client implements a tracing context propagator.\n\n\n# Server-Side Headers Support\n\nOn the server side, Cadence provides a mechanism to propagate what it calls headers across different workflow transitions.\n\nstruct Header {\n    10: optional map<string, binary> fields\n}\n\n\nThe client leverages this to pass around selected context information. HeaderReader and HeaderWriter are interfaces that allow reading and writing to the Cadence server headers. The client already provides implementations for these. HeaderWriter sets a field in the header. Headers is a map, so setting a value for the same key multiple times will overwrite the previous value. HeaderReader iterates through the headers map and runs the provided handler function on each key/value pair, allowing you to deal with the fields you are interested in.\n\ntype HeaderWriter interface {\n    Set(string, []byte)\n}\n\ntype HeaderReader interface {\n    ForEachKey(handler func(string, []byte) error) error\n}\n\n\n\n# Context Propagators\n\nContext propagators require implementing the following four methods to propagate selected context across a workflow:\n\n * Inject is meant to pick out the context keys of interest from a Go context.Context object and write them into the headers using the HeaderWriter interface\n * InjectFromWorkflow is the same as above, but operates on a workflow.Context object\n * Extract reads the headers and places the information of interest back into the context.Context object\n * ExtractToWorkflow is the same as above, but operates on a workflow.Context object\n\nThe tracing context propagator shows a sample implementation of context propagation.\n\ntype ContextPropagator interface {\n    Inject(context.Context, HeaderWriter) error\n\n    Extract(context.Context, HeaderReader) (context.Context, error)\n\n    InjectFromWorkflow(Context, HeaderWriter) error\n\n    ExtractToWorkflow(Context, HeaderReader) (Context, error)\n}\n\n
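As an illustration, a minimal propagator for a single string key might look like the following sketch (the key name and struct are assumptions for illustration, not part of the client API):\n\nimport (\n    \"context\"\n\n    \"go.uber.org/cadence/workflow\"\n)\n\ntype contextKey string\n\n// customKey is an illustrative key name for the value being propagated.\nconst customKey = contextKey(\"custom-header\")\n\ntype stringPropagator struct{}\n\nfunc (s *stringPropagator) Inject(ctx context.Context, writer workflow.HeaderWriter) error {\n    if value, ok := ctx.Value(customKey).(string); ok {\n        writer.Set(string(customKey), []byte(value))\n    }\n    return nil\n}\n\nfunc (s *stringPropagator) InjectFromWorkflow(ctx workflow.Context, writer workflow.HeaderWriter) error {\n    if value, ok := ctx.Value(customKey).(string); ok {\n        writer.Set(string(customKey), []byte(value))\n    }\n    return nil\n}\n\nfunc (s *stringPropagator) Extract(ctx context.Context, reader workflow.HeaderReader) (context.Context, error) {\n    err := reader.ForEachKey(func(key string, value []byte) error {\n        if key == string(customKey) {\n            ctx = context.WithValue(ctx, customKey, string(value))\n        }\n        return nil\n    })\n    return ctx, err\n}\n\nfunc (s *stringPropagator) ExtractToWorkflow(ctx workflow.Context, reader workflow.HeaderReader) (workflow.Context, error) {\n    err := reader.ForEachKey(func(key string, value []byte) error {\n        if key == string(customKey) {\n            ctx = workflow.WithValue(ctx, customKey, string(value))\n        }\n        return nil\n    })\n    return ctx, err\n}\n\nThe propagator can then be supplied through the ContextPropagators field of client.Options and worker.Options.\n\n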
\n# Q & A\n\n\n# Is there a complete example?\n\nThe context propagation sample configures a custom context propagator and shows context propagation of custom keys across a workflow and an activity.\n\n\n# Can I configure multiple context propagators?\n\nYes, we recommend that you configure multiple context propagators, with each propagator meant to propagate a particular type of context.",charsets:{}}
,{title:"Workflow Replay and Shadowing",frontmatter:{layout:"default",title:"Workflow Replay and Shadowing",permalink:"/docs/go-client/workflow-replay-shadowing",readingShow:"top"},regularPath:"/docs/05-go-client/18-workflow-replay-shadowing.html",relativePath:"docs/05-go-client/18-workflow-replay-shadowing.md",key:"v-d043b980",path:"/docs/go-client/workflow-replay-shadowing/",headers:[{level:2,title:"Workflow Replayer",slug:"workflow-replayer",normalizedTitle:"workflow replayer",charIndex:469},{level:3,title:"Write a Replay Test",slug:"write-a-replay-test",normalizedTitle:"write a replay test",charIndex:824},{level:3,title:"Sample Replay Test",slug:"sample-replay-test",normalizedTitle:"sample replay test",charIndex:3778},{level:2,title:"Workflow Shadower",slug:"workflow-shadower",normalizedTitle:"workflow shadower",charIndex:491},{level:3,title:"Shadow Options",slug:"shadow-options",normalizedTitle:"shadow options",charIndex:4923},{level:3,title:"Local Shadowing Test",slug:"local-shadowing-test",normalizedTitle:"local shadowing test",charIndex:6606},{level:3,title:"Shadowing Worker",slug:"shadowing-worker",normalizedTitle:"shadowing worker",charIndex:7673}],codeSwitcherOptions:{},headersStr:"Workflow Replayer Write a Replay Test Sample Replay Test Workflow Shadower Shadow Options Local Shadowing Test Shadowing Worker",content:"# Workflow Replay and Shadowing\n\nIn the Versioning section, we mentioned that incompatible changes to workflow definition code could cause non-deterministic issues when processing workflow tasks if versioning is not done correctly. However, it may be hard for you to tell whether a particular change is incompatible or not and whether versioning logic is needed. To help you identify incompatible changes and catch them before production traffic is impacted, we implemented Workflow Replayer and Workflow Shadower.\n\n
\n# Workflow Replayer\n\nWorkflow Replayer is a testing component for replaying existing workflow histories against a workflow definition. The replaying logic is the same as the one used for processing workflow tasks, so if there are any incompatible changes in the workflow definition, the replay test will fail.\n\n\n# Write a Replay Test\n\n# Step 1: Create workflow replayer\n\nCreate a workflow Replayer by:\n\nreplayer := worker.NewWorkflowReplayer()\n\n\nor, if a custom data converter, context propagator, interceptor, etc. is used in your workflow:\n\noptions := worker.ReplayOptions{\n    DataConverter: myDataConverter,\n    ContextPropagators: []workflow.ContextPropagator{\n        myContextPropagator,\n    },\n    WorkflowInterceptorChainFactories: []interceptors.WorkflowInterceptorFactory{\n        myInterceptorFactory,\n    },\n    Tracer: myTracer,\n}\nreplayer := worker.NewWorkflowReplayerWithOptions(options)\n\n\n# Step 2: Register workflow definition\n\nNext, register your workflow definitions as you normally do. Make sure workflows are registered the same way as they were when running and generating histories; otherwise the replay will not be able to find the corresponding definition.\n\nreplayer.RegisterWorkflow(myWorkflowFunc1)\nreplayer.RegisterWorkflow(myWorkflowFunc2, workflow.RegisterOptions{\n\tName: workflowName,\n})\n\n\n# Step 3: Prepare workflow histories\n\nReplayer can read workflow history from a local JSON file or fetch it directly from the Cadence server. If you would like to use the first method, you can use the following CLI command; otherwise you can skip to the next step.\n\ncadence --do <domain> workflow show --wid <workflowID> --rid <runID> --of <outputFileName>\n\n\nThe dumped workflow history will be stored in JSON format in the file at the path you specified.\n\n# Step 4: Call the replay method\n\nOnce you have the workflow history or have the connection to the Cadence server for fetching history, call one of the four replay methods to start the replay test.\n\n// if workflow history has been loaded into memory\nerr := replayer.ReplayWorkflowHistory(logger, history)\n\n// if workflow history is stored in a json file\nerr = replayer.ReplayWorkflowHistoryFromJSONFile(logger, jsonFileName)\n\n// if workflow history is stored in a json file and you only want to replay part of it\n// NOTE: lastEventID can't be set arbitrarily. It must be the end of a history events batch;\n// when in doubt, set it to the eventID of a decisionTaskStarted event.\nerr = replayer.ReplayPartialWorkflowHistoryFromJSONFile(logger, jsonFileName, lastEventID)\n\n// if you want to fetch workflow history directly from cadence server\n// please check the Worker Service page for how to create a cadence service client\nerr = replayer.ReplayWorkflowExecution(ctx, cadenceServiceClient, logger, domain, execution)\n\n\n# Step 5: Check returned error\n\nIf an error is returned from the replay method, it means there's an incompatible change in the workflow definition, and the error message will contain more information regarding where the non-deterministic error happens.\n\nNote: currently an error will be returned if there are fewer than 3 events in the history. This is because the first 3 events in the history have nothing to do with the workflow code, so the Replayer can't tell whether there's an incompatible change or not.\n\n
\n# Sample Replay Test\n\nThis sample is also available in our samples repo here.\n\nfunc TestReplayWorkflowHistoryFromFile(t *testing.T) {\n\treplayer := worker.NewWorkflowReplayer()\n\treplayer.RegisterWorkflow(helloWorldWorkflow)\n\terr := replayer.ReplayWorkflowHistoryFromJSONFile(zaptest.NewLogger(t), \"helloworld.json\")\n\trequire.NoError(t, err)\n}\n\n\n\n# Workflow Shadower\n\nWorkflow Replayer works well when verifying compatibility against a small number of workflow histories. If there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. Directly fetching histories from the Cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.\n\nWorkflow Shadower is built on top of Workflow Replayer to address this problem. The basic idea of shadowing is: scan workflows based on the filters you defined, fetch the history for each workflow in the scan result from the Cadence server, and run the replay test. It can be run either as a test for local development purposes or as a workflow in your worker to continuously replay production workflows.\n\n\n# Shadow Options\n\nComplete documentation on shadow options, which includes default values, accepted values, etc., can be found here. The following sections are just a brief description of each option.\n\n# Scan Filters\n\n * WorkflowQuery: If you are familiar with our advanced visibility query syntax, you can specify a query directly. If specified, all other scan filters must be left empty.\n * WorkflowTypes: A list of workflow type names.\n * WorkflowStatus: A list of workflow statuses.\n * WorkflowStartTimeFilter: Min and max timestamps for workflow start time.\n * SamplingRate: Samples workflows from the scan result before executing the replay test.\n\n# Shadow Exit Condition\n\n * ExpirationInterval: Shadowing will exit when the specified interval has passed.\n * ShadowCount: Shadowing will exit after this number of workflows has been replayed. Note: a replay may be skipped due to errors such as being unable to fetch history or the history being too short. Skipped workflows are not counted toward ShadowCount.\n\n# Shadow Mode\n\n * Normal: Shadowing will complete after all workflows matching WorkflowQuery (after sampling) have been replayed or when the exit condition is met.\n * Continuous: A new round of shadowing will be started after all workflows matching WorkflowQuery have been replayed. There will be a 5 minute wait period between each round, and currently this wait period is not configurable. Shadowing will complete only when ExitCondition is met. ExitCondition must be specified when using this mode.\n\n# Shadow Concurrency\n\n * Concurrency: Workflow replay concurrency. If not specified, it defaults to 1. For local shadowing, an error will be returned if a value higher than 1 is specified.\n\n\n# Local Shadowing Test\n\nA local shadowing test is similar to the replay test. First create a workflow shadower with optional shadow and replay options, then register the workflows that need to be shadowed. Finally, call the Run method to start the shadowing. The method returns when shadowing has finished or a non-deterministic error is found.\n\nHere's a simple example. The example is also available here.\n\nfunc TestShadowWorkflow(t *testing.T) {\n\toptions := worker.ShadowOptions{\n\t\tWorkflowStartTimeFilter: worker.TimeFilter{\n\t\t\tMinTimestamp: time.Now().Add(-time.Hour),\n\t\t},\n\t\tExitCondition: worker.ShadowExitCondition{\n\t\t\tShadowCount: 10,\n\t\t},\n\t}\n\n // please check the Worker Service page for how to create a cadence service client\n\tservice := buildCadenceClient()\n\tshadower, err := worker.NewWorkflowShadower(service, \"samples-domain\", options, worker.ReplayOptions{}, zaptest.NewLogger(t))\n\tassert.NoError(t, err)\n\n\tshadower.RegisterWorkflowWithOptions(helloWorldWorkflow, workflow.RegisterOptions{Name: \"helloWorld\"})\n\tassert.NoError(t, shadower.Run())\n}\n\n\n\n# Shadowing Worker\n\nNOTE:\n\n * All shadow workflows are running in one Cadence system domain, and right now, every user domain can only have one shadow workflow at a time.\n * The Cadence server used for scanning and getting workflow history will also be the Cadence server for running your shadow workflow. Currently, there's no way to specify different Cadence servers for hosting the shadowing workflow and scanning/fetching workflows.\n\nYour worker can also be configured to run in shadow mode to run shadow tests as a workflow. This is useful if there are a number of workflows that need to be replayed. Using a workflow can make sure the shadowing won't accidentally fail in the middle, and the replay load can be distributed by deploying more shadow mode workers. It can also be incorporated into your deployment process to make sure there are no failed replay checks before deploying your change to production workers.\n\nWhen running in shadow mode, the normal decision, activity, and session workers will be disabled so that the worker won't update any production workflows. A special shadow activity worker will be started to execute activities for scanning and replaying workflows. The actual shadow workflow logic is controlled by the Cadence server, and your worker is only responsible for scanning and replaying workflows.\n\nReplay succeeded, skipped, and failed metrics will be emitted by your worker when executing the shadow workflow, and you can monitor those metrics to see if there are any incompatible changes.\n\nTo enable the shadow mode, the only change needed is setting the EnableShadowWorker field in worker.Options to true, and then specifying the ShadowOptions, as shown in the sketch below.\n\n
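For example, a sketch of a worker configured for shadow mode (the service client construction, domain, task list, and workflow type are illustrative):\n\nfunc startShadowWorker(service workflowserviceclient.Interface, logger *zap.Logger) error {\n    workerOptions := worker.Options{\n        Logger: logger,\n        // Disables the normal decision/activity/session workers and starts\n        // the shadow activity worker instead.\n        EnableShadowWorker: true,\n        ShadowOptions: worker.ShadowOptions{\n            WorkflowTypes: []string{\"helloWorld\"},\n            ExitCondition: worker.ShadowExitCondition{\n                ExpirationInterval: time.Hour,\n            },\n        },\n    }\n    w := worker.New(service, \"samples-domain\", \"shadow-tasklist\", workerOptions)\n    return w.Start()\n}\n\n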
Registered workflows will be forwarded to the underlying WorkflowReplayer. The DataConverter, WorkflowInterceptorChainFactories, ContextPropagators, and Tracer specified in worker.Options will also be used as ReplayOptions. Since all shadow workflows are running in one system domain, to avoid conflicts, the actual task list name used will be domain-tasklist.\n\nA sample setup can be found here.",charsets:{}}
,{title:"Workflow Non-deterministic errors",frontmatter:{layout:"default",title:"Workflow Non-deterministic errors",permalink:"/docs/go-client/workflow-non-deterministic-errors",readingShow:"top"},regularPath:"/docs/05-go-client/19-workflow-non-deterministic-error.html",relativePath:"docs/05-go-client/19-workflow-non-deterministic-error.md",key:"v-5df8103c",path:"/docs/go-client/workflow-non-deterministic-errors/",headers:[{level:2,title:"Root cause of non-deterministic errors",slug:"root-cause-of-non-deterministic-errors",normalizedTitle:"root cause of non-deterministic errors",charIndex:40},{level:2,title:"Decision tasks of workflow",slug:"decision-tasks-of-workflow",normalizedTitle:"decision tasks of workflow",charIndex:1533},{level:2,title:"Categories of non-deterministic errors",slug:"categories-of-non-deterministic-errors",normalizedTitle:"categories of non-deterministic errors",charIndex:5698},{level:3,title:"1. Missing decisions",slug:"_1-missing-decisions",normalizedTitle:"1. missing decisions",charIndex:6002},{level:3,title:"2. Extra decisions",slug:"_2-extra-decisions",normalizedTitle:"2. extra decisions",charIndex:6618},{level:3,title:"3. Mismatched decisions",slug:"_3-mismatched-decisions",normalizedTitle:"3. mismatched decisions",charIndex:7562},{level:3,title:"4. Decision state machine panic",slug:"_4-decision-state-machine-panic",normalizedTitle:"4. decision state machine panic",charIndex:8294},{level:2,title:"Common Q&A",slug:"common-q-a",normalizedTitle:"common q&a",charIndex:null},{level:3,title:"I want to change my workflow implementation. What code changes may produce non-deterministic errors?",slug:"i-want-to-change-my-workflow-implementation-what-code-changes-may-produce-non-deterministic-errors",normalizedTitle:"i want to change my workflow implementation. what code changes may produce non-deterministic errors?",charIndex:8843},{level:3,title:"What are some changes that will NOT trigger non-deterministic errors?",slug:"what-are-some-changes-that-will-not-trigger-non-deterministic-errors",normalizedTitle:"what are some changes that will not trigger non-deterministic errors?",charIndex:9548},{level:3,title:"I want to check if my code change will produce non-deterministic errors, how can I debug?",slug:"i-want-to-check-if-my-code-change-will-produce-non-deterministic-errors-how-can-i-debug",normalizedTitle:"i want to check if my code change will produce non-deterministic errors, how can i debug?",charIndex:10476}],codeSwitcherOptions:{},headersStr:"Root cause of non-deterministic errors Decision tasks of workflow Categories of non-deterministic errors 1. Missing decisions 2. Extra decisions 3. Mismatched decisions 4. Decision state machine panic Common Q&A I want to change my workflow implementation. What code changes may produce non-deterministic errors? What are some changes that will NOT trigger non-deterministic errors? 
I want to check if my code change will produce non-deterministic errors, how can I debug?",content:'# Workflow Non-deterministic errors\n\n\n# Root cause of non-deterministic errors\n\nCadence workflows are designed as long-running operations, and therefore the workflow code you write must be deterministic so that no matter how many times it is executed, it always produces the same result.\n\nIn a production environment, your workflow code runs on a distributed system orchestrated by clusters of machines. However, machine failures are inevitable and can happen at any time to your workflow host. If you have a workflow running for a long period of time, maybe months or even years, and it fails due to the loss of a host, it will be resumed on another machine and continue the rest of its execution.\n\nConsider the following diagram where Workflow A is running on Host A but suddenly it crashes.\n\n\n\nWorkflow A then will be picked up by Host B and continues its execution. This process is called change of workflow ownership. However, after Host B gains ownership of Workflow A, it does not have any information about its historical executions. For example, Workflow A may have executed many activities before the failure. Host B needs to redo all of its history up to the moment of failure. The process of reconstructing the history of a workflow is called history replay.\n\nIn general, any error that occurs during the replay process is called a non-deterministic error. We will explore the different types of non-deterministic errors in the sections below, but first let\'s try to understand how Cadence is able to perform the replay of a workflow in case of failure.\n\n\n# Decision tasks of workflow\n\nIn the previous section, we learned that Cadence is able to replay workflow histories in case of failure. Now we will learn exactly how Cadence keeps track of histories and how they get replayed when necessary.\n\nWorkflow histories are built based on event-sourcing, and each history event is persisted in Cadence storage. In Cadence, we call these history events decision tasks, the foundation of history replay. Most decision tasks have three statuses - Scheduled, Started, Completed - and we will go over the decision tasks produced by each Cadence operation in the section below.\n\nWhen changing the workflow ownership of a host and replaying a workflow, the decision tasks are downloaded from the database and persisted in memory. Then, during the workflow replaying process, if Cadence finds that a decision task already exists for a particular step, it will immediately return the value of the decision task instead of rerunning the whole workflow logic. Let\'s take a look at the following simple workflow implementation and explicitly list all decision tasks produced by this workflow.\n\nfunc SimpleWorkflow(ctx workflow.Context) error {\n\tao := workflow.ActivityOptions{\n\t\t...\n\t}\n\tctx = workflow.WithActivityOptions(ctx, ao)\n\n\tvar a int\n\terr := workflow.ExecuteActivity(ctx, ActivityA).Get(ctx, &a)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tworkflow.Sleep(ctx, time.Minute)\n\n\terr = workflow.ExecuteActivity(ctx, ActivityB, a).Get(ctx, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tworkflow.Sleep(ctx, time.Hour)\n\treturn nil\n}\n\n\nIn this workflow, when it starts, it first executes ActivityA and then assigns the result to an integer. It sleeps for one minute and then uses the integer as an input argument to execute ActivityB. Finally it sleeps for one hour and completes.\n\nThe following table lists the decision task stack produced by this workflow. 
\n\n\n# Decision tasks of workflow\n\nIn the previous section, we learned that Cadence is able to replay workflow histories in case of failure. Now we will learn exactly how Cadence keeps track of histories and how they get replayed when necessary.\n\nWorkflow histories are built on event-sourcing, and each history event is persisted in Cadence storage. In Cadence, we call these history events decision tasks; they are the foundation of history replay. Most decision tasks have three statuses - Scheduled, Started, and Completed - and we will go over the decision tasks produced by each Cadence operation in the sections below.\n\nWhen workflow ownership changes host and the workflow is replayed, its decision tasks are downloaded from the database and kept in memory. Then, during the replay process, if Cadence finds that a decision task already exists for a particular step, it immediately returns the value of that decision task instead of rerunning the whole workflow logic. Let\'s take a look at the following simple workflow implementation and explicitly list all decision tasks produced by this workflow.\n\nfunc SimpleWorkflow(ctx workflow.Context) error {\n\tao := workflow.ActivityOptions{\n\t\t...\n\t}\n\tctx = workflow.WithActivityOptions(ctx, ao)\n\n\tvar a int\n\terr := workflow.ExecuteActivity(ctx, ActivityA).Get(ctx, &a)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tworkflow.Sleep(ctx, time.Minute)\n\n\terr = workflow.ExecuteActivity(ctx, ActivityB, a).Get(ctx, nil)\n\tif err != nil {\n\t\treturn err\n\t}\n\n\tworkflow.Sleep(ctx, time.Hour)\n\treturn nil\n}\n\n\nWhen this workflow starts, it first executes ActivityA and assigns the result to an integer. It sleeps for one minute and then uses the integer as an input argument to execute ActivityB. Finally, it sleeps for one hour and completes.\n\nThe following table lists the decision task stack produced by this workflow. It may look overwhelming at first, but if you associate each decision task with its corresponding Cadence operation, it becomes self-explanatory.\n\nID DECISION TASK TYPE EXPLANATION\n1 WorkflowStarted the recorded StartWorkflow call\'s data, which usually schedules a new decision task immediately\n2 DecisionTaskScheduled workflow worker polling for work\n3 DecisionTaskStarted worker gets the type SimpleWorkflow, looks up registered funcs, deserializes input, calls it\n4 DecisionTaskCompleted worker finishes\n5 ActivityTaskScheduled activity available for a worker\n6 ActivityTaskStarted activity worker polls, gets type ActivityA, and does the job\n7 ActivityTaskCompleted activity work completed with the result of var a\n8 DecisionTaskScheduled triggered by ActivityCompleted; server schedules the next task\n9 DecisionTaskStarted \n10 DecisionTaskCompleted \n11 TimerStarted decision scheduled a timer for 1 minute\n12 TimerFired fired after 1 minute\n13 DecisionTaskScheduled triggered by TimerFired\n14 DecisionTaskStarted \n15 DecisionTaskCompleted \n16 ActivityTaskScheduled ActivityB scheduled by decision with param a\n17 ActivityTaskStarted started by worker\n18 ActivityTaskCompleted completed with nil\n19 DecisionTaskScheduled triggered by ActivityCompleted\n20 DecisionTaskStarted \n21 DecisionTaskCompleted \n22 TimerStarted decision scheduled a timer for 1 hour\n23 TimerFired fired after 1 hour\n24 DecisionTaskScheduled triggered by TimerFired\n25 DecisionTaskStarted \n26 DecisionTaskCompleted \n27 WorkflowCompleted completed by decision (the function call returned)\n\nAs you can observe, this stack has a strict order. The point of the table above is that whenever the code you write involves orchestration by Cadence - by either your worker or the Cadence server - decision tasks are produced. When your workflow gets replayed, it will strive to reconstruct this stack. Therefore, code changes to your workflow must not disturb these decision tasks; changes that do will trigger non-deterministic errors. Let\'s now explore the different categories of non-deterministic errors and their root causes.\n\n\n# Categories of non-deterministic errors\n\nProgrammatically, Cadence surfaces 4 categories of non-deterministic errors. With the understanding of decision tasks from the previous section, combined with the error messages, you should be able to pinpoint which code changes may yield non-deterministic errors.\n\n\n# 1. Missing decisions\n\nfmt.Errorf("nondeterministic workflow: missing replay decision for %s", util.HistoryEventToString(e))\n\n\nFor source code click here\n\nThis means that after replaying the code, fewer decisions are scheduled than there are history events. Using the previous history as an example, when the workflow is waiting at the one-hour timer (event ID 22), if we delete the line:\n\nworkflow.Sleep(ctx, time.Hour)\n\n\nand restart the worker, it will run into this error, because the history contains a timer event that is supposed to fire in one hour, but during replay there is no logic to schedule that timer.\n\n\n# 2. Extra decisions\n\nfmt.Errorf("nondeterministic workflow: extra replay decision for %s", util.DecisionToString(d))\n\n\nFor source code click here\n\nThis is basically the opposite of the previous case: during replay, Cadence generates more decisions than there are in the history events. 
Using the previous history as an example, when the workflow is waiting at the one-hour timer (event ID 22), if we change the line:\n\nerr = workflow.ExecuteActivity(ctx, ActivityB, a).Get(ctx, nil)\n\n\nto\n\nfb := workflow.ExecuteActivity(ctx, ActivityB, a)\nfc := workflow.ExecuteActivity(ctx, ActivityC, a)\nerr = fb.Get(ctx, nil)\nif err != nil {\n\treturn err\n}\nerr = fc.Get(ctx, nil)\nif err != nil {\n\treturn err\n}\n\n\nand restart the worker, it will run into this error, because the history shows that only ActivityB was scheduled after the one-minute timer, while during replay two activities are scheduled in one decision (in parallel).\n\n\n# 3. Mismatched decisions\n\nfmt.Errorf("nondeterministic workflow: history event is %s, replay decision is %s", util.HistoryEventToString(e), util.DecisionToString(d))\n\n\nFor source code click here\n\nThis means that after replaying the code, the decision scheduled is different from the one in the history. Using the previous history as an example, when the workflow is waiting at the one-hour timer (event ID 22), if we change the line:\n\nerr = workflow.ExecuteActivity(ctx, ActivityB, a).Get(ctx, nil)\n\n\nto\n\nerr = workflow.ExecuteActivity(ctx, ActivityC, a).Get(ctx, nil)\n\n\nand restart the worker, it will run into this error, because the history shows ActivityB scheduled with input a, but during replay ActivityC is scheduled instead.\n\n\n# 4. Decision state machine panic\n\nfmt.Sprintf("unknown decision %v, possible causes are nondeterministic workflow definition code"+" or incompatible change in the workflow definition", id)\n\n\nFor source code click here\n\nThis usually means the workflow history is corrupted due to some bug. For example, the same activity can be scheduled multiple times and is differentiated by its ActivityID, so ActivityIDs for different activities are supposed to be unique in a workflow history. If we have an ActivityID collision, replay will run into this error.
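\n\nAs a sketch of how such a collision could arise in the Go client (illustrative only; it reuses ActivityA, ActivityB, and a from the example above), the optional ActivityID field of ActivityOptions is pinned to a fixed value and two activities are then started in parallel:\n\nao := workflow.ActivityOptions{\n\tActivityID: "fixed-id", // the same ID is reused by both activities below\n\t...\n}\nctx = workflow.WithActivityOptions(ctx, ao)\n\n// Both activities are scheduled by the same decision with the same\n// ActivityID, so the decision state machine cannot tell them apart.\nfa := workflow.ExecuteActivity(ctx, ActivityA)\nfb := workflow.ExecuteActivity(ctx, ActivityB, a)\n_ = fa.Get(ctx, nil)\n_ = fb.Get(ctx, nil)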
\n\n\n# Common Q&A\n\n\n# I want to change my workflow implementation. What code changes may produce non-deterministic errors?\n\nAs we discussed in the previous sections, if your changes alter decision tasks, they will probably lead to non-deterministic errors. These are some common changes, categorized by the four error types above (a sketch contrasting one of them follows this list):\n\n 1. Changing the order of executing Cadence-defined operations, such as activities, timers, child workflows, signals, or cancelRequest.\n 2. Changing the duration of a timer.\n 3. Using a built-in goroutine of Go instead of workflow.Go.\n 4. Using a built-in channel of Go instead of workflow.Channel.\n 5. Using the built-in sleep function instead of workflow.Sleep.
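\n\nFor example, change #3 might look like the following (a sketch with a hypothetical processResult helper):\n\n// Deterministic: the coroutine is managed and replayed by Cadence.\nworkflow.Go(ctx, func(ctx workflow.Context) {\n\tprocessResult(a) // hypothetical helper\n})\n\n// Non-deterministic: a native goroutine is invisible to the replayer,\n// and its timing differs on every run.\ngo processResult(a)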
\n\n\n# What are some changes that will NOT trigger non-deterministic errors?\n\nCode changes that are free of non-deterministic errors normally do not involve decision tasks in Cadence.\n\n 1. Activity input and output changes do not directly cause non-deterministic errors because the contents are not checked. However, such changes may produce serialization errors depending on your data converter implementation (type or number-of-argument changes are particularly prone to problems, so we recommend that you always use a single struct; see the sketch after this list). Cadence uses json.Marshal and json.Unmarshal (with Decoder.UseNumber()) by default.\n 2. Code changes that do not modify history events are safe to check in, for example, logging or metrics implementations.\n 3. Changes of retry policies, as these are not compared. Adding or removing retry policies is also safe. Changes will only take effect on new calls, however, not on ones that have already been scheduled.
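\n\nA minimal sketch of the single-struct recommendation from item 1 (the names are illustrative):\n\n// All inputs live in one struct, so optional fields can be added later\n// without changing the activity signature or the number of arguments.\ntype SendEmailInput struct {\n\tTo      string\n\tSubject string\n}\n\nfunc SendEmailActivity(ctx context.Context, input SendEmailInput) error {\n\t// hypothetical activity body\n\treturn nil\n}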
\n\n\n# I want to check if my code change will produce non-deterministic errors, how can I debug?\n\nCadence provides the replayer test, which functions as a unit test on your local machine, replaying your workflow history against your potential code change. If you introduce a non-deterministic change and your history triggers it, the test should fail. Check out this page for more details.
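\n\nA minimal sketch of such a test, assuming a history exported to a hypothetical history.json file and the go.uber.org/cadence/worker, go.uber.org/zap/zaptest, and testify packages:\n\nfunc TestSimpleWorkflowReplay(t *testing.T) {\n\treplayer := worker.NewWorkflowReplayer()\n\treplayer.RegisterWorkflow(SimpleWorkflow)\n\n\t// Fails if the registered code can no longer replay the recorded history.\n\terr := replayer.ReplayWorkflowHistoryFromJSONFile(zaptest.NewLogger(t), "history.json")\n\trequire.NoError(t, err)\n}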
',charsets:{}},{title:"Introduction",frontmatter:{layout:"default",title:"Introduction",permalink:"/docs/go-client",readingShow:"top"},regularPath:"/docs/05-go-client/",relativePath:"docs/05-go-client/index.md",key:"v-740be4db",path:"/docs/go-client/",headers:[{level:2,title:"Overview",slug:"overview",normalizedTitle:"overview",charIndex:16},{level:2,title:"Links",slug:"links",normalizedTitle:"links",charIndex:712}],codeSwitcherOptions:{},headersStr:"Overview Links",content:"# Go client\n\n\n# Overview\n\nThe Go client attempts to follow Go language conventions. The conversion of a Go program to a fault-oblivious function is expected to be fairly mechanical.\n\nCadence requires the code to be deterministic. It supports deterministic execution of multithreaded code and of constructs like select that are non-deterministic by Go design. The Cadence solution is to provide corresponding constructs in the form of interfaces that have similar capabilities but support deterministic execution.\n\nFor example, instead of native Go channels, code must use the workflow.Channel interface. Instead of select, the workflow.Selector interface must be used.
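\n\nA minimal sketch of these replacements (assuming the usual workflow and time imports; illustrative only):\n\nfunc SelectorExample(ctx workflow.Context) error {\n\t// workflow.Channel and workflow.Go replace a native channel and the go statement.\n\tch := workflow.NewChannel(ctx)\n\tworkflow.Go(ctx, func(ctx workflow.Context) {\n\t\tch.Send(ctx, 1)\n\t})\n\n\t// workflow.Selector replaces a native select statement.\n\ttimer := workflow.NewTimer(ctx, time.Minute)\n\ts := workflow.NewSelector(ctx)\n\ts.AddReceive(ch, func(c workflow.Channel, more bool) {\n\t\tvar v int\n\t\tc.Receive(ctx, &v)\n\t})\n\ts.AddFuture(timer, func(f workflow.Future) {\n\t\t_ = f.Get(ctx, nil)\n\t})\n\ts.Select(ctx) // blocks deterministically until one branch is ready\n\treturn nil\n}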
\n\nFor more information, see Creating Workflows.\n\n\n# Links\n\n * GitHub project: https://github.com/uber-go/cadence-client\n * Samples: https://github.com/uber-common/cadence-samples\n * GoDoc documentation: https://godoc.org/go.uber.org/cadence",charsets:{}},{title:"Cluster Configuration",frontmatter:{layout:"default",title:"Cluster Configuration",permalink:"/docs/operation-guide/setup",readingShow:"top"},regularPath:"/docs/07-operation-guide/01-setup.html",relativePath:"docs/07-operation-guide/01-setup.md",key:"v-6be5daf6",path:"/docs/operation-guide/setup/",headers:[{level:2,title:"Static configuration",slug:"static-configuration",normalizedTitle:"static configuration",charIndex:818},{level:3,title:"Configuration Directory and Files",slug:"configuration-directory-and-files",normalizedTitle:"configuration directory and files",charIndex:843},{level:3,title:"Understand the basic static configuration",slug:"understand-the-basic-static-configuration",normalizedTitle:"understand the basic static configuration",charIndex:2745},{level:3,title:"The full list of static configuration",slug:"the-full-list-of-static-configuration",normalizedTitle:"the full list of static configuration",charIndex:17568},{level:2,title:"Dynamic Configuration",slug:"dynamic-configuration",normalizedTitle:"dynamic configuration",charIndex:234},{level:3,title:"How to update Dynamic Configuration",slug:"how-to-update-dynamic-configuration",normalizedTitle:"how to update dynamic configuration",charIndex:21945},{level:2,title:"Other Advanced Features",slug:"other-advanced-features",normalizedTitle:"other advanced features",charIndex:25391},{level:2,title:"Deployment & Release",slug:"deployment-release",normalizedTitle:"deployment & release",charIndex:null},{level:2,title:"Stress/Bench Test a cluster",slug:"stress-bench-test-a-cluster",normalizedTitle:"stress/bench test a cluster",charIndex:26347}],codeSwitcherOptions:{},headersStr:"Static configuration Configuration Directory and Files Understand the basic static configuration The full list of static configuration Dynamic Configuration How to update Dynamic Configuration Other Advanced Features Deployment & Release Stress/Bench Test a cluster",content:'# Cluster Configuration\n\nThis section will help you understand what you need to set up a Cadence cluster.\n\nYou should understand some basic static configuration of a Cadence cluster.\n\nThere are also many other configurations, called "Dynamic Configuration", for fine-tuning the cluster. The default values are good to go for small clusters.\n\nCadence’s minimum dependency is a database (Cassandra or SQL-based, like MySQL/Postgres). Cadence uses it for persistence. All instances of Cadence clusters are stateless.\n\nFor production you also need a metrics server (Prometheus/Statsd/M3/etc).\n\nFor advanced features Cadence depends on other systems, like Elasticsearch/OpenSearch+Kafka if you need the advanced visibility feature to search workflows. Cadence depends on a blob store like S3 if you need to enable the archival feature.\n\n\n# Static configuration\n\n\n# Configuration Directory and Files\n\nThe default directory for configuration files is named config/. 
This directory contains various configuration files, but not all files will necessarily be used in every scenario.\n\n# Combining Configuration Files\n\n * Base Configuration: The base.yaml file is always loaded first, providing a common configuration that applies to all environments.\n * Runtime Environment File: The second file to be loaded is specific to the runtime environment. The environment name can be specified through the $CADENCE_ENVIRONMENT environment variable or passed as a command-line argument. If neither option is specified, development.yaml is used by default.\n * Availability Zone File: If an availability zone is specified (either through the $CADENCE_AVAILABILITY_ZONE environment variable or as a command-line argument), a file named after the zone will be merged. For example, if you specify "az1" as the zone, production_az1.yaml will be used as well.\n\nTo merge the base.yaml, production.yaml, and production_az1.yaml files, you need to specify "production" as the runtime environment and "az1" as the zone.\n\n// base.yaml -> production.yaml -> production_az1.yaml = final configuration\n\n\n# Using Environment Variables\n\nConfiguration values can be provided using environment variables with a specific syntax. $VAR: this notation will be replaced with the value of the specified environment variable. If the environment variable is not set, the value will be left blank. You can declare a default value using the syntax {$VAR:default}, which means that if the environment variable VAR is not set, the default value will be used instead. For example, bindOnIP: {$IP:127.0.0.1} resolves to the value of the IP environment variable when it is set, and to 127.0.0.1 otherwise.\n\nNote: If you want to include the $ symbol literally in your configuration file (without interpreting it as an environment variable substitution), escape it by using $$. This will prevent it from being replaced by an environment variable value.\n\n\n# Understand the basic static configuration\n\nThere are quite a few configs in Cadence. Here are the most basic configurations that you should understand.\n\nnumHistoryShards\nExplanation: This is the most important one in the Cadence config. It will be a fixed number in the cluster forever. The only way to change it is to migrate to another cluster; refer to the Migrate cluster section. Some facts about it:\n 1. Each workflow will be mapped to a single shard. Within a shard, all the workflow creations/updates are serialized.\n 2. Each shard will be assigned to only one History node to own the shard, using a consistent hashing ring. Each shard consumes a small amount of memory/CPU to do background processing, so a single History node cannot own too many shards. You may need to figure out a good number range based on your instance size (memory/CPU).\n 3. Also, you cannot add an infinite number of nodes to a cluster because this config is fixed. When the number of History nodes is close to or equal to numHistoryShards, some History nodes will have no shards assigned to them, which wastes resources.\nBased on the above, you do not want a number of shards so small that it limits the maximum size of your cluster, nor one so big that it requires a quite large initial cluster size. Typically a production cluster starts with a smaller number of nodes/hosts and more are added over time. But to keep high availability, it is recommended to use at least 4 nodes for each service (Frontend/History/Matching) at the beginning.\nRecommended value: 1K~16K, depending on the size range of the cluster you expect to run and the instance size. Typically 2K for SQL-based persistence, and 8K for Cassandra-based.
\n\nringpop\nExplanation: This is the config that lets all nodes of all services connect to each other. ALL the bootstrap nodes MUST be reachable by ringpop when a service is starting up, within MaxJoinDuration (defaultMaxJoinDuration is 2 minutes). It is not required that the bootstrap nodes be Frontend/History or Matching; in fact, they can run none of them as long as they run the Ringpop protocol.\nRecommended value: For dns mode, the DNS of the Frontend service. For hosts or hostfile mode, a list of Frontend service node addresses. Make sure all the bootstrap nodes are reachable at startup.\n\npublicClient\nExplanation: The Cadence Frontend service addresses that internal Cadence systems (like system workflows) need to talk to. After connecting, all nodes in Ringpop form a ring with identifiers of what service they serve. Ideally Cadence should be able to get the Frontend address from there, but Ringpop doesn’t expose this API yet.\nRecommended value: The DNS of the Frontend service, so that requests will be distributed to all Frontend nodes. Using localhost+port or a local container IP address+port will not work if the IP/container is not running the Frontend service.\n\nservices.NAME.rpc\nExplanation: Configuration of how to listen to network ports and serve traffic. bindOnLocalHost:true will bind on 127.0.0.1; it’s mostly for local development. In production you usually have to specify the IP that containers will use with bindOnIP. NAME is what matters for the “--services” option in the server startup command.\nRecommended value: NAME: use as recommended in development.yaml. bindOnIP: an IP address that the container will serve the traffic with.\n\nservices.NAME.pprof\nExplanation: Golang profiling service; it will bind on the same IP as RPC.\nRecommended value: a port that you want to serve pprof requests on.\n\nservices.NAME.metrics\nExplanation: See the Metrics & Logging section.\n\nclusterMetadata\nExplanation: Cadence cluster configuration. enableGlobalDomain:true will enable the Cadence cross-datacenter replication (aka XDC) feature. failoverVersionIncrement: this decides the maximum number of clusters that you will have replicating to each other at the same time; for example, 10 is sufficient for most cases. masterClusterName: a master cluster must be one of the enabled clusters, usually the very first cluster to start; it is only meaningful for internal purposes. currentClusterName: the current cluster name using this config file. clusterInformation is a map from clusterName to the cluster configuration. initialFailoverVersion: each cluster must use a different value from 0 to failoverVersionIncrement-1. rpcName: must be “cadence-frontend”; can be improved in this issue. rpcAddress: the address to talk to the Frontend of the cluster for inter-cluster replication. Note that even if you don’t need XDC replication right now, if you want to migrate data stores in the future, you should enable XDC from the very beginning; you just need to use the same cluster name for both masterClusterName and currentClusterName. Go to cross dc replication for how to configure replication in production.\nRecommended value: As explained.\n\ndcRedirectionPolicy\nExplanation: For allowing forwarding of frontend requests from passive clusters to active clusters.\nRecommended value: “selected-apis-forwarding”.
\n\narchival\nExplanation: This is for the archival history feature; skip it if you don’t need it. Go to workflow archival for how to configure archival in production.\nRecommended value: N/A\n\nblobstore\nExplanation: This is also for the archival history feature. By default the Cadence server uses a file-based blob store implementation.\nRecommended value: N/A\n\ndomainDefaults\nExplanation: The default config for each domain. Right now it is only used for the archival feature.\nRecommended value: N/A\n\ndynamicconfig (previously known as dynamicConfigClient)\nExplanation: Dynamic config is a config manager that enables you to change configs without restarting servers. It’s a good way for Cadence to keep high availability and make things easy to configure. By default the Cadence server uses the filebased client, which allows you to override default configs using a YAML file. However, this approach can be cumbersome in a production environment because it’s the operator’s responsibility to sync the YAML files across Cadence nodes. Therefore, we provide another option, the configstore client, which stores config changes in the persistent data store for Cadence (e.g., a Cassandra database) rather than the YAML file. This approach shifts the responsibility of syncing config changes from the operator to the Cadence service. You can use Cadence CLI commands to list/get/update/restore config changes. You can also implement the dynamic config interface if you have a better way to manage configs.\nRecommended value: Same as the sample development config.\n\npersistence\nExplanation: Configuration for the data store / persistence layer. Values of DefaultStore, VisibilityStore, and AdvancedVisibilityStore should be keys of the DataStores map. DefaultStore is for core Cadence functionality. VisibilityStore is for the basic visibility feature. AdvancedVisibilityStore is for advanced visibility. Go to advanced visibility for detailed configuration of advanced visibility. See the persistence documentation about using a different database for Cadence.\nRecommended value: As explained.\n\n\n# The full list of static configuration\n\nStarting from v0.21.0, all the static configurations are defined in detail by GoDocs.\n\nVERSION GODOCS LINK GITHUB LINK\nv0.21.0 Configuration Docs Configuration\n...other higher versions ...Replace the version in the URL of v0.21.0 ...Replace the version in the URL of v0.21.0\n\nFor earlier versions, you can find all the configurations similarly:\n\nVERSION GODOCS LINK GITHUB LINK\nv0.20.0 Configuration Docs Configuration\nv0.19.2 Configuration Docs Configuration\nv0.18.2 Configuration Docs Configuration\nv0.17.0 Configuration Docs Configuration\n...other lower versions ...Replace the version in the URL of v0.20.0 ...Replace the version in the URL of v0.20.0\n\n\n# Dynamic Configuration\n\nDynamic configuration is for fine-tuning a Cadence cluster.\n\nThere are many more dynamic configurations than static ones. Most of the default values are good for small clusters. 
As a cluster is scaled up, you may want to tune it for optimal performance.\n\nStarting from v0.21.0, with this change, all the dynamic configurations are well defined by GoDocs.\n\nVERSION GODOCS LINK GITHUB LINK\nv0.21.0 Dynamic Configuration Docs Dynamic Configuration\n...other higher versions ...Replace the version in the URL of v0.21.0 ...Replace the version in the URL of v0.21.0\n\nFor earlier versions, you can find all the configurations similarly:\n\nVERSION GODOCS LINK GITHUB LINK\nv0.20.0 Dynamic Configuration Docs Dynamic Configuration\nv0.19.2 Dynamic Configuration Docs Dynamic Configuration\nv0.18.2 Dynamic Configuration Docs Dynamic Configuration\nv0.17.0 Dynamic Configuration Docs Dynamic Configuration\n...other lower versions ...Replace the version in the URL of v0.20.0 ...Replace the version in the URL of v0.20.0\n\nHowever, the GoDocs in earlier versions don\'t contain detailed information, so you need to look the keys up in a newer version of the GoDocs.\nFor example, search for "EnableGlobalDomain" in the Dynamic Configuration Comments in v0.21.0 or the Docs of v0.21.0, as the usage of a dynamic configuration never changes across versions.\n\n * KeyName is the key that you will use in the dynamicconfig yaml content.\n * Default value is the default value.\n * Value type indicates the type of the yaml value you should set:\n * Int should be an integer like 123\n * Float should be a number like 123.4\n * Duration should be a Golang duration like 10s, 2m, or 5h for 10 seconds, 2 minutes, and 5 hours.\n * Bool should be true or false\n * Map should be a yaml map\n * Allowed filters indicates what kinds of filters you can set as constraints on the dynamic configuration.\n * DomainName can be used with domainName\n * N/A means no filters can be set; the config will be global.\n\nFor example, if you want to change the rate limiting for the List API, below is the config:\n\n// FrontendVisibilityListMaxQPS is max qps frontend can list open/close workflows\n// KeyName: frontend.visibilityListMaxQPS\n// Value type: Int\n// Default value: 10\n// Allowed filters: DomainName\nFrontendVisibilityListMaxQPS\n\n\nThen you can add the config like:\n\nfrontend.visibilityListMaxQPS:\n - value: 1000\n   constraints:\n     domainName: "domainA"\n - value: 2000\n   constraints:\n     domainName: "domainB"\n\n\nYou should then see that domainA is able to perform 1K List operations per second, while domainB can perform 2K per second.\n\nNOTE 1: size-related configuration numbers are in bytes.\n\nNOTE 2: for .persistenceMaxQPS versus .persistenceGlobalMaxQPS --- persistenceMaxQPS is local to a single node while persistenceGlobalMaxQPS is global across all nodes. persistenceGlobalMaxQPS is preferred if set to a value greater than zero, but by default it is zero, so persistenceMaxQPS is used.\n\n\n# How to update Dynamic Configuration\n\n# File-based client\n\nBy default, Cadence uses the file-based client to manage dynamic configurations. The following are approaches to changing dynamic configs using a yaml file.\n\n * Local docker-compose by mounting a volume: 1. Change the dynamic configs in cadence/config/dynamicconfig/development.yaml. 2. 
Update the cadence section in the docker compose file and mount the dynamicconfig folder to the host machine like the following:\n\ncadence:\n  image: ubercadence/server:master-auto-setup\n  ports:\n    ...(don\'t change anything here)\n  environment:\n    ...(don\'t change anything here)\n    - "DYNAMIC_CONFIG_FILE_PATH=/etc/custom-dynamicconfig/development.yaml"\n  volumes:\n    - "/Users//cadence/config/dynamicconfig:/etc/custom-dynamicconfig"\n\n\n * Local docker-compose by logging into the container: run docker exec -it docker_cadence_1 /bin/bash to log in to your container. Then run vi config/dynamicconfig/development.yaml to make any change. After you have changed the config, use docker restart docker_cadence_1 to restart the cadence instance. Note that you can also use this approach to change the static config, but it must be changed through config/config_template.yaml instead of config/docker.yaml, because config/docker.yaml is generated on startup.\n\n * In a production cluster: Follow this example of a Helm Chart to deploy Cadence, update the dynamic config here, and restart the cluster.\n\n * DEBUG: How do you make sure your updates to dynamicconfig are loaded? For example, if you added the following to development.yaml\n\nfrontend.visibilityListMaxQPS:\n - value: 10000\n\n\nAfter restarting the Cadence instances, execute a command like this to make Cadence load the config (it is lazily loaded on first use): cadence --domain <> workflow list\n\nThen you should see logs like the one below\n\ncadence_1 | {"level":"info","ts":"2021-05-07T18:43:07.869Z","msg":"First loading dynamic config","service":"cadence-frontend","key":"frontend.visibilityListMaxQPS,domainName:sample,clusterName:primary","value":"10000","default-value":"10","logging-call-at":"config.go:93"}\n\n\n# Config store client\n\nYou can set the dynamicconfig client in the static configuration to configstore in order to store config changes in a database, as shown below.\n\ndynamicconfig:\n  client: configstore\n  configstore:\n    pollInterval: "10s"\n    updateRetryAttempts: 2\n    FetchTimeout: "2s"\n    UpdateTimeout: "2s"\n\n\nIf you are still using the deprecated dynamicConfigClient config like below, you need to replace it with the new dynamicconfig as shown above to use the configstore client.\n\ndynamicConfigClient:\n  filepath: "/etc/cadence/config/dynamicconfig/config.yaml"\n  pollInterval: "10s"\n\n\nAfter changing the client to configstore and restarting Cadence, you can manage dynamic configs using the cadence admin config CLI commands. You may need to set your custom dynamic configs again, as the previous configs are not automatically migrated from the YAML file to the database.\n\n * cadence admin config listdc lists all dynamic config overrides\n * cadence admin config getdc --dynamic_config_name gets the value of a specific dynamic config\n * cadence admin config updc --dynamic_config_name --dynamic_config_value \'{"Value": }\' updates the value of a specific dynamic config\n * cadence admin config resdc --dynamic_config_name restores a specific dynamic config to its default value\n\n\n# Other Advanced Features\n\n * Go to advanced visibility for how to configure advanced visibility in production.\n\n * Go to workflow archival for how to configure archival in production.\n\n * Go to cross dc replication for how to configure replication in production.\n\n\n# Deployment & Release\n\nKubernetes is the most popular way to deploy a Cadence cluster. 
The easiest way is to use the Cadence Helm Charts that are maintained by a community project.\n\nIf you are looking to deploy Cadence using other technologies, then it’s recommended to use the Cadence docker images. You can use the official ones, or you may customize them based on what you need. See the Cadence docker package for how to run the images.\n\nIt’s always recommended to use the latest release. See the Cadence release pages.\n\nPlease subscribe to releases of the project:\n\nGo to https://github.com/uber/cadence -> Click the top-right "Watch" button -> Custom -> "Release".\n\nAlso see how to upgrade a Cadence cluster.\n\n\n# Stress/Bench Test a cluster\n\nIt’s recommended to run a bench test on your cluster following this package to see the maximum throughput that it can take, whenever you change some setup.'
\n\n\n# other advanced features\n\n * go to advanced visibility for how to configure advanced visibility in production.\n\n * go to workflow archival for how to configure archival in production.\n\n * go to cross dc replication for how to configure replication in production.\n\n\n# deployment & release\n\nkubernetes is the most popular way to deploy a cadence cluster, and the easiest way is to use the cadence helm charts that are maintained by a community project.\n\nif you are looking to deploy cadence using other technologies, it\'s recommended to use the cadence docker images. you can use the official ones, or you may customize them based on what you need. see the cadence docker package for how to run the images.\n\nit\'s always recommended to use the latest release. see the cadence release pages.\n\nplease subscribe to releases of the project:\n\ngo to https://github.com/uber/cadence -> click the "watch" button at the top right -> custom -> "release".\n\nand see how to upgrade a cadence cluster\n\n\n# stress/bench test a cluster\n\nwhenever you change your setup, it\'s recommended to run a bench test on your cluster, following this package, to see the maximum throughput that it can take.',charsets:{cjk:!0}},{title:"Introduction",frontmatter:{layout:"default",title:"Introduction",permalink:"/docs/cli",readingShow:"top"},regularPath:"/docs/06-cli/",relativePath:"docs/06-cli/index.md",key:"v-6fa6d57b",path:"/docs/cli/",codeSwitcherOptions:{},content:'# Command Line Interface\n\nThe Cadence CLI is a command-line tool you can use to perform various tasks on a Cadence server. It can perform domain operations such as register, update, and describe, as well as workflow operations like start workflow, show workflow history, and signal workflow.\n\n\n# Using the CLI\n\n\n# Homebrew\n\nbrew install cadence-workflow\n\n\nAfter the installation is done, you can use the CLI:\n\ncadence --help\n\n\nThis will always install the latest version. Follow these instructions if you need to install older versions of the Cadence CLI.\n\n\n# Docker\n\nThe Cadence CLI can be used directly from the Docker Hub image ubercadence/cli or by building the tool locally.\n\nExample of using the docker image to describe a domain:\n\ndocker run -it --rm ubercadence/cli:master --address --domain samples-domain domain describe\n\n\nmaster will be the latest CLI binary from the project. But you can specify a version to best match your server version:\n\ndocker run -it --rm ubercadence/cli: --address --domain samples-domain domain describe\n\n\nFor example docker run --rm ubercadence/cli:0.21.3 --domain samples-domain domain describe will be the CLI that is released as part of the v0.21.3 release. See the docker hub page for all the CLI image tags. Note that CLI version 0.20.0 works for all server versions from 0.12 to 0.19 as well; that\'s because the CLI didn\'t change in those versions.\n\nNOTE: On Docker versions 18.03 and later, you may get a "connection refused" error when connecting to the local server. 
You can work around this by setting the host to "host.docker.internal" (see here for more info).\n\ndocker run -it --rm ubercadence/cli:master --address host.docker.internal:7933 --domain samples-domain domain describe\n\n\nNOTE: Be sure to update your image when you want to try new features: docker pull ubercadence/cli:master\n\nNOTE: If you are running the docker-compose Cadence server, you can also log on to the container to execute the CLI:\n\ndocker exec -it docker_cadence_1 /bin/bash\n\n# cadence --address $(hostname -i):7933 --do samples domain register\n\n\n\n# Build it yourself\n\nTo build the tool locally, clone the Cadence server repo, check out the version tag (e.g. git checkout v0.21.3) and run make tools. This produces an executable called cadence. With a local build, the same command to describe a domain would look like this:\n\ncadence --domain samples-domain domain describe\n\n\nAlternatively, you can build the CLI image; see these instructions. A consolidated sketch of the build steps is shown below.
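\n\nFor illustration, the end-to-end local build might look like the following (assuming a working Go toolchain; the v0.21.3 tag is just an example, pick the tag matching your server):\n\ngit clone https://github.com/uber/cadence.git\ncd cadence\ngit checkout v0.21.3\nmake tools\n./cadence --version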
\n\n\n# Documentation\n\nCLI commands are documented by --help or -h at every level:\n\n$cadence --help\nNAME:\n cadence - A command-line tool for cadence users\n\nUSAGE:\n cadence [global options] command [command options] [arguments...]\n\nVERSION:\n 0.18.4\n\nCOMMANDS:\n domain, d Operate cadence domain\n workflow, wf Operate cadence workflow\n tasklist, tl Operate cadence tasklist\n admin, adm Run admin operation\n cluster, cl Operate cadence cluster\n help, h Shows a list of commands or help for one command\n\nGLOBAL OPTIONS:\n --address value, --ad value host:port for cadence frontend service [$CADENCE_CLI_ADDRESS]\n --domain value, --do value cadence workflow domain [$CADENCE_CLI_DOMAIN]\n --context_timeout value, --ct value optional timeout for context of RPC call in seconds (default: 5) [$CADENCE_CONTEXT_TIMEOUT]\n --help, -h show help\n --version, -v print the version\n\n\nAnd\n\n$cadence workflow -h\nNAME:\n cadence workflow - Operate cadence workflow\n\nUSAGE:\n cadence workflow command [command options] [arguments...]\n\nCOMMANDS:\n activity, act operate activities of workflow\n show show workflow history\n showid show workflow history with given workflow_id and run_id (a shortcut of `show -w -r `). run_id is only required for archived history\n start start a new workflow execution\n run start a new workflow execution and get workflow progress\n cancel, c cancel a workflow execution\n signal, s signal a workflow execution\n signalwithstart signal the current open workflow if exists, or attempt to start a new run based on IDReusePolicy and signals it\n terminate, term terminate a workflow execution\n list, l list open or closed workflow executions\n listall, la list all open or closed workflow executions\n listarchived list archived workflow executions\n scan, sc, scanall scan workflow executions (need to enable Cadence server on ElasticSearch). It will be faster than listall, but results are not sorted.\n count, cnt count number of workflow executions (need to enable Cadence server on ElasticSearch)\n query query workflow execution\n stack query workflow execution with __stack_trace as query type\n describe, desc show information of workflow execution\n describeid, descid show information of workflow execution with given workflow_id and optional run_id (a shortcut of `describe -w -r `)\n observe, ob show the progress of workflow history\n observeid, obid show the progress of workflow history with given workflow_id and optional run_id (a shortcut of `observe -w -r `)\n reset, rs reset the workflow, by either eventID or resetType.\n reset-batch reset workflows in batch by resetType: LastDecisionCompleted,LastContinuedAsNew,BadBinary,DecisionCompletedTime,FirstDecisionScheduled,LastDecisionScheduled,FirstDecisionCompleted. To get base workflowIDs/runIDs to reset, the source is an input file or a visibility query.\n batch batch operation on a list of workflows from query.\n\nOPTIONS:\n --help, -h show help\n\n\n$cadence wf signal -h\nNAME:\n cadence workflow signal - signal a workflow execution\n\nUSAGE:\n cadence workflow signal [command options] [arguments...]\n\nOPTIONS:\n --workflow_id value, --wid value, -w value WorkflowID\n --run_id value, --rid value, -r value RunID\n --name value, -n value SignalName\n --input value, -i value Input for the signal, in JSON format.\n --input_file value, --if value Input for the signal from JSON file.\n\n\n\nAnd so on.\n\nThe example commands below will use cadence for brevity.\n\n\n# Environment variables\n\nSetting environment variables for repeated parameters can shorten the commands; a sketch follows the list below.\n\n * CADENCE_CLI_ADDRESS - host:port for the Cadence frontend service; the default is for the local server\n * CADENCE_CLI_DOMAIN - default domain, so you don\'t need to specify --domain
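\n\nFor example, a minimal session might look like this (127.0.0.1:7933 is the conventional local frontend address used elsewhere in these docs; adjust both values for your setup):\n\nexport CADENCE_CLI_ADDRESS=127.0.0.1:7933\nexport CADENCE_CLI_DOMAIN=samples-domain\n# both --address and --domain can now be omitted\ncadence workflow list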
\n\n\n# Quick Start\n\nRun cadence for help on top-level commands and global options. Run cadence domain for help on domain operations. Run cadence workflow for help on workflow operations. Run cadence tasklist for help on tasklist operations. (cadence help, cadence help [domain|workflow] will also print help messages.)\n\nNote: make sure you have a Cadence server running before using the CLI.\n\n\n# Domain operation examples\n\n * Register a new domain named "samples-domain":\n\ncadence --domain samples-domain domain register\n# OR using short alias\ncadence --do samples-domain d re \n\n\nIf your Cadence cluster has enabled global domains (XDC replication), then you have to specify the replication settings when registering a domain:\n\ncadence --domain samples-domain domain register --active_cluster clusterNameA --clusters clusterNameA clusterNameB\n\n\n * View "samples-domain" details:\n\ncadence --domain samples-domain domain describe\n\n\n\n# Workflow operation examples\n\nThe following examples assume the CADENCE_CLI_DOMAIN environment variable is set.\n\n# Run workflow\n\nStart a workflow and see its progress. This command doesn\'t finish until the workflow completes.\n\ncadence workflow run --tl helloWorldGroup --wt main.Workflow --et 60 -i \'"cadence"\'\n\n# view help messages for workflow run\ncadence workflow run -h\n\n\nBrief explanation: To run a workflow, the user must specify the following:\n\n 1. Tasklist name (--tl)\n 2. Workflow type (--wt)\n 3. Execution start to close timeout in seconds (--et)\n 4. Input in JSON format (-i) (optional)\n\nThis example uses this cadence-samples workflow and takes a string as input with the -i \'"cadence"\' parameter. Single quotes (\'\') are used to wrap input as JSON.\n\nNote: You need to start the worker so that the workflow can make progress. (Run make && ./bin/helloworld -m worker in cadence-samples to start the worker.)\n\n# Show running workers of a tasklist\n\ncadence tasklist desc --tl helloWorldGroup\n\n\n# Start workflow\n\ncadence workflow start --tl helloWorldGroup --wt main.Workflow --et 60 -i \'"cadence"\'\n\n# view help messages for workflow start\ncadence workflow start -h\n\n# for a workflow with multiple inputs, separate each json with space/newline like\ncadence workflow start --tl helloWorldGroup --wt main.WorkflowWith3Args --et 60 -i \'"your_input_string" 123 {"Name":"my-string", "Age":12345}\'\n\n\nThe start command is similar to the run command, but immediately returns the workflow_id and run_id after starting the workflow. Use the show command to view the workflow\'s history/progress.\n\n# Reuse the same workflow id when starting/running a workflow\n\nUse the option --workflowidreusepolicy or --wrp to configure the reuse policy. Option 0 AllowDuplicateFailedOnly: allow starting a workflow using the same workflow ID when a workflow with the same ID is not already running and the last execution close state is one of [terminated, cancelled, timedout, failed]. Option 1 AllowDuplicate: allow starting a workflow using the same workflow ID when a workflow with the same ID is not already running. Option 2 RejectDuplicate: do not allow starting a workflow using the same workflow ID as a previous workflow.\n\n# use AllowDuplicateFailedOnly option to start a workflow\ncadence workflow start --tl helloWorldGroup --wt main.Workflow --et 60 -i \'"cadence"\' --wid "" --wrp 0\n\n# use AllowDuplicate option to run a workflow\ncadence workflow run --tl helloWorldGroup --wt main.Workflow --et 60 -i \'"cadence"\' --wid "" --wrp 1\n\n\n# Start a workflow with a memo\n\nMemos are immutable key/value pairs that can be attached to a workflow run when starting the workflow. These are visible when listing workflows. 
More information on memos can be found here.\n\ncadence wf start -tl helloWorldGroup -wt main.Workflow -et 60 -i \'"cadence"\' -memo_key \'Service Env Instance\' -memo \'serverName1 test 5\'\n\n\n# Show workflow history\n\ncadence workflow show -w 3ea6b242-b23c-4279-bb13-f215661b4717 -r 866ae14c-88cf-4f1e-980f-571e031d71b0\n# a shortcut of this is (without -w -r flag)\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n# if run_id is not provided, it will show the latest run history of that workflow_id\ncadence workflow show -w 3ea6b242-b23c-4279-bb13-f215661b4717\n# a shortcut of this is\ncadence workflow showid 3ea6b242-b23c-4279-bb13-f215661b4717\n\n\n# Show workflow execution information\n\ncadence workflow describe -w 3ea6b242-b23c-4279-bb13-f215661b4717 -r 866ae14c-88cf-4f1e-980f-571e031d71b0\n# a shortcut of this is (without -w -r flag)\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717 866ae14c-88cf-4f1e-980f-571e031d71b0\n\n# if run_id is not provided, it will show the latest workflow execution of that workflow_id\ncadence workflow describe -w 3ea6b242-b23c-4279-bb13-f215661b4717\n# a shortcut of this is\ncadence workflow describeid 3ea6b242-b23c-4279-bb13-f215661b4717\n\n\n# List closed or open workflow executions\n\ncadence workflow list\n\n# default will only show one page, to view more items, use the --more flag\ncadence workflow list -m\n\n\nUse --query to list with an SQL-like query:\n\ncadence workflow list --query "WorkflowType=\'main.SampleParentWorkflow\' AND CloseTime = missing"\n\n\nThis will return all open workflows with workflowType "main.SampleParentWorkflow".\n\n# Query workflow execution\n\n# use custom query type\ncadence workflow query -w -r --qt \n\n# use built-in query type "__stack_trace" which is supported by the Cadence client library\ncadence workflow query -w -r --qt __stack_trace\n# a shortcut to query using __stack_trace is (without the --qt flag)\ncadence workflow stack -w -r \n\n\n# Signal, cancel, terminate workflow\n\n# signal\ncadence workflow signal -w -r -n -i \'"signal-value"\'\n\n# cancel\ncadence workflow cancel -w -r \n\n# terminate\ncadence workflow terminate -w -r --reason \n\n\nTerminating a running workflow will record a WorkflowExecutionTerminated event as the closing event in the history. No more decision tasks will be scheduled for a terminated workflow. Canceling a running workflow will record a WorkflowExecutionCancelRequested event in the history, and a new decision task will be scheduled, so the workflow has a chance to do some clean-up work after cancellation.\n\n# Signal, cancel, terminate workflows as a batch job\n\nA batch job is based on a List Workflow Query (--query). It supports signal, cancel and terminate as batch job types. For a terminate batch job, it will terminate the children recursively.\n\nStart a batch job (using signal as the batch type):\n\ncadence --do samples-domain wf batch start --query "WorkflowType=\'main.SampleParentWorkflow\' AND CloseTime=missing" --reason "test" --bt signal --sig testname\nThis batch job will be operating on 5 workflows.\nPlease confirm[Yes/No]:yes\n{\n "jobID": "",\n "msg": "batch job is started"\n}\n\n\n\nYou need to remember the JobID or use the list command to get all your batch jobs:\n\ncadence --do samples-domain wf batch list\n\n\nDescribe the progress of a batch job:\n\ncadence --do samples-domain wf batch desc -jid \n\n\nTerminate a batch job:\n\ncadence --do samples-domain wf batch terminate -jid \n\n\nNote that the operations performed by a batch will not be rolled back by terminating the batch. 
However, you can use reset to roll back your workflows.\n\n# Restart, reset workflow\n\nThe reset command allows resetting a workflow to a particular point and continuing from there. There are many use cases:\n\n * Rerun a failed workflow from the beginning with the same start parameters.\n * Rerun a failed workflow from the failing point without losing the achieved progress (history).\n * After deploying new code, reset an open workflow to let it run through different flows.\n\nYou can reset to some predefined types:\n\ncadence workflow reset -w -r --reset_type --reason "some_reason"\n\n\n * FirstDecisionCompleted: reset to the beginning of the history.\n * LastDecisionCompleted: reset to the end of the history.\n * LastContinuedAsNew: reset to the end of the history of the previous run.\n\nIf you are familiar with Cadence history events, you can also reset to any decision finish event by using:\n\ncadence workflow reset -w -r --event_id --reason "some_reason"\n\n\nSome things to note:\n\n * When reset, a new run will be kicked off with the same workflowID. But if there is a running execution for the workflow (workflowID), the current run will be terminated.\n * decision_finish_event_id is the ID of an event of the type DecisionTaskComplete/DecisionTaskFailed/DecisionTaskTimeout.\n * To restart a workflow from the beginning, reset to the first decision finish event.\n\nTo reset multiple workflows, you can use the batch reset command:\n\ncadence workflow reset-batch --input_file --reset_type --reason "some_reason"\n\n\n# Recovery from bad deployment -- auto-reset workflow\n\nIf a bad deployment lets a workflow run into a wrong state, you might want to reset the workflow to the point where the bad deployment started to run. But usually it is not easy to find all the impacted workflows and the reset point for each of them. In this case, auto-reset will automatically reset all the workflows given a bad deployment identifier.\n\nLet\'s get familiar with some concepts. Each deployment has an identifier; we call it the "binary checksum", as it is usually generated by the md5sum of a binary file. For a workflow, each binary checksum is associated with an auto-reset point, which contains a runID, an eventID, and the created_time at which that binary/deployment made its first decision for the workflow.\n\nTo find out which binary checksum of the bad deployment to reset, you should be aware of at least one workflow running into a bad state. Use the describe command with the --reset_points_only option to show all the reset points:\n\ncadence wf desc -w --reset_points_only\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n| BINARY CHECKSUM | CREATE TIME | RUNID | EVENTID |\n+----------------------------------+--------------------------------+--------------------------------------+---------+\n| c84c5afa552613a83294793f4e664a7f | 2019-05-24 10:01:00.398455019 | 2dd29ab7-2dd8-4668-83e0-89cae261cfb1 | 4 |\n| aae748fdc557a3f873adbe1dd066713f | 2019-05-24 11:01:00.067691445 | d42d21b8-2adb-4313-b069-3837d44d6ce6 | 4 |\n...\n...\n\n\nThen use this command to tell Cadence to auto-reset all workflows impacted by the bad deployment. The command will store the bad binary checksum in the domain info and trigger a process to reset all your workflows.\n\ncadence --do domain update --add_bad_binary aae748fdc557a3f873adbe1dd066713f --reason "rollback bad deployment"
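\n\nAs a side note, if you only need to reset one workflow manually rather than auto-resetting everything, you could combine the reset command from above with a reset point; the following sketch reuses the runID and eventID from the example table, and the workflow ID placeholder is purely illustrative:\n\ncadence workflow reset -w <workflow-id> -r 2dd29ab7-2dd8-4668-83e0-89cae261cfb1 --event_id 4 --reason "rollback bad deployment"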
Otherwise your can\'t make any progress after auto-reset.',normalizedContent:'# command line interface\n\nthe cadence is a command-line tool you can use to perform various on a cadence server. it can perform operations such as register, update, and describe as well as operations like start , show history, and .\n\n\n# using the cli\n\n\n# homebrew\n\nbrew install cadence-workflow\n\n\nafter the installation is done, you can use cli:\n\ncadence --help\n\n\nthis will always install the latest version. follow this instructions if you need to install older versions of cadence cli.\n\n\n# docker\n\nthe cadence can be used directly from the docker hub image ubercadence/cli or by building the tool locally.\n\nexample of using the docker image to describe a\n\ndocker run -it --rm ubercadence/cli:master --address --domain samples-domain domain describe\n\n\nmaster will be the latest cli binary from the project. but you can specify a version to best match your server version:\n\ndocker run -it --rm ubercadence/cli: --address --domain samples-domain domain describe\n\n\nfor example docker run --rm ubercadence/cli:0.21.3 --domain samples-domain domain describe will be the cli that is released as part of the v0.21.3 release. see docker hub page for all the cli image tags. note that cli versions of 0.20.0 works for all server versions of 0.12 to 0.19 as well. that\'s because the cli version doesn\'t change in those versions.\n\nnote: on docker versions 18.03 and later, you may get a "connection refused" error when connecting to local server. you can work around this by setting the host to "host.docker.internal" (see here for more info).\n\ndocker run -it --rm ubercadence/cli:master --address host.docker.internal:7933 --domain samples-domain domain describe\n\n\nnote: be sure to update your image when you want to try new features: docker pull ubercadence/cli:master\n\nnote: if you are running docker-compose cadence server, you can also logon to the container to execute cli:\n\ndocker exec -it docker_cadence_1 /bin/bash\n\n# cadence --address $(hostname -i):7933 --do samples domain register\n\n\n\n# build it yourself\n\nto build the tool locally, clone the cadence server repo, check out the version tag (e.g. git checkout v0.21.3) and run make tools. this produces an executable called cadence. 
,charsets:{cjk:!0}},{title:"Cluster Monitoring",frontmatter:{layout:"default",title:"Cluster Monitoring",permalink:"/docs/operation-guide/monitor",readingShow:"top"},regularPath:"/docs/07-operation-guide/03-monitoring.html",relativePath:"docs/07-operation-guide/03-monitoring.md",key:"v-1a836dbc",path:"/docs/operation-guide/monitor/",codeSwitcherOptions:{},
content:"# Cluster Monitoring\n\n\n# Instructions\n\nCadence emits metrics for both the server and the client libraries:\n\n * Follow this example to emit client side metrics for the Golang client\n \n * You can use another metrics emitter like M3\n * Alternatively, you can implement the tally Reporter interface\n\n * Follow this example to emit client side metrics for the Java client if using a 3.x client, or this example if using a 2.x client.\n \n * You can use another metrics emitter like M3\n * Alternatively, you can implement the tally Reporter interface\n\n * For running Cadence services in production, please follow this example of a helm chart to emit server side metrics, or follow the example of the local environment emitting to Prometheus. 
All services need to expose an HTTP port to provide metrics, like below:\n\nmetrics:\n prometheus:\n timerType: \"histogram\"\n listenAddress: \"0.0.0.0:8001\"\n\n\nThe rest of the instructions use the local environment as an example.\n\nFor testing a local server emitting metrics to Prometheus, the easiest way is to use docker-compose to start a local Cadence instance.\n\nMake sure to update the prometheus_config.yml to add \"host.docker.internal:9098\" to the scrape list before starting the docker-compose:\n\nglobal:\n scrape_interval: 5s\n external_labels:\n monitor: 'cadence-monitor'\nscrape_configs:\n - job_name: 'prometheus'\n static_configs:\n - targets: # addresses to scrape\n - 'cadence:9090'\n - 'cadence:8000'\n - 'cadence:8001'\n - 'cadence:8002'\n - 'cadence:8003'\n - 'host.docker.internal:9098'\n\n\nNote: host.docker.internal may not work for some docker versions\n\n * After updating the prometheus_config.yaml as above, run docker-compose up to start the local Cadence instance\n\n * Go to the sample repo, build the helloworld sample with make helloworld and run the worker ./bin/helloworld -m worker, and then in another shell start a workflow: ./bin/helloworld\n\n * Go to your local Prometheus dashboard; you should be able to check the metrics emitted by handlers from client/frontend/matching/history/sysWorker and confirm your services are healthy through targets (see the example query after this list)\n\n * Go to local Grafana, log in as admin/admin.\n\n * Configure Prometheus as a datasource: use http://host.docker.internal:9090 as the URL of Prometheus.\n\n * Import the Grafana dashboard templates as JSON files.\n\nClient side dashboard looks like this:\n\nAnd server basic dashboard:
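\n\nTo double-check that Prometheus is really scraping Cadence, you can run a query like the sketch below in the Prometheus UI (the metric and label names are assumptions inferred from the DataDog queries later on this page; adjust them to whatever your /metrics endpoints actually expose):\n\nsum(rate(cadence_requests[1m])) by (operation)\n\n\nIf this returns one series per operation, the scrape configuration works end to end.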
\n\n\n# DataDog dashboard templates\n\nThis package contains examples of Cadence dashboards with DataDog.\n\n * Cadence-Client is the dashboard that includes all the metrics to help you understand Cadence client behavior. Most of these metrics are emitted by the client SDKs, with a few exceptions from the server side (for example, workflow timeout).\n\n * Cadence-Server is the server dashboard that you can use to monitor and understand the health and status of your Cadence cluster.\n\nTo use DataDog with Cadence, follow this instruction to collect Prometheus metrics using the DataDog agent.\n\nNOTE1: don't forget to adjust max_returned_metrics to a higher number (e.g. 100000). Otherwise the DataDog agent won't be able to collect all metrics (the default is 2000).\n\nNOTE2: the template contains the templating variables $App and $Availability_Zone. Feel free to remove them if you don't have them in your setup.\n\n\n# Grafana+Prometheus dashboard templates\n\nThis package contains examples of Cadence dashboards with Prometheus.\n\n * Cadence-Client is the dashboard of client metrics, and a few server side metrics that belong to the client side but have to be emitted by the server (for example, workflow timeout).\n\n * Cadence-Server-Basic is the basic server dashboard to monitor/navigate the health/status of a Cadence cluster.\n\n * Apart from the basic server dashboard, it's recommended to set up dashboards for the different Cadence server components: Frontend, History, Matching, Worker, Persistence, Archival, etc. Any contribution is always welcome to enrich the existing templates or add new templates!\n\n\n# Periodic tests (Canary) for health check\n\nIt's recommended that you run periodic tests to get signals on the health of your cluster. Please follow the instructions in our canary package to set these tests up.\n\n\n# Cadence Frontend Monitoring\n\nThis section describes recommended dashboards for monitoring the Cadence services in your cluster. The structure mostly follows the DataDog dashboard template listed above.\n\n\n# Service Availability (server metrics)\n\n * Meaning: the availability of the Cadence server, computed from server metrics.\n * Suggested monitor: below 95% for > 5 min triggers an alert; below 99% for > 5 min triggers a warning\n * Monitor action: When fired, check if there are any persistence errors. If so, check the health of the database (it may need a restart or scale-up). If not, check the error logs.\n * Datadog query example\n\nsum:cadence_frontend.cadence_errors{*}\nsum:cadence_frontend.cadence_requests{*}\n(1 - a / b) * 100\n\nFor example, 50 errors (a) out of 10,000 requests (b) over the window gives (1 - 50 / 10000) * 100 = 99.5% availability.\n\n\n# StartWorkflow Per Second\n\n * Meaning: how many workflows are started per second. This helps determine if your server is overloaded.\n * Suggested monitor: This is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{(operation IN (startworkflowexecution,signalwithstartworkflowexecution))} by {operation}.as_rate()\n\n\n\n# Activities Started Per Second\n\n * Meaning: how many activities are started per second. Helps determine if the server is overloaded.\n * Suggested monitor: This is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{operation:pollforactivitytask} by {operation}.as_rate()\n\n\n\n# Decisions Started Per Second\n\n * Meaning: how many workflow decisions are started per second. Helps determine if the server is overloaded.\n * Suggested monitor: This is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{operation:pollfordecisiontask} by {operation}.as_rate()\n\n\n\n# Periodical Test Suite Success (aka Canary)\n\n * Meaning: the success counter of the canary test suite\n * Suggested monitor: Monitor needed. If fired, look at the failed canary test case and investigate the reason for the failure.\n * Datadog query example\n\nsum:cadence_history.workflow_success{workflowtype:workflow_sanity} by {workflowtype}.as_count()\n\n\n\n# Frontend all API per second\n\n * Meaning: all API calls on the frontend per second. Information only.\n * Suggested monitor: This is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{*}.as_rate()\n\n\n\n# Frontend API per second (breakdown per operation)\n\n * Meaning: API calls on the frontend per second, broken down per operation. Information only (a per-domain variant is sketched below).\n * Suggested monitor: This is a business metric. No monitoring required.\n * Datadog query example\n\nsum:cadence_frontend.cadence_requests{*} by {operation}.as_rate()
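\n\nWhen investigating a noisy tenant, the same request counter can also be scoped to a single domain; a hypothetical variant of the query above (the domain tag appears in the WorkflowClient query later on this page, and the value sample is just an example):\n\nsum:cadence_frontend.cadence_requests{domain:sample} by {operation}.as_rate()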
No monitoring required.
 * Datadog query example

sum:cadence_frontend.cadence_errors{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_bad_request{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_domain_not_active{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_service_busy{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_entity_not_exists{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_execution_already_started{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_domain_already_exists{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_query_failed{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_limit_exceeded{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_context_timeout{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_retry_task{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_bad_binary{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_client_version_not_supported{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_incomplete_history{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_nondeterministic{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_unauthorized{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_authorize_failed{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_frontend.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate()

 * cadence_errors counts internal service errors.
 * Any cadence_errors_* metric counts client-side errors.

# Frontend Regular API Latency

 * Meaning: the latency of the regular core APIs -- excluding the long-poll, QueryWorkflow, GetHistory, ListWorkflow and CountWorkflow APIs.
 * Suggested monitor: warn if the p95 latency of any operation exceeds 1.5 seconds; alert if it exceeds 2 seconds.
 * Monitor action: if fired, investigate the database read/write latency.
You may need to throttle some spiky traffic from certain domains, or scale up the database.
 * Datadog query example

avg:cadence_frontend.cadence_latency.quantile{(operation NOT IN (pollfordecisiontask,pollforactivitytask,getworkflowexecutionhistory,queryworkflow,listworkflowexecutions,listclosedworkflowexecutions,listopenworkflowexecutions)) AND $pXXLatency} by {operation}

# Frontend ListWorkflow API Latency

 * Meaning: the latency of the ListWorkflow APIs.
 * Suggested monitor: warn if the p95 latency of any operation exceeds 2 seconds; alert if it exceeds 3 seconds.
 * Monitor action: if fired, investigate the ElasticSearch read latency. You may need to throttle some spiky traffic from certain domains, or scale up the ElasticSearch cluster.
 * Datadog query example

avg:cadence_frontend.cadence_latency.quantile{(operation IN (listclosedworkflowexecutions,listopenworkflowexecutions,listworkflowexecutions,countworkflowexecutions)) AND $pXXLatency} by {operation}

# Frontend Long Poll API Latency

 * Meaning: a long poll means that the worker is waiting for a task, so this latency is an indicator of how busy the workers are. PollForActivityTask and PollForDecisionTask are the long poll requests. The API call times out at 50 seconds if no task can be picked up. A very low latency could mean that more workers need to be added.
 * Suggested monitor: no monitor needed, as long latency is expected.
 * Datadog query example

avg:cadence_frontend.cadence_latency.quantile{$pXXLatency,operation:pollforactivitytask} by {operation}
avg:cadence_frontend.cadence_latency.quantile{$pXXLatency,operation:pollfordecisiontask} by {operation}

# Frontend Get History/Query Workflow API Latency

 * Meaning: the GetHistory API acts like a long poll API, but there's no explicit timeout. The long-poll of GetHistory is used when a WorkflowClient is waiting for the result of a workflow (essentially, the WorkflowExecutionCompleted event), so this latency depends on the time it takes for the workflow to complete (see the sketch below). QueryWorkflow API latency is also unpredictable, as it depends on the availability and performance of the workflow workers, which are owned by the application and the workflow implementation (a query may require replaying history).
 * Suggested monitor: no monitor needed.
 * Datadog query example

avg:cadence_frontend.cadence_latency.quantile{(operation IN (getworkflowexecutionhistory,queryworkflow)) AND $pXXLatency} by {operation}
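To see where that long-poll latency comes from in client code, here is a hedged Go sketch: WorkflowRun.Get blocks on the GetWorkflowExecutionHistory long-poll until the workflow completes. The tasklist name, workflow name and timeout are illustrative:

package app

import (
	"context"
	"time"

	"go.uber.org/cadence/client"
)

// runAndWait starts a workflow and then blocks, via the GetHistory long-poll,
// until the workflow completes and its result can be decoded.
func runAndWait(c client.Client) (string, error) {
	wr, err := c.ExecuteWorkflow(context.Background(), client.StartWorkflowOptions{
		TaskList:                     "my-tasklist",
		ExecutionStartToCloseTimeout: time.Hour,
	}, "myWorkflow")
	if err != nil {
		return "", err
	}
	var result string
	// This call is what drives the GetWorkflowExecutionHistory latency above.
	err = wr.Get(context.Background(), &result)
	return result, err
}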
# Frontend WorkflowClient API per second by domain

 * Meaning: shows which domains are making the most requests using WorkflowClient (excluding worker APIs like PollForDecisionTask and RespondDecisionTaskCompleted). Used for troubleshooting. In the future it can be used to set some per-domain rate limiting.
 * Suggested monitor: no monitor needed.
 * Datadog query example

sum:cadence_frontend.cadence_requests{(operation IN (signalwithstartworkflowexecution,signalworkflowexecution,startworkflowexecution,terminateworkflowexecution,resetworkflowexecution,requestcancelworkflowexecution,listworkflowexecutions))} by {domain,operation}.as_rate()

# Cadence Application Monitoring

This section describes the recommended dashboards for monitoring a Cadence application using the metrics emitted by the SDK. See the setup section above for how to collect those metrics.

# Workflow Start and Successful completion

 * Meaning: workflows successfully started/signalWithStart-ed and completed/canceled/continuedAsNew.
 * Monitor: not recommended.
 * Datadog query example

sum:cadence_client.cadence_workflow_start{$Domain,$Tasklist,$WorkflowType} by {workflowtype,env,domain,tasklist}.as_rate()
sum:cadence_client.cadence_workflow_completed{$Domain,$Tasklist,$WorkflowType} by {workflowtype,env,domain,tasklist}.as_rate()
sum:cadence_client.cadence_workflow_canceled{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()
sum:cadence_client.cadence_workflow_continue_as_new{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()
sum:cadence_client.cadence_workflow_signal_with_start{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env,tasklist}.as_rate()

# Workflow Failure

 * Meaning: metrics for all types of failure, including workflow failures (uncaught exceptions), workflow timeouts and terminations.
 * For timeout and termination, the workflow worker doesn't have a chance to emit metrics when the workflow is terminated, so those metrics come from the history service.
 * Monitor: the application should set monitors on timeout and failure to make sure workflows are not failing. Cancel/terminate are usually triggered intentionally by a human.
 * When the monitors fire, go to the Cadence UI to find the failed workflows and investigate the workflow history to understand the type of failure.
 * Datadog query example

sum:cadence_client.cadence_workflow_failed{$Domain,$Tasklist,$WorkflowType} by {workflowtype,domain,env}.as_count()
sum:cadence_history.workflow_failed{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()
sum:cadence_history.workflow_terminate{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()
sum:cadence_history.workflow_timeout{$Domain,$WorkflowType} by {domain,env,workflowtype}.as_count()

# Decision Poll Counters

 * Meaning: indicates whether the workflow worker is available and polling tasks. If the worker is not available, no counters will show. Can also be used to check that the worker is polling the right task list. A "no task" poll means that the worker exists and is idle. The timeout for this long poll API is 50 seconds; if no task is received within 50 seconds, an empty response is returned and another long poll request is sent.
 * Monitor: the application should set a monitor on it to make sure workers are available (see the worker-wiring sketch below).
 * When it fires, investigate the worker deployment to see why the workers are not available, and check that they are using the right domain/tasklist.
 * Datadog query example

sum:cadence_client.cadence_decision_poll_total{$Domain,$Tasklist}.as_count()
sum:cadence_client.cadence_decision_poll_failed{$Domain,$Tasklist}.as_count()
sum:cadence_client.cadence_decision_poll_no_task{$Domain,$Tasklist}.as_count()
sum:cadence_client.cadence_decision_poll_succeed{$Domain,$Tasklist}.as_count()
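If these poll counters stay flat at zero, the usual culprit is worker wiring. A hedged Go sketch of starting a worker on the expected domain and tasklist follows; buildService, my-domain, my-tasklist are illustrative stand-ins for your own setup:

package app

import (
	"github.com/uber-go/tally"

	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"
)

// buildService stands in for your YARPC transport/client construction (omitted).
func buildService() workflowserviceclient.Interface { return nil }

// startWorker must poll the same domain/tasklist that workflows are scheduled
// on; a mismatch here is what makes the cadence_decision_poll_* counters flat.
func startWorker(scope tally.Scope) error {
	w := worker.New(buildService(), "my-domain", "my-tasklist", worker.Options{
		MetricsScope: scope, // emits the poll counters shown above
	})
	// Register your workflows/activities on w before starting it.
	return w.Start()
}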
# DecisionTasks Scheduled per second

 * Meaning: indicates how many decision tasks are scheduled.
 * Monitor: not recommended -- information only, to know whether or not a tasklist is overloaded.
 * Datadog query example

sum:cadence_matching.cadence_requests_per_tl{*,operation:adddecisiontask,$Tasklist,$Domain} by {tasklist,domain}.as_rate()

# Decision Scheduled To Start Latency

 * Meaning: if this latency is too high, then either the worker is not available (or too busy) after the task has been scheduled, or the task list is overloaded (confirm with the DecisionTasks Scheduled per second widget). By default a task list has only one partition, and a partition can only be owned by one host, so the throughput of a task list is limited. A scalable task list can be used to add more partitions.
 * Monitor: the application can set a monitor on it to make sure the latency is tolerable.
 * When fired, check whether the worker capacity is enough, then check whether the tasklist is overloaded. If needed, contact the Cadence cluster admin to enable a scalable tasklist to add more partitions to the tasklist.
 * Datadog query example

avg:cadence_client.cadence_decision_scheduled_to_start_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist}
max:cadence_client.cadence_decision_scheduled_to_start_latency.max{$Domain,$Tasklist} by {env,domain,tasklist}
max:cadence_client.cadence_decision_scheduled_to_start_latency.95percentile{$Domain,$Tasklist} by {env,domain,tasklist}

# Decision Execution Failure

 * Meaning: this means some critical bug in the workflow code is causing decision task execution failures.
 * Monitor: the application should set a monitor on it to make sure there is no consistent failure.
 * When fired, you may need to terminate the problematic workflows to mitigate the issue. After you identify the bug, you can fix the code and then reset the workflows to recover.
 * Datadog query example

sum:cadence_client.cadence_decision_execution_failed{$Domain,$Tasklist} by {tasklist,workflowtype}.as_count()

# Decision Execution Timeout

 * Meaning: this means some critical bug in the workflow code is causing decision task execution timeouts.
 * Monitor: the application should set a monitor on it to make sure there is no consistent timeout.
 * When fired, you may need to terminate the problematic workflows to mitigate the issue. After you identify the bug, you can fix the code and then reset the workflows to recover.
 * Datadog query example

sum:cadence_history.start_to_close_timeout{operation:timeractivetaskdecision*,$Domain}.as_count()

# Workflow End to End Latency

 * Meaning: this is for the client application to track its SLOs. For example, if you expect a workflow to take duration d to complete, you can use this latency to set a monitor.
 * Monitor: the application can monitor this metric if it expects workflows to complete within a certain duration.
 * When fired, investigate the workflow history to see why the workflow takes longer than expected to complete.
 * Datadog query example

avg:cadence_client.cadence_workflow_endtoend_latency.median{$Domain,$Tasklist,$WorkflowType} by {env,domain,tasklist,workflowtype}
avg:cadence_client.cadence_workflow_endtoend_latency.95percentile{$Domain,$Tasklist,$WorkflowType} by {env,domain,tasklist,workflowtype}

# Workflow Panic and NonDeterministicError

 * Meaning: these errors mean that there is a bug in the code and the deploy should be rolled back.
 * Monitor: a monitor should be set on this metric.
 * When fired, you may roll back the deployment to mitigate the issue. This is usually caused by a bad (non-backward-compatible) code change (see the versioning sketch below). After the rollback, look at your worker error logs to see where the bug is.
 * Datadog query example

sum:cadence_client.cadence_worker_panic{$Domain} by {env,domain}.as_rate()
sum:cadence_client.cadence_non_deterministic_error{$Domain} by {env,domain}.as_rate()
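Non-backward-compatible workflow code changes can often be avoided in the first place by versioning the change. A hedged Go sketch; the change ID "my-change" and the sleep durations are illustrative:

package app

import (
	"time"

	"go.uber.org/cadence/workflow"
)

// versionedWorkflow guards a logic change behind workflow.GetVersion so that
// histories recorded before the change still replay deterministically.
func versionedWorkflow(ctx workflow.Context) error {
	v := workflow.GetVersion(ctx, "my-change", workflow.DefaultVersion, 1)
	if v == workflow.DefaultVersion {
		// Old code path: in-flight executions replay through here.
		return workflow.Sleep(ctx, time.Minute)
	}
	// New code path: taken only by executions started after the deploy.
	return workflow.Sleep(ctx, time.Hour)
}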
# Workflow Sticky Cache Hit Rate and Miss Count

 * Meaning: this metric can be used for performance optimization. A low hit rate can be improved by adding more worker instances, or by adjusting the worker options (Go SDK) or WorkerFactoryOptions (Java SDK); see the sketch below. A cache hit rate that is too low means workers have to replay history to rebuild the workflow stack when executing a decision task. What counts as acceptable depends on the history size:
 * If less than 1MB, then it's okay to be lower than 50%
 * If greater than 1MB, then it should be greater than 50%
 * If greater than 5MB, then it should be greater than 60%
 * If greater than 10MB, then it should be greater than 70%
 * If greater than 20MB, then it should be greater than 80%
 * If greater than 30MB, then it should be greater than 90%
 * Workflow history size should never be greater than 50MB.
 * Monitor: a monitor can be set on this metric if performance is important.
 * When fired, adjust the sticky cache size (stickyCacheSize in the WorkerFactoryOptions), or add more workers.
 * Datadog query example

sum:cadence_client.cadence_sticky_cache_miss{$Domain} by {env,domain}.as_count()
sum:cadence_client.cadence_sticky_cache_hit{$Domain} by {env,domain}.as_count()
(b / (a+b)) * 100
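For the Go SDK, the sticky cache is sized process-wide. A hedged sketch; 10000 is an illustrative value, and the call must run before any worker is started:

package workerinit

import "go.uber.org/cadence/worker"

func init() {
	// Process-wide, shared by all workers in this process; larger values
	// trade memory for fewer history replays on decision tasks.
	worker.SetStickyWorkflowCacheSize(10000)
}

Java users set the equivalent via WorkerFactoryOptions, as mentioned above.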
# Activity Task Operations

 * Meaning: activity started/completed counters.
 * Monitor: not recommended.
 * Datadog query example

sum:cadence_client.cadence_activity_task_failed{$Domain,$Tasklist} by {activitytype}.as_rate()
sum:cadence_client.cadence_activity_task_completed{$Domain,$Tasklist} by {activitytype}.as_rate()
sum:cadence_client.cadence_activity_task_timeouted{$Domain,$Tasklist} by {activitytype}.as_rate()

# Local Activity Task Operations

 * Meaning: local activity execution counters.
 * Monitor: not recommended.
 * Datadog query example

sum:cadence_client.cadence_local_activity_total{$Domain,$Tasklist} by {activitytype}.as_count()

# Activity Execution Latency

 * Meaning: if an activity is expected to take x amount of time to complete, a monitor on this metric can enforce that expectation.
 * Monitor: the application can set a monitor on it if it expects activities to start/complete within a certain latency.
 * When fired, investigate the activity code and its dependencies.
 * Datadog query example

avg:cadence_client.cadence_activity_execution_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}
max:cadence_client.cadence_activity_execution_latency.max{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}

# Activity Poll Counters

 * Meaning: indicates whether the activity worker is available and polling tasks. If the worker is not available, no counters will show. Can also be used to check that the worker is polling the right task list. A "no task" poll means that the worker exists and is idle. The timeout for this long poll API is 50 seconds; if no task is received within 50 seconds, an empty response is returned and another long poll request is sent.
 * Monitor: the application can set a monitor on it to make sure activity workers are available.
 * When it fires, investigate the worker deployment to see why the workers are not available, and check that they are using the right domain/tasklist.
 * Datadog query example

sum:cadence_client.cadence_activity_poll_total{$Domain,$Tasklist} by {activitytype}.as_count()
sum:cadence_client.cadence_activity_poll_failed{$Domain,$Tasklist} by {activitytype}.as_count()
sum:cadence_client.cadence_activity_poll_succeed{$Domain,$Tasklist} by {activitytype}.as_count()
sum:cadence_client.cadence_activity_poll_no_task{$Domain,$Tasklist} by {activitytype}.as_count()

# ActivityTasks Scheduled per second

 * Meaning: indicates how many activity tasks are scheduled.
 * Monitor: not recommended -- information only, to know whether or not a tasklist is overloaded.
 * Datadog query example

sum:cadence_matching.cadence_requests_per_tl{*,operation:addactivitytask,$Tasklist,$Domain} by {tasklist,domain}.as_rate()

# Activity Scheduled To Start Latency

 * Meaning: if this latency is too high, either the worker is not available (or too busy), or too many activities are scheduled onto the same tasklist and the tasklist is not scalable. Same as Decision Scheduled To Start Latency.
 * Monitor: the application should set a monitor on it.
 * When fired, check whether there are enough workers, then check whether the tasklist is overloaded. If needed, contact the Cadence cluster admin to enable a scalable tasklist to add more partitions to the tasklist.
 * Datadog query example

avg:cadence_client.cadence_activity_scheduled_to_start_latency.avg{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}
max:cadence_client.cadence_activity_scheduled_to_start_latency.max{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}
max:cadence_client.cadence_activity_scheduled_to_start_latency.95percentile{$Domain,$Tasklist} by {env,domain,tasklist,activitytype}

# Activity Failure

 * Meaning: a monitor on this metric will alert the team that activities are failing. The activity timeout metrics are emitted by the history service, because a timeout causes a hard stop and the client doesn't have time to emit metrics.
 * Monitor: the application can set a monitor on it.
 * When fired, investigate the activity code and its dependencies.
 * cadence_activity_execution_failed vs cadence_activity_task_failed: they differ only when a RetryPolicy is used. The cadence_activity_task_failed counter increases per activity attempt; the cadence_activity_execution_failed counter increases only when the activity fails after all attempts (see the sketch below).
 * You should only monitor cadence_activity_execution_failed.
 * Datadog query example

sum:cadence_client.cadence_activity_execution_failed{$Domain} by {domain,env}.as_rate()
sum:cadence_client.cadence_activity_task_panic{$Domain} by {domain,env}.as_count()
sum:cadence_client.cadence_activity_task_failed{$Domain} by {domain,env}.as_rate()
sum:cadence_client.cadence_activity_task_canceled{$Domain} by {domain,env}.as_count()
sum:cadence_history.heartbeat_timeout{$Domain} by {domain,env}.as_count()
sum:cadence_history.schedule_to_start_timeout{$Domain} by {domain,env}.as_rate()
sum:cadence_history.start_to_close_timeout{$Domain} by {domain,env}.as_rate()
sum:cadence_history.schedule_to_close_timeout{$Domain} by {domain,env}.as_count()
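To make the attempt-vs-execution distinction concrete, here is a hedged Go sketch of an activity invoked with a RetryPolicy. With MaximumAttempts set to 3, each failed attempt bumps cadence_activity_task_failed, while cadence_activity_execution_failed increases only if all 3 attempts fail; myWorkflow and MyActivity are illustrative names:

package app

import (
	"time"

	"go.uber.org/cadence"
	"go.uber.org/cadence/workflow"
)

func myWorkflow(ctx workflow.Context) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
		RetryPolicy: &cadence.RetryPolicy{
			InitialInterval:    time.Second,
			BackoffCoefficient: 2.0,
			MaximumInterval:    time.Minute,
			ExpirationInterval: 10 * time.Minute,
			MaximumAttempts:    3, // up to 3 attempt-level failures per execution
		},
	}
	ctx = workflow.WithActivityOptions(ctx, ao)
	return workflow.ExecuteActivity(ctx, "MyActivity").Get(ctx, nil)
}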
# Service API success rate

 * Meaning: the client's experience of service availability. It encompasses many APIs. Things that can affect the service's API success rate are:
   * Service availability.
   * Network issues.
   * A required API is not available.
   * Client-side errors like EntityNotExists, WorkflowAlreadyStarted, etc. These mean the application code has potential bugs in how it calls the Cadence service.
 * Monitor: the application can set a monitor on it.
 * When fired, check the application logs to see whether the errors are Cadence server errors or client-side errors. Errors like EntityNotExists/ExecutionAlreadyStarted/QueryWorkflowFailed/etc. are client-side errors, meaning that the application is misusing the APIs. If most errors are server-side errors (internalServiceError), you can contact the Cadence admin.
 * Datadog query example

sum:cadence_client.cadence_error{*} by {domain}.as_count()
sum:cadence_client.cadence_request{*} by {domain}.as_count()
(1 - a / b) * 100

# Service API Latency

 * Meaning: the latency of the APIs, excluding long poll APIs.
 * Monitor: the application can set monitors on certain APIs, if necessary.
 * Datadog query example

avg:cadence_client.cadence_latency.95percentile{$Domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}

# Service API Breakdown

 * Meaning: a counter breakdown by API, to help investigate availability.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_client.cadence_request{$Domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}.as_count()

# Service API Error Breakdown

 * Meaning: a counter breakdown by API error, to help investigate availability.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_client.cadence_error{$Domain} by {cadence_metric_scope}.as_count()

# Max Event Blob size

 * Meaning: the size of a single history event. This applies to any event input, like a start workflow event, start activity event, or signal event. By default the max size is 2MB; if an input is greater than the max size, the server will reject the request. It should never be greater than 2MB.
 * Monitor: a monitor should be set on this metric.
 * When fired, please review the design/code ASAP to reduce the blob size. Reducing the input/output of workflows/activities/signals will help.
 * Datadog query example

max:cadence_history.event_blob_size.quantile{!domain:all,$Domain} by {domain}

# Max History Size

 * Meaning: workflow history cannot grow indefinitely; it will cause replay issues. If a workflow exceeds the max history size, it will be terminated automatically. The max size is 200 megabytes by default. As a suggestion for workflow design, workflow history should never grow greater than 50MB. Use continueAsNew to break long workflows into multiple runs.
 * Monitor: a monitor should be set on this metric.
 * When fired, please review the design/code ASAP to reduce the history size. Reducing the input/output of workflows/activities/signals will help. You may also need to use ContinueAsNew to break a single execution into smaller pieces.
 * Datadog query example

max:cadence_history.history_size.quantile{!domain:all,$Domain} by {domain}

# Max History Length

 * Meaning: the number of events in a workflow history. It should never be greater than 50K (a workflow exceeding 200K events will be terminated by the server). Use continueAsNew to break long workflows into multiple runs (sketched below).
 * Monitor: a monitor should be set on this metric.
 * When fired, please review the design/code ASAP to reduce the history length. You may need to use ContinueAsNew to break a single execution into smaller pieces.
 * Datadog query example

max:cadence_history.history_count.quantile{!domain:all,$Domain} by {domain}
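A hedged Go sketch of the ContinueAsNew pattern mentioned above, which keeps history length and size bounded by starting a fresh run; batchWorkflow and batchSize are illustrative:

package app

import "go.uber.org/cadence/workflow"

// batchWorkflow processes a bounded chunk of work per run, then continues as
// new so each run's history stays small.
func batchWorkflow(ctx workflow.Context, processed int) error {
	const batchSize = 1000
	for i := 0; i < batchSize; i++ {
		// ... execute activities; each call appends events to this run's history
	}
	// Start a fresh run (fresh history) instead of growing this one.
	return workflow.NewContinueAsNewError(ctx, batchWorkflow, processed+batchSize)
}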
# Cadence History Service Monitoring

The history service is the most critical/core service of Cadence; it implements the workflow logic.

# History shard movements

 * Meaning: shard movement should only happen during deployments or when a node restarts. Shard movement without a deployment is unexpected and probably indicates a performance issue. Shard ownership is assigned to a particular history host, so while a shard is moving, the frontend service has trouble routing requests to that history shard.
 * Monitor: a monitor can be set to alert on shard movements without a deployment.
 * Datadog query example

sum:cadence_history.membership_changed_count{operation:shardcontroller}
sum:cadence_history.shard_closed_count{operation:shardcontroller}
sum:cadence_history.sharditem_created_count{operation:shardcontroller}
sum:cadence_history.sharditem_removed_count{operation:shardcontroller}

# Transfer Tasks Per Second

 * Meaning: a TransferTask is an internal background task that moves workflow state and transfers an action task from the history engine to another service (e.g. the matching service, ElasticSearch, etc.).
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_history.task_requests{operation:transferactivetask*} by {operation}.as_rate()

# Timer Tasks Per Second

 * Meaning: timer tasks are tasks scheduled to be triggered at a given time in the future. For example, workflow.Sleep() waits for a given amount of time, after which the task is pushed somewhere for a worker to pick up.
 * Datadog query example

sum:cadence_history.task_requests{operation:timeractivetask*} by {operation}.as_rate()

# Transfer Tasks Per Domain

 * Meaning: count breakdown by domain.
 * Datadog query example

sum:cadence_history.task_requests_per_domain{operation:transferactive*} by {domain}.as_count()

# Timer Tasks Per Domain

 * Meaning: count breakdown by domain.
 * Datadog query example

sum:cadence_history.task_requests_per_domain{operation:timeractive*} by {domain}.as_count()

# Transfer Latency by Type

 * Meaning: if this latency is too high, it's an issue for workflows. For example, if the transfer task latency is 5 seconds, it takes 5 seconds for an activity/decision to actually receive its task.
 * Monitor: monitors should be set on the different types of latency. Note that queue_latency can go very high during deployments; that is expected. See the NOTE below for an explanation.
 * When fired, check whether it's due to a persistence issue. If so, investigate the database (it may need to be scaled up). If not, see whether the Cadence deployment (e.g. K8s instances) needs to be scaled up.
 * Datadog query example

avg:cadence_history.task_latency.quantile{$pXXLatency,operation:transfer*} by {operation}
avg:cadence_history.task_latency_processing.quantile{$pXXLatency,operation:transfer*} by {operation}
avg:cadence_history.task_latency_queue.quantile{$pXXLatency,operation:transfer*} by {operation}

# Timer Task Latency by type

 * Meaning: if this latency is too high, it's an issue for workflows.
For example, if you set workflow.Sleep() for 10 seconds and the timer latency is 5 seconds, the workflow will actually sleep for 15 seconds.
 * Monitor: monitors should be set on the different types of latency.
 * When fired, check whether it's due to a persistence issue. If so, investigate the database (it may need to be scaled up) [mostly]. If not, see whether the Cadence deployment (e.g. K8s instances) needs to be scaled up.
 * Datadog query example

avg:cadence_history.task_latency.quantile{$pXXLatency,operation:timer*} by {operation}
avg:cadence_history.task_latency_processing.quantile{$pXXLatency,operation:timer*} by {operation}
avg:cadence_history.task_latency_queue.quantile{$pXXLatency,operation:timer*} by {operation}

# NOTE: Task Queue Latency vs Executing Latency vs Processing Latency In Transfer & Timer Task Latency Metrics

 * task_latency_queue: "queue latency" is the end-to-end latency for users. It can go to several minutes during deployments because metrics are re-emitted (the actual latency is not that high).
 * task_latency: "executing latency" is the time from submission to the executing pool until completion. It includes scheduling, retries and the processing time of the task.
 * task_latency_processing: "processing latency" is the processing time of a single attempt of the task (without retries).

# Transfer Task Latency Per Domain

 * Meaning: latency breakdown by domain.
 * Monitor: no monitor needed.
 * Datadog query example: modify the queries above to use the domain tag.

# Timer Task Latency Per Domain

 * Meaning: latency breakdown by domain.
 * Monitor: no monitor needed.
 * Datadog query example: modify the queries above to use the domain tag.

# History API per Second

 * Meaning: information about the history APIs.
 * Datadog query example

sum:cadence_history.cadence_requests{*} by {operation}.as_rate()

# History API Errors per Second

 * Meaning: information about history API errors.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_history.cadence_errors{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_bad_request{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_domain_not_active{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_service_busy{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_entity_not_exists{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_execution_already_started{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_domain_already_exists{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_query_failed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_limit_exceeded{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_context_timeout{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_retry_task{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_bad_binary{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_client_version_not_supported{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_incomplete_history{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_nondeterministic{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_unauthorized{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_authorize_failed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate()
sum:cadence_history.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate()

 * cadence_errors counts internal service errors.
 * Any cadence_errors_* metric counts client-side errors.

# Max History Size

The history size of a workflow cannot be too large, otherwise it will cause performance issues during replay. The soft limit is 200MB; if exceeded, the workflow will be terminated by the server.

 * Monitor: no monitor needed.
 * The Datadog query is the same as in the client section.

# Max History Length

Similarly, the history length of a workflow cannot be too large, otherwise it will cause performance issues during replay. The soft limit is 200K events; if exceeded, the workflow will be terminated by the server.

 * Monitor: no monitor needed.
 * The Datadog query is the same as in the client section.

# Max Event Blob Size

The size of each event (e.g. determined by the input/output of workflow/activity/signal/childWorkflow/etc.) cannot be too large, otherwise it will also cause performance issues. The soft limit is 2MB; if exceeded, the requests will be rejected by the server, meaning that the workflow won't be able to make any progress.

 * Monitor: no monitor needed.
 * The Datadog query is the same as in the client section.

# Cadence Matching Service Monitoring

The matching service matches/assigns tasks from the Cadence service to workers. It gets the tasks from the history service. If workers are active, a task is matched immediately; this is called "sync match".
If workers are not available, matching persists the tasks into the database and then reloads them when workers are back (this is called "async match").

# Matching APIs per Second

 * Meaning: APIs processed by the matching service per second.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_matching.cadence_requests{*} by {operation}.as_rate()

# Matching API Errors per Second

 * Meaning: API errors in the matching service per second.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_matching.cadence_errors_per_tl{*} by {operation,domain,tasklist}.as_rate()
sum:cadence_matching.cadence_errors_bad_request_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_request{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_not_active_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_not_active{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_service_busy_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_service_busy{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_entity_not_exists_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_entity_not_exists{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_execution_already_started_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_execution_already_started{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_already_exists_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_domain_already_exists{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_cancellation_already_requested_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_cancellation_already_requested{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_query_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_query_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_limit_exceeded_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_limit_exceeded{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_context_timeout_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_context_timeout{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_retry_task_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_retry_task{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_binary_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_bad_binary{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_client_version_not_supported_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_client_version_not_supported{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_incomplete_history_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_incomplete_history{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_nondeterministic_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_nondeterministic{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_unauthorized_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_unauthorized{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_authorize_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_authorize_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_remote_syncmatch_failed_per_tl{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_remote_syncmatch_failed{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_shard_ownership_lost{*} by {operation,domain,tasklist}
sum:cadence_matching.cadence_errors_event_already_started{*} by {operation,domain,tasklist}

 * cadence_errors counts internal service errors.
 * Any cadence_errors_* metric counts client-side errors.

# Matching Regular API Latency

 * Meaning: regular APIs are the APIs excluding long polls.
 * Monitor: no monitor needed.
 * Datadog query example

avg:cadence_matching.cadence_latency_per_tl.quantile{$pXXLatency,!operation:pollfor*,!operation:queryworkflow} by {operation,tasklist}

# Sync Match Latency

 * Meaning: if this latency is too high, the tasklist is probably overloaded. Consider using multiple tasklists, or enable the scalable tasklist feature by adding more partitions to the tasklist (the default is one). To confirm whether too many tasks are being added to the tasklist, use "AddTasks per second - domain, tasklist breakdown".
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_matching.syncmatch_latency_per_tl.quantile{$pXXLatency} by {operation,tasklist,domain}

# Async match Latency

 * Meaning: if a match is done asynchronously, the task is written to the database to be used later. This measures the time during which no worker is actively looking for tasks. If it is high, more workers are needed.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_matching.asyncmatch_latency_per_tl.quantile{$pXXLatency} by {operation,tasklist,domain}

# Cadence Default Persistence Monitoring

The following monitors should be set up for Cadence persistence.

# Persistence Availability

 * Meaning: the availability of the primary database for your Cadence server.
 * Monitor required: below 95% for > 5 min triggers an alert; below 99% triggers a Slack warning.
 * When fired, check whether it's due to a persistence issue.
If so, investigate the database (it may need to be scaled up) [mostly]. If not, see whether the Cadence deployment (e.g. K8s instances) needs to be scaled up.
 * Datadog query example

sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()
sum:cadence_frontend.persistence_requests{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors{*} by {operation}.as_count()
sum:cadence_matching.persistence_requests{*} by {operation}.as_count()
sum:cadence_history.persistence_errors{*} by {operation}.as_count()
sum:cadence_history.persistence_requests{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors{*} by {operation}.as_count()
sum:cadence_worker.persistence_requests{*} by {operation}.as_count()
(1 - a / b) * 100
(1 - c / d) * 100
(1 - e / f) * 100
(1 - g / h) * 100

# Persistence By Service TPS

 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_frontend.persistence_requests{*}.as_rate()
sum:cadence_history.persistence_requests{*}.as_rate()
sum:cadence_worker.persistence_requests{*}.as_rate()
sum:cadence_matching.persistence_requests{*}.as_rate()

# Persistence By Operation TPS

 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_frontend.persistence_requests{*} by {operation}.as_rate()
sum:cadence_history.persistence_requests{*} by {operation}.as_rate()
sum:cadence_worker.persistence_requests{*} by {operation}.as_rate()
sum:cadence_matching.persistence_requests{*} by {operation}.as_rate()

# Persistence By Operation Latency

 * Monitor required: alert if the p95 latency of any operation is greater than 1 second for 5 minutes; warn if greater than 0.5 seconds.
 * When fired, investigate the database (it may need to be scaled up) [mostly]. If there's high latency, there could also be errors or something wrong with the database.
 * Datadog query example

avg:cadence_matching.persistence_latency.quantile{$pXXLatency} by {operation}
avg:cadence_worker.persistence_latency.quantile{$pXXLatency} by {operation}
avg:cadence_frontend.persistence_latency.quantile{$pXXLatency} by {operation}
avg:cadence_history.persistence_latency.quantile{$pXXLatency} by {operation}

# Persistence Error By Operation Count

 * Meaning: helps investigate availability issues.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_frontend.persistence_errors{*} by {operation}.as_count()
sum:cadence_history.persistence_errors{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors{*} by {operation}.as_count()

sum:cadence_frontend.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_frontend.persistence_errors_bad_request{*} by {operation}.as_count()

sum:cadence_history.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_history.persistence_errors_bad_request{*} by {operation}.as_count()

sum:cadence_matching.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_matching.persistence_errors_bad_request{*} by {operation}.as_count()

sum:cadence_worker.persistence_errors_shard_exists{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_condition_failed{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_timeout{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_busy{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_entity_not_exists{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_execution_already_started{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_domain_already_exists{*} by {operation}.as_count()
sum:cadence_worker.persistence_errors_bad_request{*} by {operation}.as_count()

 * persistence_errors counts internal service errors.
 * Any persistence_errors_* metric counts a specific error type.

# Cadence Advanced Visibility Persistence Monitoring (if applicable)

Kafka & ElasticSearch are only used for visibility, so this is only applicable if you are using advanced visibility.
For writing visibility records, the Cadence history service writes the records to Kafka, and the Cadence worker service then reads them from Kafka and writes them to ElasticSearch (in batches, as a performance optimization). For reading visibility records, the frontend service queries ElasticSearch directly.

# Persistence Availability

 * Meaning: the availability of the visibility persistence used by the Cadence server.
 * Monitor: a monitor can be set.
 * Datadog query example

sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()
sum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_count()
sum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()
sum:cadence_history.elasticsearch_requests{*} by {operation}.as_count()
(1 - a / b) * 100
(1 - c / d) * 100

# Persistence By Service TPS

 * Meaning: the rate of persistence API calls by service.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_frontend.elasticsearch_requests{*}.as_rate()
sum:cadence_history.elasticsearch_requests{*}.as_rate()

# Persistence By Operation TPS (read: ES, write: Kafka)

 * Meaning: the rate of persistence API calls by operation.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_rate()
sum:cadence_history.elasticsearch_requests{*} by {operation}.as_rate()

# Persistence By Operation Latency (in seconds) (read: ES, write: Kafka)

 * Meaning: the latency of persistence API calls.
 * Monitor: no monitor needed.
 * Datadog query example

avg:cadence_frontend.elasticsearch_latency.quantile{$pXXLatency} by {operation}
avg:cadence_history.elasticsearch_latency.quantile{$pXXLatency} by {operation}

# Persistence Error By Operation Count (read: ES, write: Kafka)

 * Meaning: the errors of persistence API calls.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()
sum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()

# Kafka->ES processor counter

 * Meaning: metrics for the background processing that consumes Kafka messages and populates ElasticSearch in batches.
 * Monitor: monitor that the background processing is running (the counter metrics are > 0).
 * When fired, restart the Cadence service first to mitigate, then look at the logs to see why the processing stopped (process panic/error/etc.). Consider adding more pods (replicaCount) to the sys-worker service for higher availability.
 * Datadog query example

sum:cadence_worker.es_processor_requests{*} by {operation}.as_count()
sum:cadence_worker.es_processor_retries{*} by {operation}.as_count()

# Kafka->ES processor error

 * Meaning: the error metrics of the processing logic above. Almost all errors are retryable, so they are not a problem by themselves.
 * Monitor: a monitor on errors is needed.
 * When fired, go to Kibana to find the logs with the error details. The most common error is a missing ElasticSearch index field -- an index field was added in dynamicconfig but not in ElasticSearch, or vice versa.
If so, follow the runbook to add the field to ElasticSearch or the dynamic config.
 * Datadog query example

sum:cadence_worker.es_processor_error{*} by {operation}.as_count()
sum:cadence_worker.es_processor_corrupted_data{*} by {operation}.as_count()

# Kafka->ES processor latency

 * Meaning: the latency of the processing logic.
 * Monitor: no monitor needed.
 * Datadog query example

sum:cadence_worker.es_processor_process_msg_latency.quantile{$pXXLatency} by {operation}.as_count()

# Cadence Dependency Metrics Monitor suggestion

# Computing platform metrics for Cadence deployment

A Cadence server deployed on any computing platform (e.g. Kubernetes) should be monitored on the metrics below:

 * CPU
 * Memory

# Database

Depending on which database you use, you should at least monitor the metrics below:

 * Disk Usage
 * CPU
 * Memory
 * Read API latency
 * Write API Latency

# Kafka (if applicable)

 * Disk Usage
 * CPU
 * Memory

# ElasticSearch (if applicable)

 * Disk Usage
 * CPU
 * Memory

# Cadence Service SLO Recommendation

 * Core API availability: 99.9%
 * Core API latency: <1s
 * Overall task dispatch latency: <2s (queue_latency for transfer task and timer task)
all services need to expose a http port to provide metircs like below\n\nmetrics:\n prometheus:\n timertype: \"histogram\"\n listenaddress: \"0.0.0.0:8001\"\n\n\nthe rest of the instruction uses local environment as an example.\n\nfor testing local server emitting metrics to promethues, the easiest way is to use docker-compose to start a local cadence instance.\n\nmake sure to update the prometheus_config.yml to add \"host.docker.internal:9098\" to the scrape list before starting the docker-compose:\n\nglobal:\n scrape_interval: 5s\n external_labels:\n monitor: 'cadence-monitor'\nscrape_configs:\n - job_name: 'prometheus'\n static_configs:\n - targets: # addresses to scrape\n - 'cadence:9090'\n - 'cadence:8000'\n - 'cadence:8001'\n - 'cadence:8002'\n - 'cadence:8003'\n - 'host.docker.internal:9098'\n\n\nnote: host.docker.internal may not work for some docker versions\n\n * after updating the prometheus_config.yaml as above, run docker-compose up to start the local cadence instance\n\n * go the the sample repo, build the helloworld sample make helloworld and run the worker ./bin/helloworld -m worker, and then in another shell start a workflow ./bin/helloworld\n\n * go to your local prometheus dashboard, you should be able to check the metrics emitted by handler from client/frontend/matching/history/sysworker and confirm your services are healthy through targets\n\n * go to local grafana , login as admin/admin.\n\n * configure prometheus as datasource: use http://host.docker.internal:9090 as url of prometheus.\n\n * import the grafana dashboard tempalte as json files.\n\nclient side dashboard looks like this:\n\nand server basic dashboard:\n\n\n# datadog dashboard templates\n\nthis package contains examples of cadence dashboards with datadog.\n\n * cadence-client is the dashboard that includes all the metrics to help you understand cadence client behavior. most of these metrics are emitted by the client sdks, with a few exceptions from server side (for example, workflow timeout).\n\n * cadence-server is the the server dashboard that you can use to monitor and undertand the health and status of your cadence cluster.\n\nto use datadog with cadence, follow this instruction to collect prometheus metrics using datadog agent.\n\nnote1: don't forget to adjust max_returned_metrics to a higher number(e.g. 100000). otherwise datadog agent won't be able to collect all metrics(default is 2000).\n\nnote2: the template contains templating variables $app and $availability_zone. feel free to remove them if you don't have them in your setup.\n\n\n# grafana+prometheus dashboard templates\n\nthis package contains examples of cadence dashboards with prometheus.\n\n * cadence-client is the dashboard of client metrics, and a few server side metrics that belong to client side but have to be emitted by server(for example, workflow timeout).\n\n * cadence-server-basic is the the basic server dashboard to monitor/navigate the health/status of a cadence cluster.\n\n * apart from the basic server dashboard, it's recommended to set up dashboards on different components for cadence server: frontend, history, matching, worker, persistence, archival, etc. any contribution is always welcome to enrich the existing templates or new templates!\n\n\n# periodic tests(canary) for health check\n\nit's recommended that you run periodical test to get signals on the healthness of your cluster. 
please following instructions in our canary package to set these tests up.\n\n\n# cadence frontend monitoring\n\nthis section describes recommended dashboards for monitoring cadence services in your cluster. the structure mostly follows the datadog dashboard template listed above.\n\n\n# service availability(server metrics)\n\n * meaning: the availability of cadence server using server metrics.\n * suggested monitor: below 95% > 5 min then alert, below 99% for > 5 min triggers a warning\n * monitor action: when fired, check if there is any persistence errors. if so then check the healthness of the database(may need to restart or scale up). if not then check the error logs.\n * datadog query example\n\nsum:cadence_frontend.cadence_errors{*}\nsum:cadence_frontend.cadence_requests{*}\n(1 - a / b) * 100\n\n\n\n# startworkflow per second\n\n * meaning: how many workflows are started per second. this helps determine if your server is overloaded.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{(operation in (startworkflowexecution,signalwithstartworkflowexecution))} by {operation}.as_rate()\n\n\n\n# activities started per second\n\n * meaning: how many activities are started per second. helps determine if the server is overloaded.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{operation:pollforactivitytask} by {operation}.as_rate()\n\n\n\n# decisions started per second\n\n * meaning: how many workflow decisions are started per second. helps determine if the server is overloaded.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{operation:pollfordecisiontask} by {operation}.as_rate()\n\n\n\n# periodical test suite success(aka canary)\n\n * meaning: the success counter of canary test suite\n * suggested monitor: monitor needed. if fired, look at the failed canary test case and investigate the reason of failure.\n * datadog query example\n\nsum:cadence_history.workflow_success{workflowtype:workflow_sanity} by {workflowtype}.as_count()\n\n\n\n# frontend all api per second\n\n * meaning: all api on frontend per second. information only.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{*}.as_rate()\n\n\n\n# frontend api per second (breakdown per operation)\n\n * meaning: api on frontend per second. information only.\n * suggested monitor: this is a business metrics. no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{*} by {operation}.as_rate()\n\n\n\n# frontend api errors per second(breakdown per operation)\n\n * meaning: api error on frontend per second. information only.\n * suggested monitor: this is to facilitate investigation. 
no monitoring required.\n * datadog query example\n\nsum:cadence_frontend.cadence_errors{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_bad_request{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_not_active{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_service_busy{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_entity_not_exists{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_execution_already_started{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_already_exists{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_query_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_limit_exceeded{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_context_timeout{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_retry_task{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_bad_binary{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_client_version_not_supported{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_incomplete_history{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_nondeterministic{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_unauthorized{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_authorize_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_remote_syncmatch_failed{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate() \nsum:cadence_frontend.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate() \n\n\n * cadence_errors is internal service errors.\n * any cadence_errors_* is client side error\n\n\n# frontend regular api latency\n\n * meaning: the latency of regular core api -- excluding long-poll/queryworkflow/gethistory/listworkflow/countworkflow api.\n * suggested monitor: 95% of all apis and of all operations that take over 1.5 seconds triggers a warning, over 2 seconds triggers an alert\n * monitor action: if fired, investigate the database read/write latency. 
may need to throttle some spiky traffic from certain domains, or scale up the database\n * datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{(operation not in (pollfordecisiontask,pollforactivitytask,getworkflowexecutionhistory,queryworkflow,listworkflowexecutions,listclosedworkflowexecutions,listopenworkflowexecutions)) and $pxxlatency} by {operation}\n\n\n\n# frontend listworkflow api latency\n\n * meaning: the latency of the listworkflow apis.\n * monitor: p95 latency above 2 seconds triggers a warning; above 3 seconds triggers an alert\n * monitor action: if fired, investigate the elasticsearch read latency. may need to throttle some spiky traffic from certain domains, or scale up the elasticsearch cluster.\n * datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{(operation in (listclosedworkflowexecutions,listopenworkflowexecutions,listworkflowexecutions,countworkflowexecutions)) and $pxxlatency} by {operation}\n\n\n\n# frontend long poll api latency\n\n * meaning: long poll means that the worker is waiting for a task. the latency is an indicator of how busy the worker is. poll for activity task and poll for decision task are the types of long poll requests. the api call times out at 50 seconds if no task can be picked up. a very low latency could mean that more workers need to be added.\n * suggested monitor: no monitor needed, as long latency is expected.\n * datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{$pxxlatency,operation:pollforactivitytask} by {operation}\navg:cadence_frontend.cadence_latency.quantile{$pxxlatency,operation:pollfordecisiontask} by {operation}\n\n\n\n# frontend get history/query workflow api latency\n\n * meaning: the gethistory api acts like a long poll api, but there’s no explicit timeout. long-poll of gethistory is used when a workflowclient is waiting for the result of the workflow(essentially, workflowexecutioncompletedevent). this latency depends on the time it takes for the workflow to complete. queryworkflow api latency is also unpredictable, as it depends on the availability and performance of workflow workers, which are owned by the application and workflow implementation(may require replaying history).\n * suggested monitor: no monitor needed\n * datadog query example\n\navg:cadence_frontend.cadence_latency.quantile{(operation in (getworkflowexecutionhistory,queryworkflow)) and $pxxlatency} by {operation}\n\n\n\n# frontend workflowclient api per second by domain\n\n * meaning: shows which domains are making the most requests using workflowclient(excluding worker apis like pollfordecisiontask and responddecisiontaskcompleted). used for troubleshooting. in the future it can be used to set some rate limiting per domain.\n * suggested monitor: no monitor needed.\n * datadog query example\n\nsum:cadence_frontend.cadence_requests{(operation in (signalwithstartworkflowexecution,signalworkflowexecution,startworkflowexecution,terminateworkflowexecution,resetworkflowexecution,requestcancelworkflowexecution,listworkflowexecutions))} by {domain,operation}.as_rate()\n\n\n\n# cadence application monitoring\n\nthis section describes the recommended dashboards for monitoring a cadence application using metrics emitted by the sdk. see the setup section for how to collect those metrics.
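\n\nas a hedged sketch of that collection setup (go sdk; the tally reporter wiring and the service client construction are omitted, and the domain/tasklist names are hypothetical), the sdk emits the cadence_client.* metrics through a tally scope passed into the worker options:\n\nimport (\n    `io`\n    `time`\n\n    `github.com/uber-go/tally`\n    `go.uber.org/cadence/.gen/go/cadence/workflowserviceclient`\n    `go.uber.org/cadence/worker`\n)\n\nfunc newInstrumentedWorker(service workflowserviceclient.Interface) (worker.Worker, io.Closer) {\n    // the scope is flushed by its reporter every second (reporter wiring omitted);\n    // the prefix becomes the metric prefix seen in datadog, e.g. cadence_client.cadence_workflow_start\n    scope, closer := tally.NewRootScope(tally.ScopeOptions{Prefix: `cadence_client`}, time.Second)\n    w := worker.New(service, `samples-domain`, `sample-tasklist`, worker.Options{\n        MetricsScope: scope, // all sdk metrics flow through this scope\n    })\n    return w, closer\n}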
\n\n\n# workflow start and successful completion\n\n * workflow successfully started/signalwithstart and completed/canceled/continuedasnew\n * monitor: not recommended\n * datadog query example\n\nsum:cadence_client.cadence_workflow_start{$domain,$tasklist,$workflowtype} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_completed{$domain,$tasklist,$workflowtype} by {workflowtype,env,domain,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_canceled{$domain,$tasklist,$workflowtype} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_continue_as_new{$domain,$tasklist,$workflowtype} by {workflowtype,domain,env,tasklist}.as_rate()\nsum:cadence_client.cadence_workflow_signal_with_start{$domain,$tasklist,$workflowtype} by {workflowtype,domain,env,tasklist}.as_rate()\n\n\n\n# workflow failure\n\n * metrics for all types of failures, including workflow failures(uncaught exceptions), workflow timeouts and terminations.\n * for timeouts and terminations, the workflow worker doesn’t have a chance to emit metrics when it’s terminated, so the metric comes from the history service\n * monitor: applications should set monitors on timeout and failure to make sure workflows are not failing. cancel/terminate are usually triggered intentionally by humans.\n * when the metrics fire, go to the cadence ui to find the failed workflows and investigate the workflow history to understand the type of failure\n * datadog query example\n\nsum:cadence_client.cadence_workflow_failed{$domain,$tasklist,$workflowtype} by {workflowtype,domain,env}.as_count()\nsum:cadence_history.workflow_failed{$domain,$workflowtype} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_terminate{$domain,$workflowtype} by {domain,env,workflowtype}.as_count()\nsum:cadence_history.workflow_timeout{$domain,$workflowtype} by {domain,env,workflowtype}.as_count()\n\n\n\n# decision poll counters\n\n * indicates whether the workflow worker is available and is polling tasks. if the worker is not available, no counters will show. you can also check whether the worker is using the right task list. the “no task” poll type means that the worker exists and is idle. the timeout for this long poll api is 50 seconds; if no task is received within 50 seconds, an empty response is returned and another long poll request is sent. (a starter-side sketch follows the queries below.)\n * monitor: applications should set a monitor on it to make sure workers are available\n * when it fires, investigate the worker deployment to see why workers are not available; also check that they are using the right domain/tasklist\n * datadog query example\n\nsum:cadence_client.cadence_decision_poll_total{$domain,$tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_failed{$domain,$tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_no_task{$domain,$tasklist}.as_count()\nsum:cadence_client.cadence_decision_poll_succeed{$domain,$tasklist}.as_count()
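\n\nas referenced above, a hedged go sdk sketch of the starter side (client options and all names are hypothetical examples): the domain and tasklist used here must match what the workers poll, otherwise the decision poll counters show idle workers while workflows wait:\n\nimport (\n    `context`\n    `time`\n\n    `go.uber.org/cadence/.gen/go/cadence/workflowserviceclient`\n    `go.uber.org/cadence/client`\n)\n\nfunc startSampleWorkflow(service workflowserviceclient.Interface) error {\n    c := client.NewClient(service, `samples-domain`, nil)\n    _, err := c.StartWorkflow(context.Background(), client.StartWorkflowOptions{\n        ID:                           `sample-workflow-id`,\n        TaskList:                     `sample-tasklist`, // workers must poll this same tasklist\n        ExecutionStartToCloseTimeout: time.Hour,\n    }, `SampleWorkflow`) // hypothetical registered workflow type name\n    return err\n}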
\n\n\n\n# decisiontasks scheduled per second\n\n * indicates how many decision tasks are scheduled\n * monitor: not recommended -- information only, to know whether or not a tasklist is overloaded\n * datadog query example\n\nsum:cadence_matching.cadence_requests_per_tl{*,operation:adddecisiontask,$tasklist,$domain} by {tasklist,domain}.as_rate()\n\n\n\n# decision scheduled to start latency\n\n * if this latency is too high, then either the worker is not available or is too busy after the task has been scheduled, or the task list is overloaded(confirm with the decisiontasks scheduled per second widget). by default a task list has only one partition, and a partition can only be owned by one host, so the throughput of a task list is limited. more task lists can be added to scale out, or a scalable task list can be used to add more partitions.\n * monitor: applications can set a monitor on it to make sure the latency is tolerable\n * when fired, check whether worker capacity is enough, then check whether the tasklist is overloaded. if needed, contact the cadence cluster admin to enable the scalable tasklist feature to add more partitions to the tasklist\n * datadog query example\n\navg:cadence_client.cadence_decision_scheduled_to_start_latency.avg{$domain,$tasklist} by {env,domain,tasklist}\nmax:cadence_client.cadence_decision_scheduled_to_start_latency.max{$domain,$tasklist} by {env,domain,tasklist}\nmax:cadence_client.cadence_decision_scheduled_to_start_latency.95percentile{$domain,$tasklist} by {env,domain,tasklist}\n\n\n\n# decision execution failure\n\n * this means critical bugs in the workflow code are causing decision task execution failures\n * monitor: applications should set a monitor on it to make sure there are no consistent failures\n * when fired, you may need to terminate the problematic workflows to mitigate the issue. after you identify the bug, you can fix the code and then reset the workflows to recover\n * datadog query example\n\nsum:cadence_client.cadence_decision_execution_failed{$domain,$tasklist} by {tasklist,workflowtype}.as_count()\n\n\n\n# decision execution timeout\n\n * this means critical bugs in the workflow code are causing decision task execution timeouts\n * monitor: applications should set a monitor on it to make sure there are no consistent timeouts\n * when fired, you may need to terminate the problematic workflows to mitigate the issue. after you identify the bug, you can fix the code and then reset the workflows to recover\n * datadog query example\n\nsum:cadence_history.start_to_close_timeout{operation:timeractivetaskdecision*,$domain}.as_count()\n\n\n\n# workflow end to end latency\n\n * this is for the client application to track its slos. for example, if you expect a workflow to take duration d to complete, you can use this latency to set a monitor.\n * monitor: applications can monitor this metric if they expect workflows to complete within a certain duration.\n * when fired, investigate the workflow history to see why the workflow takes longer than expected to complete\n * datadog query example\n\navg:cadence_client.cadence_workflow_endtoend_latency.median{$domain,$tasklist,$workflowtype} by {env,domain,tasklist,workflowtype}\navg:cadence_client.cadence_workflow_endtoend_latency.95percentile{$domain,$tasklist,$workflowtype} by {env,domain,tasklist,workflowtype}\n\n\n\n# workflow panic and nondeterministicerror\n\n * these errors mean that there is a bug in the code and the deployment should be rolled back.\n * a monitor should be set on this metric\n * when fired, you may roll back the deployment to mitigate the issue. usually this is caused by a bad (non-backward-compatible) code change. after the rollback, look at your worker error logs to see where the bug is.\n * datadog query example\n\nsum:cadence_client.cadence_worker_panic{$domain} by {env,domain}.as_rate()\nsum:cadence_client.cadence_non_deterministic_error{$domain} by {env,domain}.as_rate()\n\n\n\n# workflow sticky cache hit rate and miss count\n\n * this metric can be used for performance optimization. 
the hit rate can be improved by adding more worker instances, or by adjusting the workeroptions(go sdk) or workerfactoryoptions(java sdk). a cache hit rate that is too low means workers have to replay history to rebuild the workflow stack when executing a decision task. what is acceptable depends on the history size:\n * if less than 1mb, a rate lower than 50% is okay\n * if greater than 1mb, it should be greater than 50%\n * if greater than 5mb, greater than 60%\n * if greater than 10mb, greater than 70%\n * if greater than 20mb, greater than 80%\n * if greater than 30mb, greater than 90%\n * workflow history size should never be greater than 50mb.\n * a monitor can be set on this metric if performance is important.\n * when fired, adjust the sticky cache size in the workerfactoryoptions(java sdk) or via the go sdk(see the sketch after the queries below), or add more workers\n * datadog query example\n\nsum:cadence_client.cadence_sticky_cache_miss{$domain} by {env,domain}.as_count()\nsum:cadence_client.cadence_sticky_cache_hit{$domain} by {env,domain}.as_count()\n(b / (a+b)) * 100
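\n\nas referenced above, a hedged sketch for the go sdk (the cache size value is an arbitrary example): the sticky cache is shared process-wide and should be sized before any worker starts:\n\nimport `go.uber.org/cadence/worker`\n\nfunc init() {\n    // a larger cache trades memory for fewer history replays on decision tasks;\n    // call this before starting any worker in the process.\n    worker.SetStickyWorkflowCacheSize(2048)\n}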
\n\n\n\n# activity task operations\n\n * activity started/completed counters\n * monitor: not recommended\n * datadog query example\n\nsum:cadence_client.cadence_activity_task_failed{$domain,$tasklist} by {activitytype}.as_rate()\nsum:cadence_client.cadence_activity_task_completed{$domain,$tasklist} by {activitytype}.as_rate()\nsum:cadence_client.cadence_activity_task_timeouted{$domain,$tasklist} by {activitytype}.as_rate()\n\n\n\n# local activity task operations\n\n * local activity execution counters\n * monitor: not recommended\n * datadog query example\n\nsum:cadence_client.cadence_local_activity_total{$domain,$tasklist} by {activitytype}.as_count()\n\n\n\n# activity execution latency\n\n * if an activity is expected to take x amount of time to complete, a monitor on this metric can help enforce that expectation.\n * monitor: applications can set a monitor on it if they expect activities to start/complete within a certain latency\n * when fired, investigate the activity code and its dependencies\n * datadog query example\n\navg:cadence_client.cadence_activity_execution_latency.avg{$domain,$tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_execution_latency.max{$domain,$tasklist} by {env,domain,tasklist,activitytype}\n\n\n\n# activity poll counters\n\n * indicates whether the activity worker is available and is polling tasks. if the worker is not available, no counters will show. you can also check whether the worker is using the right task list. the “no task” poll type means that the worker exists and is idle. the timeout for this long poll api is 50 seconds; if no task is received within that window, an empty response is returned and another long poll request is sent.\n * monitor: applications can set a monitor on it to make sure activity workers are available\n * when it fires, investigate the worker deployment to see why workers are not available; also check that they are using the right domain/tasklist\n * datadog query example\n\nsum:cadence_client.cadence_activity_poll_total{$domain,$tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_failed{$domain,$tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_succeed{$domain,$tasklist} by {activitytype}.as_count()\nsum:cadence_client.cadence_activity_poll_no_task{$domain,$tasklist} by {activitytype}.as_count()\n\n\n\n# activitytasks scheduled per second\n\n * indicates how many activity tasks are scheduled\n * monitor: not recommended -- information only, to know whether or not a tasklist is overloaded\n * datadog query example\n\nsum:cadence_matching.cadence_requests_per_tl{*,operation:addactivitytask,$tasklist,$domain} by {tasklist,domain}.as_rate()\n\n\n\n# activity scheduled to start latency\n\n * if the latency is too high, then either the worker is not available or is too busy, or there are too many activities scheduled into the same tasklist and the tasklist is not scalable. same as decision scheduled to start latency\n * monitor: applications should set a monitor on it\n * when fired, check whether there are enough workers, then check whether the tasklist is overloaded. if needed, contact the cadence cluster admin to enable the scalable tasklist feature to add more partitions to the tasklist\n * datadog query example\n\navg:cadence_client.cadence_activity_scheduled_to_start_latency.avg{$domain,$tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_scheduled_to_start_latency.max{$domain,$tasklist} by {env,domain,tasklist,activitytype}\nmax:cadence_client.cadence_activity_scheduled_to_start_latency.95percentile{$domain,$tasklist} by {env,domain,tasklist,activitytype}\n\n\n\n# activity failure\n\n * a monitor on this metric will alert the team that activities are failing. the activity timeout metrics are emitted by the history service, because a timeout causes a hard stop and the client doesn’t have time to emit metrics.\n * monitor: applications can set a monitor on it\n * when fired, investigate the activity code and its dependencies\n * cadence_activity_execution_failed vs cadence_activity_task_failed: they only differ when a retrypolicy is used. the cadence_activity_task_failed counter increases per activity attempt, while the cadence_activity_execution_failed counter increases only when the activity fails after all attempts(see the sketch after the queries below)\n * you should only monitor cadence_activity_execution_failed\n * datadog query example\n\nsum:cadence_client.cadence_activity_execution_failed{$domain} by {domain,env}.as_rate()\nsum:cadence_client.cadence_activity_task_panic{$domain} by {domain,env}.as_count()\nsum:cadence_client.cadence_activity_task_failed{$domain} by {domain,env}.as_rate()\nsum:cadence_client.cadence_activity_task_canceled{$domain} by {domain,env}.as_count()\nsum:cadence_history.heartbeat_timeout{$domain} by {domain,env}.as_count()\nsum:cadence_history.schedule_to_start_timeout{$domain} by {domain,env}.as_rate()\nsum:cadence_history.start_to_close_timeout{$domain} by {domain,env}.as_rate()\nsum:cadence_history.schedule_to_close_timeout{$domain} by {domain,env}.as_count()
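\n\nto make the task_failed vs execution_failed distinction concrete, a hedged go sdk sketch (the activity name and the timeout/retry values are arbitrary examples): with maximumattempts set to 5, each failed attempt increments cadence_activity_task_failed, while cadence_activity_execution_failed increments only if the final attempt also fails:\n\nimport (\n    `time`\n\n    `go.uber.org/cadence`\n    `go.uber.org/cadence/workflow`\n)\n\nfunc sampleWorkflow(ctx workflow.Context) error {\n    ao := workflow.ActivityOptions{\n        ScheduleToStartTimeout: time.Minute,\n        StartToCloseTimeout:    time.Minute,\n        RetryPolicy: &cadence.RetryPolicy{\n            InitialInterval:    time.Second,\n            BackoffCoefficient: 2.0,\n            MaximumInterval:    time.Minute,\n            ExpirationInterval: 10 * time.Minute,\n            MaximumAttempts:    5, // task_failed per attempt; execution_failed only after the last\n        },\n    }\n    ctx = workflow.WithActivityOptions(ctx, ao)\n    // SomeActivity is a hypothetical activity registered elsewhere.\n    return workflow.ExecuteActivity(ctx, `SomeActivity`).Get(ctx, nil)\n}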
\n\n\n\n# service api success rate\n\n * the client’s experience of the service availability. it encompasses many apis. things that can affect the service’s api success rate:\n * service availability\n * the network could have issues.\n * a required api is not available.\n * client-side errors like entitynotexists, workflowalreadystarted etc., which mean that the application code potentially has bugs in how it calls the cadence service.\n * monitor: applications can set a monitor on it\n * when fired, check the application logs to see whether the errors are cadence server errors or client-side errors. errors like entitynotexists/executionalreadystarted/queryworkflowfailed/etc. are client-side errors, meaning that the application is misusing the apis. if most errors are server-side errors(internalserviceerror), you can contact the cadence admin.\n * datadog query example\n\nsum:cadence_client.cadence_error{*} by {domain}.as_count()\nsum:cadence_client.cadence_request{*} by {domain}.as_count()\n(1 - a / b) * 100\n\n\n\n# service api latency\n\n * the latency of the apis, excluding the long poll apis.\n * applications can set monitors on certain apis, if necessary.\n * datadog query example\n\navg:cadence_client.cadence_latency.95percentile{$domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}\n\n\n\n# service api breakdown\n\n * a counter breakdown by api to help investigate availability\n * no monitor needed\n * datadog query example\n\nsum:cadence_client.cadence_request{$domain,!cadence_metric_scope:cadence-pollforactivitytask,!cadence_metric_scope:cadence-pollfordecisiontask} by {cadence_metric_scope}.as_count()\n\n\n\n# service api error breakdown\n\n * a counter breakdown by api error to help investigate availability\n * no monitor needed\n * datadog query example\n\nsum:cadence_client.cadence_error{$domain} by {cadence_metric_scope}.as_count()\n\n\n\n# max event blob size\n\n * the size of a single history event. this applies to any event input, like a start workflow event, start activity event, or signal event. by default the max size is 2mb; if the input is greater than the max size, the server will reject the request. it should never be greater than 2mb.\n * a monitor should be set on this metric.\n * when fired, please review the design/code asap to reduce the blob size. reducing the input/output of workflows/activities/signals will help.\n * datadog query example\n\nmax:cadence_history.event_blob_size.quantile{!domain:all,$domain} by {domain}\n\n\n\n# max history size\n\n * workflow history cannot grow indefinitely; it will cause replay issues. if the workflow exceeds the history’s max size, the workflow will be terminated automatically. the max size by default is 200 megabytes. as a suggestion for workflow design, workflow history should never grow greater than 50mb. use continueasnew to break long workflows into multiple runs.\n * a monitor should be set on this metric.\n * when fired, please review the design/code asap to reduce the history size. reducing the input/output of workflows/activities/signals will help. you may also need to use continueasnew to break a single execution into smaller pieces.\n * datadog query example\n\nmax:cadence_history.history_size.quantile{!domain:all,$domain} by {domain}\n\n\n\n# max history length\n\n * the number of events in the workflow history. it should never be greater than 50k(a workflow exceeding 200k events will be terminated by the server).
 use continueasnew to break long workflows into multiple runs(a minimal sketch follows the query below).\n * a monitor should be set on this metric.\n * when fired, please review the design/code asap to reduce the history length. you may need to use continueasnew to break a single execution into smaller pieces.\n * datadog query example\n\nmax:cadence_history.history_count.quantile{!domain:all,$domain} by {domain}
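\n\nas referenced above, a minimal go sdk sketch of the continueasnew pattern (batchsize and the workflow shape are arbitrary examples): each run processes a bounded amount of work, then restarts as a new run with a fresh, short history:\n\nimport `go.uber.org/cadence/workflow`\n\nconst batchSize = 1000 // arbitrary per-run budget that keeps each run’s history small\n\nfunc BatchWorkflow(ctx workflow.Context, processed int) error {\n    for i := 0; i < batchSize; i++ {\n        // ... execute activities for item processed+i ...\n    }\n    // complete this run and immediately start a new run with the same\n    // workflow id, carrying the cursor forward; history starts over.\n    return workflow.NewContinueAsNewError(ctx, BatchWorkflow, processed+batchSize)\n}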
\n\n\n\n# cadence history service monitoring\n\nhistory is the most critical/core service for cadence; it implements the workflow logic.\n\n\n# history shard movements\n\n * should only happen during deployments or when a node restarts. if there’s shard movement without a deployment, that’s unexpected and there’s probably a performance issue. shard ownership is assigned to a particular history host, so while a shard is moving it’ll be hard for the frontend service to route requests to that shard.\n * a monitor can be set to alert on shard movements without a deployment.\n * datadog query example\n\nsum:cadence_history.membership_changed_count{operation:shardcontroller}\nsum:cadence_history.shard_closed_count{operation:shardcontroller}\nsum:cadence_history.sharditem_created_count{operation:shardcontroller}\nsum:cadence_history.sharditem_removed_count{operation:shardcontroller}\n\n\n\n# transfer tasks per second\n\n * a transfertask is an internal background task that moves workflow state and transfers an action task from the history engine to another service(e.g. matching service, elasticsearch, etc.)\n * no monitor needed\n * datadog query example\n\nsum:cadence_history.task_requests{operation:transferactivetask*} by {operation}.as_rate()\n\n\n\n# timer tasks per second\n\n * timer tasks are tasks scheduled to be triggered at a given time in the future. for example, workflow.sleep() will wait for x amount of time, then the task will be pushed somewhere for a worker to pick up.\n * datadog query example\n\nsum:cadence_history.task_requests{operation:timeractivetask*} by {operation}.as_rate()\n\n\n\n# transfer tasks per domain\n\n * count breakdown by domain\n * datadog query example\n\nsum:cadence_history.task_requests_per_domain{operation:transferactive*} by {domain}.as_count()\n\n\n\n# timer tasks per domain\n\n * count breakdown by domain\n * datadog query example\n\nsum:cadence_history.task_requests_per_domain{operation:timeractive*} by {domain}.as_count()\n\n\n\n# transfer latency by type\n\n * if latency is too high, it’s an issue for workflows. for example, if the transfer task latency is 5 seconds, then it takes 5 seconds for an activity/decision to actually receive the task.\n * monitors should be set on the different types of latency. note that queue_latency can go very high during deployments, and that’s expected. see the note below for an explanation.\n * when fired, check whether it’s due to a persistence issue. if so, investigate the database(it may need to scale up); if not, see whether the cadence deployment(k8s instances) needs to scale up\n * datadog query example\n\navg:cadence_history.task_latency.quantile{$pxxlatency,operation:transfer*} by {operation}\navg:cadence_history.task_latency_processing.quantile{$pxxlatency,operation:transfer*} by {operation}\navg:cadence_history.task_latency_queue.quantile{$pxxlatency,operation:transfer*} by {operation}\n\n\n\n# timer task latency by type\n\n * if latency is too high, it’s an issue for workflows. for example, if you set workflow.sleep() for 10 seconds and the timer latency is 5 seconds, then the workflow will sleep for 15 seconds.\n * monitors should be set on the different types of latency.\n * when fired, check whether it’s due to a persistence issue. if so, investigate the database(it may need to scale up) [mostly]; if not, see whether the cadence deployment(k8s instances) needs to scale up\n * datadog query example\n\navg:cadence_history.task_latency.quantile{$pxxlatency,operation:timer*} by {operation}\navg:cadence_history.task_latency_processing.quantile{$pxxlatency,operation:timer*} by {operation}\navg:cadence_history.task_latency_queue.quantile{$pxxlatency,operation:timer*} by {operation}\n\n\n\n# note: task queue latency vs executing latency vs processing latency in transfer & timer task latency metrics\n\n * task_latency_queue: “queue latency” is the “end to end” latency for users. it can go up to several minutes during deployments because of metrics being re-emitted (the actual latency is not that high)\n * task_latency: “executing latency” is the time from submission to the executing pool until completion. it includes the scheduling, retry and processing time of the task.\n * task_latency_processing: “processing latency” is the processing time of a single attempt of the task(without retries)\n\n\n# transfer task latency per domain\n\n * latency breakdown by domain\n * no monitor needed.\n * datadog query example: modify the queries above to use the domain tag.\n\n\n# timer task latency per domain\n\n * latency breakdown by domain\n * no monitor needed.\n * datadog query example: modify the queries above to use the domain tag.\n\n\n# history api per second\n\n * information about the history api\n * datadog query example\n\nsum:cadence_history.cadence_requests{*} by {operation}.as_rate()\n\n\n\n# history api errors per second\n\n * information about the history api\n * no monitor needed\n * datadog query example\n\nsum:cadence_history.cadence_errors{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_bad_request{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_domain_not_active{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_service_busy{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_entity_not_exists{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_workflow_execution_already_completed{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_execution_already_started{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_domain_already_exists{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_cancellation_already_requested{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_query_failed{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_limit_exceeded{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_context_timeout{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_retry_task{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_bad_binary{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_client_version_not_supported{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_incomplete_history{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_nondeterministic{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_unauthorized{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_authorize_failed{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_remote_syncmatch_failed{*} by 
{operation}.as_rate()\nsum:cadence_history.cadence_errors_domain_name_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_identity_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_workflow_id_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_signal_name_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_workflow_type_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_request_id_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_task_list_name_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_activity_id_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_activity_type_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_marker_name_exceeded_warn_limit{*} by {operation}.as_rate()\nsum:cadence_history.cadence_errors_timer_id_exceeded_warn_limit{*} by {operation}.as_rate()\n\n\n * cadence_errors counts internal service errors.\n * any cadence_errors_* metric is a client-side error\n\n\n# max history size\n\nthe history size of a workflow cannot be too large, otherwise it will cause performance issues during replay. the soft limit is 200mb; if it is exceeded, the workflow will be terminated by the server.\n\n * no monitor needed\n * the datadog query is the same as in the client section\n\n\n# max history length\n\nsimilarly, the history length of a workflow cannot be too large, otherwise it will cause performance issues during replay. the soft limit is 200k events; if it is exceeded, the workflow will be terminated by the server.\n\n * no monitor needed\n * the datadog query is the same as in the client section\n\n\n# max event blob size\n\n * the size of each event(e.g. decided by the input/output of workflows/activities/signals/childworkflows/etc.) cannot be too large, otherwise it will also cause performance issues. the soft limit is 2mb; if it is exceeded, the requests will be rejected by the server, meaning that the workflow won’t be able to make any progress.\n * no monitor needed\n * the datadog query is the same as in the client section\n\n\n# cadence matching service monitoring\n\nthe matching service matches/assigns tasks from the cadence service to workers. matching gets the tasks from the history service. if workers are available, a task is matched immediately; this is called “sync match”. 
if workers are not available, matching will persist the tasks into the database and then reload them when workers are back(called “async match”)\n\n\n# matching apis per second\n\n * apis processed by the matching service per second\n * no monitor needed\n * datadog query example\n\nsum:cadence_matching.cadence_requests{*} by {operation}.as_rate()\n\n\n\n# matching api errors per second\n\n * api errors by the matching service per second\n * no monitor needed\n * datadog query example\n\nsum:cadence_matching.cadence_errors_per_tl{*} by {operation,domain,tasklist}.as_rate()\nsum:cadence_matching.cadence_errors_bad_request_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_bad_request{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_domain_not_active_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_domain_not_active{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_service_busy_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_service_busy{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_entity_not_exists_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_entity_not_exists{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_execution_already_started_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_execution_already_started{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_domain_already_exists_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_domain_already_exists{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_cancellation_already_requested_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_cancellation_already_requested{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_query_failed_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_query_failed{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_limit_exceeded_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_limit_exceeded{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_context_timeout_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_context_timeout{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_retry_task_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_retry_task{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_bad_binary_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_bad_binary{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_client_version_not_supported_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_client_version_not_supported{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_incomplete_history_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_incomplete_history{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_nondeterministic_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_nondeterministic{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_unauthorized_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_unauthorized{*} by 
{operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_authorize_failed_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_authorize_failed{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_remote_syncmatch_failed_per_tl{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_remote_syncmatch_failed{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_shard_ownership_lost{*} by {operation,domain,tasklist}\nsum:cadence_matching.cadence_errors_event_already_started{*} by {operation,domain,tasklist}\n\n\n * cadence_errors counts internal service errors.\n * any cadence_errors_* metric is a client-side error\n\n\n# matching regular api latency\n\n * regular apis are the apis excluding long polls\n * no monitor needed\n * datadog query example\n\navg:cadence_matching.cadence_latency_per_tl.quantile{$pxxlatency,!operation:pollfor*,!operation:queryworkflow} by {operation,tasklist}\n\n\n\n# sync match latency\n\n * if the latency is too high, the tasklist is probably overloaded. consider using multiple tasklists, or enable the scalable tasklist feature by adding more partitions to the tasklist(the default is one). to confirm whether too many tasks are being added to the tasklist, use “addtasks per second - domain, tasklist breakdown”\n * no monitor needed\n * datadog query example\n\nsum:cadence_matching.syncmatch_latency_per_tl.quantile{$pxxlatency} by {operation,tasklist,domain}\n\n\n\n# async match latency\n\n * if a match is done asynchronously, the match is written to the db to be used later. this measures the time when the worker is not actively looking for tasks. if this is high, more workers are needed.\n * no monitor needed\n * datadog query example\n\nsum:cadence_matching.asyncmatch_latency_per_tl.quantile{$pxxlatency} by {operation,tasklist,domain}\n\n\n\n# cadence default persistence monitoring\n\nthe following monitors should be set up for cadence persistence.\n\n\n# persistence availability\n\n * the availability of the primary database for your cadence server\n * monitor required: below 95% for > 5 min triggers an alert; below 99% triggers a slack warning\n * when fired, check whether it’s due to a persistence issue. 
if so, investigate the database(it may need to scale up) [mostly]; if not, see whether the cadence deployment(k8s instances) needs to scale up\n * datadog query example\n\nsum:cadence_frontend.persistence_errors{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_requests{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors{*} by {operation}.as_count()\nsum:cadence_matching.persistence_requests{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors{*} by {operation}.as_count()\nsum:cadence_history.persistence_requests{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors{*} by {operation}.as_count()\nsum:cadence_worker.persistence_requests{*} by {operation}.as_count()\n(1 - a / b) * 100\n(1 - c / d) * 100\n(1 - e / f) * 100\n(1 - g / h) * 100\n\n\n\n# persistence by service tps\n\n * no monitor needed\n * datadog query example\n\nsum:cadence_frontend.persistence_requests{*}.as_rate()\nsum:cadence_history.persistence_requests{*}.as_rate()\nsum:cadence_worker.persistence_requests{*}.as_rate()\nsum:cadence_matching.persistence_requests{*}.as_rate()\n\n\n\n\n# persistence by operation tps\n\n * no monitor needed\n * datadog query example\n\nsum:cadence_frontend.persistence_requests{*} by {operation}.as_rate()\nsum:cadence_history.persistence_requests{*} by {operation}.as_rate()\nsum:cadence_worker.persistence_requests{*} by {operation}.as_rate()\nsum:cadence_matching.persistence_requests{*} by {operation}.as_rate()\n\n\n\n\n# persistence by operation latency\n\n * monitor required: alert if the p95 latency of any operation is greater than 1 second for 5 minutes; warn if greater than 0.5 seconds\n * when fired, investigate the database(it may need to scale up) [mostly]; if there’s high latency, there could be errors or something wrong with the db\n * datadog query example\n\navg:cadence_matching.persistence_latency.quantile{$pxxlatency} by {operation}\navg:cadence_worker.persistence_latency.quantile{$pxxlatency} by {operation}\navg:cadence_frontend.persistence_latency.quantile{$pxxlatency} by {operation}\navg:cadence_history.persistence_latency.quantile{$pxxlatency} by {operation}\n\n\n\n# persistence error by operation count\n\n * this helps investigate availability issues\n * no monitor needed\n * datadog query example\n\nsum:cadence_frontend.persistence_errors{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors{*} by {operation}.as_count()\n\nsum:cadence_frontend.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_frontend.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_history.persistence_errors_shard_exists{*} by 
{operation}.as_count()\nsum:cadence_history.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_history.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_matching.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_matching.persistence_errors_bad_request{*} by {operation}.as_count()\n\nsum:cadence_worker.persistence_errors_shard_exists{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_shard_ownership_lost{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_condition_failed{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_current_workflow_condition_failed{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_timeout{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_busy{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_entity_not_exists{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_execution_already_started{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_domain_already_exists{*} by {operation}.as_count()\nsum:cadence_worker.persistence_errors_bad_request{*} by {operation}.as_count()\n\n\n\n * persistence_errors is the total count of internal persistence errors.\n * each persistence_errors_* metric is a breakdown by error type\n\n\n# cadence advanced visibility persistence monitoring(if applicable)\n\nkafka & elasticsearch are only for visibility; this is only applicable if using advanced visibility. 
for writing visibility records, the cadence history service writes the records into kafka, and then the cadence worker service reads from kafka and writes into elasticsearch(in batches, for performance optimization). for reading visibility records, the frontend service queries elasticsearch directly.\n\n\n# persistence availability\n\n * the availability of the visibility database used by the cadence server\n * a monitor can be set\n * datadog query example\n\nsum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()\nsum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_count()\nsum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()\nsum:cadence_history.elasticsearch_requests{*} by {operation}.as_count()\n(1 - a / b) * 100\n(1 - c / d) * 100\n\n\n\n# persistence by service tps\n\n * the rate of persistence api calls by service\n * no monitor needed\n * datadog query example\n\nsum:cadence_frontend.elasticsearch_requests{*}.as_rate()\nsum:cadence_history.elasticsearch_requests{*}.as_rate()\n\n\n\n# persistence by operation tps(read: es, write: kafka)\n\n * the rate of persistence api calls by operation\n * no monitor needed\n * datadog query example\n\nsum:cadence_frontend.elasticsearch_requests{*} by {operation}.as_rate()\nsum:cadence_history.elasticsearch_requests{*} by {operation}.as_rate()\n\n\n\n# persistence by operation latency(in seconds) (read: es, write: kafka)\n\n * the latency of persistence api calls\n * no monitor needed\n * datadog query example\n\navg:cadence_frontend.elasticsearch_latency.quantile{$pxxlatency} by {operation}\navg:cadence_history.elasticsearch_latency.quantile{$pxxlatency} by {operation}\n\n\n\n# persistence error by operation count (read: es, write: kafka)\n\n * the errors of persistence api calls\n * no monitor needed\n * datadog query example\n\nsum:cadence_frontend.elasticsearch_errors{*} by {operation}.as_count()\nsum:cadence_history.elasticsearch_errors{*} by {operation}.as_count()\n\n\n\n# kafka->es processor counter\n\n * these are the metrics of a background process that consumes kafka messages and populates elasticsearch in batches\n * monitor that the background process is running(the counter metric is > 0)\n * when fired, restart the cadence service first to mitigate, then look at the logs to see why the process stopped(process panic/error/etc.). consider adding more pods (replicacount) to the sys-worker service for higher availability\n * datadog query example\n\nsum:cadence_worker.es_processor_requests{*} by {operation}.as_count()\nsum:cadence_worker.es_processor_retries{*} by {operation}.as_count()\n\n\n\n# kafka->es processor error\n\n * these are the error metrics of the above processing logic. almost all errors are retryable, so they are usually not a problem.\n * errors need to be monitored\n * when fired, go to kibana to find logs with the error details. the most common error is a missing elasticsearch index field -- an index field was added in dynamicconfig but not in elasticsearch, or vice versa. 
if so, follow the runbook to add the field to elasticsearch or to the dynamic config.\n * datadog query example\n\nsum:cadence_worker.es_processor_error{*} by {operation}.as_count()\nsum:cadence_worker.es_processor_corrupted_data{*} by {operation}.as_count()\n\n\n\n# kafka->es processor latency\n\n * the latency of the processing logic\n * no monitor needed\n * datadog query example\n\nsum:cadence_worker.es_processor_process_msg_latency.quantile{$pxxlatency} by {operation}.as_count()\n\n\n\n# cadence dependency metrics monitor suggestion\n\n\n# computing platform metrics for cadence deployment\n\na cadence server deployed on any computing platform(e.g. kubernetes) should be monitored on the below metrics:\n\n * cpu\n * memory\n\n\n# database\n\ndepending on which database you use, you should at least monitor the below metrics:\n\n * disk usage\n * cpu\n * memory\n * read api latency\n * write api latency\n\n\n# kafka (if applicable)\n\n * disk usage\n * cpu\n * memory\n\n\n# elasticsearch (if applicable)\n\n * disk usage\n * cpu\n * memory\n\n\n# cadence service slo recommendation\n\n * core api availability: 99.9%\n * core api latency: <1s\n * overall task dispatch latency: <2s (queue_latency for transfer tasks and timer tasks)",charsets:{}},{title:"Cluster Maintenance",frontmatter:{layout:"default",title:"Cluster Maintenance",permalink:"/docs/operation-guide/maintain",readingShow:"top"},regularPath:"/docs/07-operation-guide/02-maintain.html",relativePath:"docs/07-operation-guide/02-maintain.md",key:"v-c3677d3c",path:"/docs/operation-guide/maintain/",headers:[{level:2,title:"Scale up & down Cluster",slug:"scale-up-down-cluster",normalizedTitle:"scale up & down cluster",charIndex:null},{level:2,title:"Scale up a tasklist using Scalable tasklist feature",slug:"scale-up-a-tasklist-using-scalable-tasklist-feature",normalizedTitle:"scale up a tasklist using scalable tasklist feature",charIndex:674},{level:2,title:"Restarting Cluster",slug:"restarting-cluster",normalizedTitle:"restarting cluster",charIndex:2978},{level:2,title:"Optimize SQL Persistence",slug:"optimize-sql-persistence",normalizedTitle:"optimize sql persistence",charIndex:3055},{level:2,title:"Upgrading Server",slug:"upgrading-server",normalizedTitle:"upgrading server",charIndex:4289},{level:3,title:"How to upgrade:",slug:"how-to-upgrade",normalizedTitle:"how to upgrade:",charIndex:5029},{level:3,title:"How to apply DB schema changes",slug:"how-to-apply-db-schema-changes",normalizedTitle:"how to apply db schema changes",charIndex:6295}],codeSwitcherOptions:{},headersStr:"Scale up & down Cluster Scale up a tasklist using Scalable tasklist feature Restarting Cluster Optimize SQL Persistence Upgrading Server How to upgrade: How to apply DB schema changes",content:'# Cluster Maintenance\n\nThis covers how to use and maintain a Cadence cluster, for both client and server clusters.\n\n\n# Scale up & down Cluster\n\n * When CPU/memory is becoming a bottleneck on Cadence instances, you may scale up or add more instances.\n * Watch Cadence metrics\n * See if the external traffic to the frontend is normal\n * If the slowness is due to too many tasks on a tasklist, you may need to scale up the tasklist\n * If persistence latency is getting too high, try scaling up your DB instance\n * Never change the numOfShards of a cluster. 
If you need to change it because the current value is too small, follow the instructions to migrate your cluster to a new one.\n\n\n# Scale up a tasklist using Scalable tasklist feature\n\nBy default a tasklist is not scalable enough to support hundreds of tasks per second. That’s mainly because each tasklist is assigned to a Matching service node, and dispatching tasks in a tasklist is sequential.\n\nIn the past, Cadence recommended using multiple tasklists to start workflows/activities. You needed to make a list of tasklists and randomly pick one when starting workflows, and then, when starting workers, let them listen to all the tasklists.\n\nNowadays, Cadence has a feature called “Scalable tasklist”. It divides a tasklist into multiple logical partitions, which can distribute tasks to multiple Matching service nodes. By default this feature is not enabled, because there is some performance penalty on the server side, and it’s not common that a tasklist needs to support more than hundreds of tasks per second.\n\nYou must make a dynamic configuration change in the Cadence server to use this feature:\n\nmatching.numTasklistWritePartitions\n\nand\n\nmatching.numTasklistReadPartitions\n\nmatching.numTasklistWritePartitions is the number of partitions used when the Cadence server sends a task to the tasklist. matching.numTasklistReadPartitions is the number of partitions used when your worker accepts a task from the tasklist.\n\nThere are a few things to know when using this feature:\n\n * Always make sure matching.numTasklistWritePartitions <= matching.numTasklistReadPartitions. Otherwise some tasks may be sent to a tasklist partition that no poller(worker) is able to pick up from.\n * Because of the above, when scaling down the number of partitions, you must decrease WritePartitions first, then wait for a certain amount of time to ensure that tasks are drained, and then decrease ReadPartitions.\n * Both the domain name and the taskListName should be specified in the dynamic config. An example of using this feature is below; see more details about the dynamic config format using file based dynamic config.\n\nmatching.numTasklistWritePartitions:\n - value: 10\n constraints:\n domainName: "samples-domain"\n taskListName: "aScalableTasklistName"\nmatching.numTasklistReadPartitions:\n - value: 10\n constraints:\n domainName: "samples-domain"\n taskListName: "aScalableTasklistName"\n\n\nNOTE: the value must be an integer without double quotes.\n\n\n# Restarting Cluster\n\nMake sure to use rolling restarts to keep high availability.\n\n\n# Optimize SQL Persistence\n\n * Connections are shared within a Cadence server host\n * For each host, the max number of connections it will consume is the maxConn of the defaultStore + the maxConn of the visibilityStore.\n * The total max number of connections your Cadence cluster will consume is the sum over all hosts(from the Frontend/Matching/History/SysWorker services)\n * Frontend and History nodes need both default and visibility stores, but Matching and SysWorkers only need default stores; they don\'t need to talk to visibility DBs.\n * For default stores, the History service will take the most connections, then Frontend/Matching. SysWorker will use much less than the others\n * The default store is for Cadence’s core data model, which requires strong consistency, so it cannot use replicas. The visibility store is not for core data models. 
It’s recommended to use a separate DB for the visibility store if using DB-based visibility.\n * Visibility stores usually take far fewer connections, as the workload is much lighter(less QPS and no explicit transactions).\n * Visibility stores only require eventual consistency for reads, so they can use replicas.\n * MaxIdleConns should be less than MaxConns, so that the connections can be distributed better across hosts.\n\n\n# Upgrading Server\n\nTo get notified about releases, please subscribe to the project\'s releases: go to https://github.com/uber/cadence -> click the top-right "Watch" button -> Custom -> "Release".\n\nIt\'s recommended to upgrade one minor version at a time. E.g., if you are at 0.10, you should upgrade to 0.11 and stabilize it by running some normal workload, to make sure that the upgraded server is happy with the schema changes. After ~1 hour, upgrade to 0.12, then 0.13, etc.\n\nThe reason is that for each minor upgrade you should be able to follow the release notes about what to do for the upgrade. The release notes may require you to run some commands. This also helps to narrow down the cause when something goes wrong.\n\n\n# How to upgrade:\n\nThings that you may need to do when upgrading a minor version(patch version upgrades should not need them):\n\n * Schema(DB/ElasticSearch) changes\n * Configuration format/layout changes\n * Data migration -- this is very rare. For example, upgrading from 0.15.x to 0.16.0 requires a data migration.\n\nYou should read through the release instructions for each minor release to understand what needs to be done.\n\n * Schema changes need to be applied before upgrading the server\n * Upgrade the MySQL/Postgres schema if applicable\n * Upgrade the Cassandra schema if applicable\n * Upgrade the ElasticSearch schema if applicable\n * Usually schema changes are backward compatible, so rolling back is usually not a problem. This also means that Cadence allows running a mixed version of schemas, as long as they are all greater than or equal to the version required by the server. Other requirements for upgrading can be found in the release notes; they may contain information about config changes, or special rollback instructions if a normal rollback may cause problems.\n * Similarly, data migration should be done before upgrading the server binary.\n\nNOTE: Do not use “auto-setup” images to upgrade your schema. They are mainly for development, or at most for the initial setup.\n\n\n# How to apply DB schema changes\n\nFor how to apply database schema changes, refer to these docs: SQL tool README, Cassandra tool README\n\nThe tool makes use of a table called “schema_versions” to keep track of the upgrade history. But there is no transaction guarantee for cross-table operations, so in case of an error you may need to fix or apply schema changes manually. Also, the schema tool by default will upgrade the schema to the latest version, so no manual intervention is required. (You can also specify a target version, like 0.14.)\n\nDatabase schema changes are versioned in these folders: Versioned Schema Changes for the Default Store, and Versioned Schema Changes for the Visibility Store if you use a database for basic visibility instead of ElasticSearch.\n\nIf you use homebrew, the schema files are located at /usr/local/etc/cadence/schema/.\n\nAlternatively, you can check out the repo at the release tag, e.g. 
git checkout v0.21.0, and then the schema files are at ./schema/'
# Cluster Troubleshooting

This section covers some common operational issues as a runbook. Feel free to add more, raise issues in the cadence-docs project to ask for more, or talk to us in the Slack support channel!

We will keep adding more. Any contribution is very welcome.

# Errors

* Persistence Max QPS Reached for List Operations
  * Check metrics to see how many List operations are performed per second on the domain. Alternatively, on a staging/QA cluster you can enable debug log level to see the details of how a List request is ratelimited.
  * Raise the ratelimit for the domain if you believe the default is too low (see the dynamic config sketch at the end of this section).
* "Failed to lock shard. Previous range ID: 132; new range ID: 133" and "Failed to update shard. Previous range ID: 210; new range ID: 212"
  * When this keeps happening, it is very likely a critical configuration error: either two clusters are using the same database, or two clusters are using the same ringpop (bootstrap hosts).

# API high latency, timeouts, task dispatching slowness, or too many operations onto the DB and timeouts

* If this happens after you attempted to truncate tables in order to reuse the same database/keyspace for a new cluster, it is possible that the data was not deleted completely. Make sure to shut down Cadence while truncating, and make sure the database is cleaned. Alternatively, using a different keyspace/database is the safer way.

* Timeout pushing a task to the matching engine, e.g.

```
"Fail to process task","service":"cadence-history","shard-id":431,"address":"172.31.48.64:7934","component":"transfer-queue-processor","cluster-name":"active","shard-id":431,"queue-task-id":590357768,"queue-task-visibility-timestamp":1637356594382077880,"xdc-failover-version":-24,"queue-task-type":0,"wf-domain-id":"f4d6824f-9d24-4a82-81e0-e0e080be4c21","wf-id":"55d64d58-e398-4bf5-88bc-a4696a2ba87f:63ed7cda-afcf-41cd-9d5a-ee5e1b0f2844","wf-run-id":"53b52ee0-3218-418e-a9bf-7768e671f9c1","error":"code:deadline-exceeded message:timeout","lifecycle":"ProcessingFailed","logging-call-at":"task.go:331"
```

* If this happens after traffic increased for a certain domain, it is likely that a tasklist is overloaded. Consider scaling up the tasklist.

* If the request volume is aligned with a traffic increase across all domains, consider scaling up the cluster.
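For the List-operations ratelimit mentioned above, a file-based dynamic config override might look like the sketch below. The key name here is an assumption based on the dynamic config keys shipped with the server; verify the exact key for your version against the server's dynamicconfig constants before using it, and follow the same file-based format shown in the maintenance section.

```yaml
# Assumed key name; confirm against your server version's dynamicconfig package.
frontend.visibilityListMaxQPS:
  - value: 50
    constraints:
      domainName: "samples-domain"
```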
# Migrate Cadence Cluster

There are several reasons you might need to migrate Cadence clusters:

* Migrating to different storage, for example from Postgres/MySQL to Cassandra, or to multiple SQL databases as a sharded SQL cluster for Cadence
* Splitting traffic
* Datacenter migration
* Scaling up -- to change numHistoryShards

Below are two different approaches for migrating a cluster.

# Migrate with the naive approach

1. Set up a new Cadence cluster.
2. Connect client workers to both the old and new clusters.
3. Change workflow code to start new workflows only in the new cluster.
4. Wait for all old workflows to finish in the old cluster.
5. Shut down the old Cadence cluster and stop the client workers from connecting to it.

NOTE 1: With this approach, workflow history/visibility will not be migrated to the new cluster.

NOTE 2: This is the only way to migrate a local domain, because a local domain cannot be converted to a global domain, even after a cluster enables the XDC feature.

NOTE 3: Starting from version 0.22.0, global domains are preferred/recommended. Please ensure you create and use global domains only. If you are using local domains, an easy way out is to create a global domain and migrate to the new global domain using the above steps.

# Migrate with the Global Domain Replication feature

NOTE 1: If a domain is NOT a global domain, you cannot use the XDC feature to migrate it; the only way is the naive approach above.

NOTE 2: Only migrating to the same numHistoryShards is allowed.

# Step 0 - Verify clusters' setup is correct

* Make sure the new cluster doesn't already have the domain names that need to be migrated (otherwise domain replication would fail).

To get all the domains from the current cluster:

```
cadence --address <oldClusterAddress> admin domain list
```

Then, for each global domain, run

```
cadence --address <newClusterAddress> --do <domain> domain describe
```

to make sure it doesn't exist in the new cluster.

* The target replication cluster should have numHistoryShards >= the source cluster.

* The target cluster should have the same search attributes enabled in dynamic configuration and in ElasticSearch:
  * Check the dynamic configuration to see if both clusters have the same list of frontend.validSearchAttributes. If any is missing in the new cluster, update the dynamic config for the new cluster.
  * Check the results of the command below to make sure that the ElasticSearch fields match the dynamic configuration:

```
curl -u <user>:<password> -X GET https://<esAddress>/cadence-visibility-index -H 'Content-Type: application/json' | jq .
```

If any search attribute is missing, add it to the target cluster:

```
cadence --address <newClusterAddress> adm cluster add-search-attr --search_attr_key <key> --search_attr_type <type>
```

# Step 1 - Connect the two clusters using the global domain (replication) feature

Include the cluster information for both the old and new clusters in the clusterMetadata config of both clusters. (The angle-bracket placeholders below stand in for values elided in the original: your actual cluster names and frontend addresses.) Example config for the current (old) cluster:

```yaml
dcRedirectionPolicy:
  policy: "all-domain-apis-forwarding" # use selected-apis-forwarding on older versions that don't support this policy

clusterMetadata:
  enableGlobalDomain: true
  failoverVersionIncrement: 10
  masterClusterName: "<masterClusterName>"
  currentClusterName: "<oldClusterName>"
  clusterInformation:
    <oldClusterName>:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "cadence-frontend"
      rpcAddress: "<oldClusterAddress>"
    <newClusterName>:
      enabled: true
      initialFailoverVersion: 0
      rpcName: "cadence-frontend"
      rpcAddress: "<newClusterAddress>"
```

And for the new cluster:

```yaml
dcRedirectionPolicy:
  policy: "all-domain-apis-forwarding"

clusterMetadata:
  enableGlobalDomain: true
  failoverVersionIncrement: 10
  masterClusterName: "<masterClusterName>"
  currentClusterName: "<newClusterName>"
  clusterInformation:
    <oldClusterName>:
      enabled: true
      initialFailoverVersion: 1
      rpcName: "cadence-frontend"
      rpcAddress: "<oldClusterAddress>"
    <newClusterName>:
      enabled: true
      initialFailoverVersion: 0
      rpcName: "cadence-frontend"
      rpcAddress: "<newClusterAddress>"
```

Deploy the config. In older versions (<= v0.22), only selected-apis-forwarding is supported. That would require you to deploy a separate set of workflow/activity workers connected to the new Cadence cluster during migration if high availability/seamless migration is required, because selected-apis-forwarding only forwards the non-worker APIs.

With the all-domain-apis-forwarding policy, all worker and non-worker APIs are forwarded by the Cadence cluster. You don't need to make any deployment change to your workflow/activity workers during migration. Once migration completes, let all workers connect to the new Cadence cluster before removing/shutting down the old one.

Therefore, it's recommended to upgrade your Cadence cluster to a version that supports the all-domain-apis-forwarding policy. The steps below assume you are using this policy.

# Step 2 - Test replicating one domain

First of all, try replicating a single domain to make sure everything works. This walkthrough uses domain update to failover; you can also use the managed failover feature instead.
You may use some testing domains for this, like cadence-canary.

* 2.1 Assuming the domain only contains the current (old) cluster in its cluster list, add the new cluster to the domain:

```
cadence --address <oldClusterAddress> --do <domain> domain update --clusters <oldClusterName> <newClusterName>
```

Run the command below to refresh the domain after adding a new cluster to the cluster list; we need to set the active_cluster to the same value it already has:

```
cadence --address <oldClusterAddress> --do <domain> domain update --active_cluster <oldClusterName>
```

* 2.2 Failover the domain to be active in the new cluster:

```
cadence --address <oldClusterAddress> --do <domain> domain update --active_cluster <newClusterName>
```

Use the domain describe command to verify the entire domain is replicated to the new cluster:

```
cadence --address <newClusterAddress> --do <domain> domain describe
```

Find an open workflowID that you want to replicate (you can get it from the UI). Describe it to make sure it's open and running:

```
cadence --address <oldClusterAddress> --do <domain> workflow describe --workflow_id <workflowID>
```

Run a signal command against any workflow and check that it is replicated to the new cluster. Example:

```
cadence --address <oldClusterAddress> --do <domain> workflow signal --workflow_id <workflowID> --name <signalName>
```

This sends a noop signal to the workflow to trigger a decision, which will trigger history replication if needed.

Verify the workflow is replicated in the new cluster:

```
cadence --address <newClusterAddress> --st --do <domain> workflow describe --workflow_id <workflowID>
```

Also compare the history between the two clusters (a consolidated sketch of these checks appears at the end of this page):

```
cadence --address <oldClusterAddress> --do <domain> workflow show --workflow_id <workflowID>
cadence --address <newClusterAddress> --do <domain> workflow show --workflow_id <workflowID>
```

# Step 3 - Start to replicate all domains

You can repeat Step 2 for all the domains, or use the managed failover feature to failover all the domains in the cluster with a single command. See more details in the global domain documentation.

Because replication cannot be triggered without a decision, the best way is again to send a garbage signal to all the workflows.

If advanced visibility is enabled, use the batch signal command to start a batch job that triggers replication for all open workflows:

```
cadence --address <oldClusterAddress> --do <domain> workflow batch start --batch_type signal --query "CloseTime = missing" --signal_name <signalName> --reason <reason> --input <input> --yes
```

Watch metrics & dashboards while this is happening, and observe the signal batch job to make sure it completes.

# Step 4 - Complete the migration

After a few days, make sure everything is stable on the new cluster. The old cluster should only be forwarding requests to the new cluster.

A few things need to be done in order to shut down the old cluster:

* Migrate all applications to connect to the frontend of the new cluster instead of relying on forwarding.
* Watch the metrics dashboard to make sure no traffic is hitting the old cluster.
* Delete the old cluster from each domain's cluster list. This needs to be done for every domain:

```
cadence --address <newClusterAddress> --do <domain> domain update --clusters <newClusterName>
```

* Delete the old cluster from the configuration of the new cluster.

Once the above is done, you can shut down the old cluster safely.
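Putting the Step 2 checks together, a hypothetical end-to-end verification might look like the sketch below. The addresses, domain, and workflow ID are illustrative assumptions; the commands are the same ones listed above.

```bash
# Illustrative values -- substitute your own clusters/domain/workflow.
OLD=cadence-old.example.net:7933
NEW=cadence-new.example.net:7933
DOMAIN=test-domain
WID=3bfcf6e6-example-workflow-id

# Trigger a decision (and therefore history replication) with a noop signal.
cadence --address "$OLD" --do "$DOMAIN" workflow signal --workflow_id "$WID" --name noop

# Compare histories across clusters; they should match once replication catches up.
cadence --address "$OLD" --do "$DOMAIN" workflow show --workflow_id "$WID" > /tmp/old-history.txt
cadence --address "$NEW" --do "$DOMAIN" workflow show --workflow_id "$WID" > /tmp/new-history.txt
diff /tmp/old-history.txt /tmp/new-history.txt && echo "histories match"
```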
# Operation Guide Overview

This guide covers what you need to know to run a Cadence cluster in production. Topics include setup, monitoring, maintenance, and troubleshooting.

# Timeouts

A workflow can fail when an activity times out, and it times out when the entire workflow execution exceeds its configured timeout. Workflows and activities time out when their time to execute or time to start exceeds the configured timeout. Some common causes of timeouts are listed here.

# Missing Pollers

Cadence workers are part of the service that hosts and executes the workflow. They come in two types: activity workers and workflow workers. Each of these workers is responsible for running pollers, goroutines that poll for activity tasks and decision tasks respectively from the Cadence server. Without pollers, the workflow cannot proceed with execution.

Mitigation: Make sure these workers are configured with the task lists that are used in the workflow and activities, so the server can dispatch tasks to the Cadence workers.

Worker setup example (see also the sketch below).
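As a rough sketch of what the linked worker setup example covers, here is a minimal Go worker, assuming a recent Cadence Go client (go.uber.org/cadence). The host/port, domain, task list, and the MyWorkflow/MyActivity functions are illustrative assumptions.

```go
package main

import (
	"context"

	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"
	"go.uber.org/cadence/workflow"
	"go.uber.org/yarpc"
	"go.uber.org/yarpc/transport/tchannel"
)

// Hypothetical workflow and activity implementations.
func MyWorkflow(ctx workflow.Context) error { return nil }
func MyActivity(ctx context.Context) error  { return nil }

func main() {
	// Build a service client pointed at the Cadence frontend
	// (the address is an illustrative assumption).
	ch, err := tchannel.NewChannelTransport(tchannel.ServiceName("my-worker"))
	if err != nil {
		panic(err)
	}
	dispatcher := yarpc.NewDispatcher(yarpc.Config{
		Name: "my-worker",
		Outbounds: yarpc.Outbounds{
			"cadence-frontend": {Unary: ch.NewSingleOutbound("127.0.0.1:7933")},
		},
	})
	if err := dispatcher.Start(); err != nil {
		panic(err)
	}
	service := workflowserviceclient.New(dispatcher.ClientConfig("cadence-frontend"))

	// The task list here MUST match the one used when scheduling
	// workflows/activities, or nothing will poll for their tasks.
	w := worker.New(service, "samples-domain", "my-task-list", worker.Options{})
	w.RegisterWorkflow(MyWorkflow)
	w.RegisterActivity(MyActivity)
	if err := w.Start(); err != nil {
		panic(err)
	}
	select {} // keep the process alive; pollers run in background goroutines
}
```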
# Tasklist backlog despite having pollers

If a tasklist has pollers but the backlog continues to grow, it is a supply-demand issue: the workload is growing faster than the workers can handle, and the server wants to dispatch more tasks than the workers can keep up with.

Mitigation: Increase the number of Cadence workers by horizontally scaling up the instances where the workflow is running.

Optionally, you can also increase the number of pollers per worker via worker options.

Link to options in go client. Link to options in java client.

# Timeouts without heartbeating enabled

Activities time out with StartToClose or ScheduleToClose if the activity took longer than the configured timeout.

Link to description of timeouts.

For long-running activities, the worker can die while the activity is executing, due to regular deployments, host restarts, or failures. Cadence doesn't know about this and will wait for the StartToClose or ScheduleToClose timeouts to kick in.

Mitigation: Consider enabling heartbeating.

Configuring heartbeat timeout example.

For short-running activities, heartbeating is not required, but consider increasing the timeout value to match the actual activity execution time.

# Heartbeat Timeouts after enabling heartbeating

The activity has heartbeating enabled but timed out with a heartbeat timeout. This happens because the server did not receive a heartbeat within the interval configured as the heartbeat timeout.

Mitigation: Once a heartbeat timeout is configured in the activity options, you need to make sure the activity periodically sends a heartbeat to the server, so the server knows the activity is still alive (a sketch follows at the end of this section).

Example of sending a periodic heartbeat.

In the Go client, there is an option to register the activity with auto-heartbeating so that this is done automatically.

Enabling auto heartbeat during activity registration example.
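To make the heartbeating pattern concrete, here is a minimal Go sketch using the client's activity.RecordHeartbeat, with an assumed 20-second heartbeat timeout and a hypothetical chunked job:

```go
package sample

import (
	"context"
	"time"

	"go.uber.org/cadence/activity"
	"go.uber.org/cadence/workflow"
)

// Hypothetical long-running activity that processes work in chunks and
// reports progress so the server knows it is still alive.
func ProcessChunksActivity(ctx context.Context, totalChunks int) error {
	for i := 0; i < totalChunks; i++ {
		// ... process chunk i ...

		// Must be called more often than the configured HeartbeatTimeout;
		// the details (here, the chunk index) are available after a retry.
		activity.RecordHeartbeat(ctx, i)
	}
	return nil
}

// In the workflow, the heartbeat timeout is set via ActivityOptions.
func parentWorkflow(ctx workflow.Context) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    30 * time.Minute,
		HeartbeatTimeout:       20 * time.Second, // server expects a beat at least this often
	}
	ctx = workflow.WithActivityOptions(ctx, ao)
	return workflow.ExecuteActivity(ctx, ProcessChunksActivity, 100).Get(ctx, nil)
}
```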
# Workflow Troubleshooting Overview

This guide covers troubleshooting a workflow for potential issues.

# MIT License

Copyright (c) 2017 Uber Technologies, Inc.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

© {{ new Date().getFullYear() }} Uber Technologies, Inc.

Easy to use

Workflows provide primitives that allow application developers to express complex business logic as code. The underlying platform abstracts scalability, reliability, and availability concerns away from individual developers/teams.

Fault tolerant

Cadence enables writing stateful applications without worrying about the complexity of handling process failures. Cadence preserves complete multithreaded application state, including thread stacks with local variables, across hardware and software failures.

Scalable & Reliable

Cadence is designed to scale out horizontally to handle millions of concurrent workflows. Cadence provides out-of-the-box asynchronous history event replication that can help you recover from zone failures.

# Contact us

If you have a question, check whether it is already answered on StackOverflow under the cadence-workflow tag.

If you still need help, visit our Slack support channel.

If you have a feature request or a bug to report, file an issue against one of the Cadence GitHub repositories:

* Cadence Service and CLI
* Cadence Go Client
* Cadence Go Client Samples
* Cadence Java Client
* Cadence Java Client Samples
* Cadence Web UI
# Long-term commitment and support for the Cadence project, and its community

Dear valued Cadence users and developers,

Some of you might have read Temporal’s recent announcement about their decision to drop support for the Cadence project. This message caused some confusion in the community, so we would like to take this opportunity to clear things up.

First of all, Uber is committed to the long-term success of the Cadence project. Since its inception 5 years ago, use cases built on Cadence and their scale have grown significantly at Uber. Today, Cadence powers a variety of our most business-critical use cases (some public stories are available here (opens new window) and here (opens new window)). At the same time, the Cadence development team at Uber has enjoyed rapid growth with the product and has been driving innovation in workflow technology across the board, from new features (e.g. graceful failover (opens new window), workflow shadowing (opens new window), UI improvements (opens new window)) to better engineering foundations (e.g. gRPC support (opens new window), multi-tenancy support (opens new window)), all in a backwards compatible manner. Neither Uber’s use nor support of Cadence is going to change with Temporal’s announcement. We have a long list of features and exciting roadmaps ahead of us, and we will share more details in our next meetup in November ‘21. As always, we will continue to push the boundaries of scale and reliability as our usage within Uber grows.

Secondly, we are committed to maintaining and growing a healthy and collaborative community. Cadence continues to attract attention as a popular open source platform (opens new window), with more than 100 contributors to our project, and more than 1500 developers in our open source Slack support channel (opens new window). The Uber Cadence team, along with our open source partners like Long (opens new window) from Indeed, have been behind the management and support of the Cadence open source community for the past 2 years. Moving forward, we are going to work even more closely with our community, through a series of online and offline channels including meetups, office hours, tech deep dives, and design consultations. We would also like to scale the way we operate, by creating a Cadence OSS Committee that allows us to maintain a closer relationship with its members, so that we can learn from each other's Cadence experiences and grow together. Please do let us know your suggestions on the types of engagement you would like to see with the core team.

About Temporal and its “EOL announcement”

Temporal is a startup founded 2 years ago on a fork of Cadence by some of the original Cadence team members. We are always grateful for their original contribution to the Cadence project and wish them the best of luck in their future endeavours. That said, the announcement from Temporal only means that their team will focus on Temporal (which has been the case for the last 2 years); it is not an official stance on Cadence, since they have not been involved with the project for quite some time now.

Feel free to reach out to us (cadence-oss@googlegroups.com or slack (opens new window)) if you have any questions. And we look forward to your contribution and collaboration.

The Uber Cadence team


diff --git a/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/index.html b/blog/2021/10/13/announcing-cadence-oss-office-hours-and-community-sync-up/index.html

Announcing Cadence OSS office hours and community sync up
Wed Oct 13 2021

Are you a current Cadence user, do you operate Cadence services, or are you interested in learning about workflow technologies and wonder what problems Cadence could solve for you? We would like to talk to you!

Our team has spent a significant amount of time working with users and partner teams at Uber to design, scale and operate their workflows. This helps our users understand the technology better and smooths their learning curve and ramp-up experience, and at the same time allows us to get fast and direct feedback so we can improve the developer experience and close feature gaps. As our product and community grow, we would like to extend this practice to our users in the OSS community. For the first time ever, members of the Cadence team along with core contributors from the community will host bi-weekly office hours to answer any questions you have about Cadence, or workflow technology in general. We can also dedicate future sessions to specific topics of common interest. Please don’t hesitate to let us know your thoughts.

Please join a session if you would like to talk about any of the following topics:

  1. Understand what Cadence is and why it might be useful for you and your company
  2. Guidance about running Cadence services and workers in production
  3. Workflow design and operation consultation
  4. Product updates and future roadmaps, as well as collaboration opportunities

Building and maintaining a healthy and growing community is the key to the success of Cadence, and one of the top priorities for our team. We would like to use the office hours as an opportunity to understand and help our customers, seek feedback, and forge partnerships. We look forward to seeing you in one of the meetings.

Upcoming Office Hours

As we have a geo-distributed userbase, we are still trying to figure out a time that works for most people. In the meantime, we will manually schedule the first few instances of the meeting until we settle on a fixed schedule. Our next office hours will take place on Thursday, October 21, 2pm-3pm PT/5pm-6pm EST/9pm-10pm GMT. Please join via this zoom link (opens new window).

The Uber Cadence team


diff --git a/blog/2021/10/19/moving-to-grpc/index.html b/blog/2021/10/19/moving-to-grpc/index.html

Moving to gRPC
Tue Oct 19 2021

# Background

Cadence has historically been using the TChannel transport with Thrift encoding for both internal RPC calls and communication with client SDKs. gRPC is becoming a de-facto industry standard with much better adoption and community support. It offers features such as authentication and streaming that are very relevant for Cadence. Moreover, TChannel is being deprecated within Uber itself, pushing an effort for this migration. During the last year we’ve implemented multiple changes in the server and SDKs that allow users to use gRPC in Cadence, as well as to upgrade their existing Cadence cluster in a backwards compatible way. This post tracks the completed work items and our future plans.

# Our Approach

With ~500 services using Cadence at Uber and many more open source customers around the world, we had to think about the gRPC transition in a backwards compatible way. We couldn’t simply flip transport and encoding everywhere. Instead we needed to support both protocols as an intermediate step to ensure a smooth transition for our users.

Cadence was using Thrift/TChannel not just for the API with client SDKs; it was also used for RPC calls between internal Cadence server components and between different data centers. When starting this migration, we had a choice of either starting with the public APIs first or with all the internal pieces within the server. We chose the latter, so that we could gain experience and iterate faster within the server without disruption to the clients. With the server side done and listening for both protocols, a dynamic config flag was exposed to switch traffic internally. It allowed gradual deployment and provided an option to roll back if needed.

The next step was client migration. We have more users of the Go SDK at Uber, which is why we started with it. The current version of the SDK exposes Thrift types via its public API, so we cannot remove them without breaking changes. While we have plans for a revamped v2 SDK, current users are able to use gRPC as well - with the help of a translation adapter (opens new window). The migration is underway, starting with the cadence canary service (opens new window), and then onboarding user services one by one.

We plan to support TChannel for a few more releases and then eventually drop it in a future release.

# System overview

gRPC migration overview

  1. The frontend of Cadence Server (opens new window) exposes inbounds for both gRPC and TChannel starting with the v0.21.0 release (opens new window). gRPC traffic is served on a separate port that can be configured here (opens new window). For the gRPC API we introduced proto IDL (opens new window) definitions. We will keep TChannel open on the frontend for some time to allow gradual client migration.
  2. Starting with v0.21.0 (opens new window), internal components of Cadence Server (history & matching) also started accepting gRPC traffic. Sending traffic via gRPC is off by default and can be enabled with a flag in dynamic config (opens new window). It is planned to be enabled by default in v0.24.0, with an option to opt out.
  3. Starting with v0.23.0, communication between different Cadence clusters can be switched to gRPC via this configuration (opens new window). It is used for replication and request redirection to a different DC.
  4. The Go SDK (opens new window) exposes generated Thrift types via its public API. This complicates migration, because switching them to proto types (or RPC-agnostic types) means breaking changes. Because of this, we are pursuing two alternatives:
    1. (A) Short term: starting with v0.18.2 (opens new window), a compatibility layer (opens new window) is available that translates between Thrift and proto types underneath. It allows using gRPC communication while still using the Thrift-based API. Usage example (opens new window).
    2. (B) Long term: we are currently designing a v2 SDK that will support gRPC directly. Its API will be RPC-agnostic and will include other usability improvements. You can check some ideas that are being considered here (opens new window).
  5. The Java SDK (opens new window) currently supports TChannel only. The move to gRPC is planned for 2022 H1.
  6. It is now possible to communicate with Cadence over gRPC from other languages as well. Use the proto IDLs (opens new window) to generate bindings for your preferred language. There is a minimal example (opens new window) of doing this in Python.
  7. The WebUI and CLI are currently on TChannel. They are planned to switch to gRPC in 2022 H1.

# Migration steps

# Upgrading Cadence server

In order to start using gRPC, please upgrade the Cadence server to v0.22.0 (opens new window) or later.

  1. If you are upgrading from an older version (before v0.21.0), make sure to disable internal gRPC communication at first. This is needed to ensure that all nodes in the cluster are ready to accept gRPC traffic before it is switched on. It is controlled by the system.enableGRPCOutbound (opens new window) flag in dynamic config.
  2. Once deployed, flip system.enableGRPCOutbound to true. A cluster restart is required for the setting to take effect.
  3. If you are operating in more than one DC, the recommended server version to upgrade to is v0.23.0 or newer. Once the individual clusters with gRPC support are deployed, please update the config (opens new window) to switch cross-DC traffic to gRPC. Don’t forget to update the ports as well. We also recommend increasing grpcMaxMsgSize (opens new window) to 32MB, which is needed to ensure smooth replication. After the config change you will need a restart for the settings to take effect.
  4. Do not forget that gRPC runs on a different port, so you might need to open it on Docker containers, firewalls, etc.

# Upgrading clients

  1. Go SDK - follow the example (opens new window) to inject the Thrift-to-proto adapter during client initialization and update your config to use the gRPC port; a sketch of what this looks like follows below.
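
For illustration, here is a minimal sketch of what that client initialization can look like, based on the linked usage example. The package paths, the adapter constructor and the "cadence-frontend" service name follow the Go SDK's compatibility layer as of v0.18.2+, so treat this as illustrative and verify it against your SDK version:

```go
package cadencegrpc

import (
	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/compatibility"
	"go.uber.org/yarpc"
	"go.uber.org/yarpc/transport/grpc"

	apiv1 "github.com/uber/cadence-idl/go/proto/api/v1"
)

// buildGRPCService wires a YARPC gRPC outbound to the Cadence frontend and
// wraps it in the Thrift-to-proto adapter, so existing Thrift-based client
// code keeps working while the wire protocol becomes gRPC.
func buildGRPCService(hostPort string) (workflowserviceclient.Interface, error) {
	dispatcher := yarpc.NewDispatcher(yarpc.Config{
		Name: "cadence-client",
		Outbounds: yarpc.Outbounds{
			// "cadence-frontend" is the service name the frontend registers under.
			"cadence-frontend": {Unary: grpc.NewTransport().NewSingleOutbound(hostPort)},
		},
	})
	if err := dispatcher.Start(); err != nil {
		return nil, err
	}

	config := dispatcher.ClientConfig("cadence-frontend")
	// The adapter translates the SDK's Thrift requests/responses to the
	// proto-based gRPC API underneath.
	return compatibility.NewThrift2ProtoAdapter(
		apiv1.NewDomainAPIYARPCClient(config),
		apiv1.NewWorkflowAPIYARPCClient(config),
		apiv1.NewWorkerAPIYARPCClient(config),
		apiv1.NewVisibilityAPIYARPCClient(config),
	), nil
}
```

The returned service interface can be passed to your existing client and worker constructors unchanged; the main operational change is that hostPort should now point at the frontend's gRPC port (7833 by default) rather than the TChannel one (7933 by default).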

# Status at Uber

  • All clusters have been running gRPC traffic internally for 4 months without any issues.
  • Cross DC traffic has been switched to gRPC this month.
  • With internal tooling updated, we are starting to onboard services to use the Go SDK gRPC compatibility layer.

Do not hesitate to reach out to us (cadence-oss@googlegroups.com or slack (opens new window)) if you have any questions.

The Uber Cadence team


diff --git a/blog/2022/01/31/community-spotlight-january-2022/index.html b/blog/2022/01/31/community-spotlight-january-2022/index.html

Cadence Community Spotlight Update - January 2022

Welcome to our very first Cadence Community Spotlight update!

This monthly update focuses on news from the wider Cadence community and is all about what you have been doing with Cadence. Do you have an interesting project that uses Cadence? If so then we want to hear from you. Also if you have any news items, blogs, articles, videos or events where Cadence has been mentioned then that is good too. We want to showcase that our community is active and is doing exciting and interesting things.

Please see below for a short round up of things that have happened recently in the community.

On the 12th January 2022 we held our first Cadence Community Related Office Hours. This session was focused on discussing how we plan and organise things for the community. This includes things such as Code of Conduct, managing social media and making sure we regularly communicate project news and events.

And you can see that this monthly update is the result of the feedback from that session! We are happy to get any feedback or comments you may have. Please remember that this update is for you, so getting your feedback will help us improve it.

We will be planning other Community Related Office Hour sessions so please watch out for updates.

# Adopting a Cadence Community Code of Conduct

Some of you may already know that our community has adopted this version of the Contributor Covenant (opens new window) as our Code of Conduct. We want our community to be an open, welcoming and supportive place where everyone can collaborate.

# Recording from Cadence Meetup Available

Please don't worry if you missed our online November Cadence meetup (opens new window), because the recording is now available. You can find out more details about the meetup and get access to the recordings here (opens new window).

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.


diff --git a/blog/2022/02/28/community-spotlight-february-2022/index.html b/blog/2022/02/28/community-spotlight-february-2022/index.html

Cadence Community Spotlight Update - February 2022

Cadence Youtube (opens new window). Please subscribe and let us know what other videos you'd like to see there.

# Help us to Make Cadence even better

Are you interested in helping us improve Cadence? We are always looking for contributors to help share the workload. If you'd like to help then you can start by taking a look at our list of open issues (opens new window) on Github. We currently have 320 of them that need to be worked on, so if you want to learn more about Cadence and solve some of the reported issues then please take a look and volunteer to fix one.

If you are new to Cadence or you’d like to try something simple then we have some issues labelled as ‘good first issue’. These are a great place to start to get more Cadence experience.

# Cadence Calendar

We have created a Cadence public calendar (opens new window) where we can highlight events, meetings, webinars etc that are planned around Cadence. The calendar will soon be available on the Cadence website (opens new window) so please make sure that you check it regularly. This means that you can easily find out if there are any Cadence events planned that you would like to attend.

# Cadence Technical Office Hours

Our second Technical Office Hours event took place on Monday, February 28th at 9AM PST. The main objective was to provide Cadence support, respond to any questions, and share any knowledge that you have learned. We always encourage community members to come along - and thanks very much to everyone who participated.

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/03/31/community-spotlight-update-march-2022/index.html b/blog/2022/03/31/community-spotlight-update-march-2022/index.html

Cadence Community Spotlight Update - March 2022
Thu Mar 31 2022

Welcome to our Cadence Community Spotlight update!

This is the latest in our series of monthly blog posts focused on the Cadence community and news about what you have been doing with Cadence.

Please see below for a short activity roundup of what has happened recently in the community.

# Updated Cadence Topology Diagram

Did you know that we have an updated Cadence Service diagram on the website? Well we do - and you can find it on our Deployment Topology (opens new window) page. We are always looking for information that helps make it easier for people to understand how Cadence works.

Special thanks to Ben Slater for updating the diagram and also to Ender, Emrah and Long for helping review it.

# Monthly Cadence Technical Office Hours

Every month we hold a Technical Office Hours session via Zoom where you can speak directly with some of our Cadence experts. If you have a question about Cadence or are facing a particular issue getting it setup then please come along and chat to one of our experts!

Meetings are held on the last Monday of every month so make sure you mark the dates in your calendars. Our next session will be on the 25th April at 9am PT so hope to see you there!

The Cadence Community Calendar (opens new window) contains the Zoom link for the meeting and details of any other events planned so please check it regularly.

# Some Cadence Statistics

This month we thought it would be interesting to post some statistics about the Cadence community.

  • 1722 - the number of members in our #general Slack channel
  • 24 - the number of questions asked in our #support Slack channel during the month
  • 5 - the number of questions asked about Cadence in StackOverflow during the month
  • 105 - the number of contributors to the Cadence git repo
  • 9 - the number of community members who responded to a question during the month

# Using StackOverflow to Respond to Support Questions

We have over 1700 members in our #support channel on our Cadence Slack where some of you have been asking questions about Cadence. The community has been responding and provided some great answers that we don’t want to lose!

It can be difficult searching the Slack #support channel for a specific problem and we want to make sure that we capture all these great answers so that they can help others in the community.

So if possible we would like you to start posting your Cadence questions on StackOverflow (opens new window).

  • Create your question in StackOverflow
  • Post the StackOverflow question link in the Cadence Slack #support channel
  • A response to your question will be posted to StackOverflow

Other community members will be able to search StackOverflow for the details of your question and see the response. We hope that this will make it easier for people to find answers to common questions.

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/04/30/community-spotlight-update-april-2022/index.html b/blog/2022/04/30/community-spotlight-update-april-2022/index.html

Cadence Community Spotlight Update - April 2022

Welcome to our Cadence Community Spotlight update!

This is our monthly blog post series focused on news from in and around the Cadence community.

Please see below for a short activity roundup of what has happened recently in the community.

# SD Times Names Cadence Open Source Project of the Week

In April, Cadence was named open source project of the week by the SD Times. Being named gives the project some great publicity and shows that it is getting noticed. You can find a link to the article in the Cadence in the News section below.

# Follow Us on LinkedIn and Twitter!

We have now set up Cadence accounts on LinkedIn (opens new window) and Twitter (opens new window) where you can keep up to date with what is happening in the community. We will be using these social media accounts to share news, articles, stories and links related to Cadence - so please follow us!

And don’t forget to share your news with us. We are looking forward to receiving your feedback and comments. The more we interact - the more we build our community!

# Proposal to Change the Way We Write Workflows

If you haven't seen the proposal from community member Quanzheng Long (opens new window) about creating a new way to write Cadence workflows then please take a look: https://github.com/uber/cadence/issues/4785 (opens new window). He has already received some initial feedback and is currently working on putting together a proof of concept demo to show the community. As soon as we have more news about it - we will let you know!

# Help Us Improve Cadence

Do you want to help us improve Cadence? We are always looking for contributors, so any contribution you can make - however small - is welcome. If you would like to start contributing then please take a look at the list of Cadence Issues on Github (opens new window). We have some issues flagged with a tag of 'good first issue' that would be a great place to start.

Remember that we are not only looking for code contributions but also non coding ones such as documentation improvements so please take a look and select something to work on.

# Next Cadence Technical Office Hours: 30th May 2022

Every month we hold a Technical Office Hours session via Zoom where you can speak directly with some of our Cadence experts. If you have a question about Cadence or are facing a particular issue getting it setup then please come along and chat to one of our experts!

Meetings are held on the last Monday of every month so please make sure you mark the dates in your calendars. Our next session will be on the 30th May at 9am PT so hope to see you there!

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/05/31/community-spotlight-update-may-2022/index.html b/blog/2022/05/31/community-spotlight-update-may-2022/index.html

Cadence Community Spotlight Update - May 2022

Welcome to our regular Cadence Community Spotlight update!

This is our monthly blog post series focused on news from in and around the Cadence community.

Please see below for a short activity roundup of what has happened recently in the community.

# Cadence Polling Cookbook

Do you want to understand how polling works and see an example of how to set it up in Cadence? Well, a brand new Cadence Polling cookbook (opens new window) is now available that gives you all the details you need. The cookbook was created by several members of the Instaclustr (opens new window) team and they are keen to share it with the community. The pdf version of the cookbook can be found on the Cadence website under the Polling an external API for a specific resource to become available section of the Polling Use cases (opens new window).

A Github repository (opens new window) has also been created with the sample cookbook code for you to try out for yourself.

So please go ahead and try out the cookbook and don’t forget to let us have your feedback.
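
If you would like a quick feel for the pattern before opening the cookbook, here is a minimal Go sketch (not the cookbook's code - the activity, resource ID and 30-second interval are invented for illustration) of a workflow that polls until an external resource becomes available:

```go
package polling

import (
	"context"
	"time"

	"go.uber.org/cadence/workflow"
)

// CheckResourceActivity is a hypothetical stand-in for the real activity that
// calls the external API and reports whether the resource is available yet.
func CheckResourceActivity(ctx context.Context, resourceID string) (bool, error) {
	// Call the external API here.
	return false, nil
}

// PollUntilAvailableWorkflow repeatedly runs the check activity, sleeping on a
// durable timer between attempts, until the resource shows up.
func PollUntilAvailableWorkflow(ctx workflow.Context, resourceID string) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	for {
		var available bool
		if err := workflow.ExecuteActivity(ctx, CheckResourceActivity, resourceID).Get(ctx, &available); err != nil {
			return err
		}
		if available {
			return nil
		}
		// workflow.Sleep is a durable timer: no worker resources are held
		// while waiting, and the wait survives worker restarts.
		if err := workflow.Sleep(ctx, 30*time.Second); err != nil {
			return err
		}
	}
}
```

A production implementation would also bound the loop (for example by calling ContinueAsNew after a number of attempts) so that the workflow history does not grow without limit.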

# Congratulations to a First Time Contributor

We are always looking for ways to encourage project participation. It doesn't matter how large the contribution is, or whether it is coding or non-coding related. This month one of our community members had their first PR merged (opens new window) - so congratulations and many thanks for the contribution tonyxrandall (opens new window)!

# Share Your News!

Our #support Slack (opens new window) channel is always full of questions and activity, so we know that there are a lot of people out there exploring, trying out and setting up Cadence. We are always interested in hearing about what the community is doing, so if you have something you want to share as a blog post or as part of this monthly update then please contact us in the #community Slack (opens new window) channel.

# Next Cadence Technical Office Hours: 3rd and 27th June 2022

We will be having two Technical Office Hours sessions this month. As 30th May was a US holiday, we have moved May's Technical Office Hours to Friday 3rd June at 11am PT. And we will be having our June call on the 27th.

Remember that in these Zoom sessions you can speak directly with some of our Cadence experts so if you have a question about Cadence or are facing a particular issue getting it setup then please come along and chat to one of our experts!

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/06/30/community-spotlight-update-june-2022/index.html b/blog/2022/06/30/community-spotlight-update-june-2022/index.html

Cadence Community Spotlight Update - June 2022

It’s time for our monthly Cadence Community Spotlight update with news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Knowledge Sharing and Support

Our Slack #support channel has been busy, with 13 questions asked this month by 12 different community members. Six community members took time to respond to those questions, which clearly shows our community is growing, collaborating and keen to share knowledge.

Please don’t forget that we encourage everyone to post questions on StackOverflow using the cadence-workflow and uber-cadence tags so that others with similar questions or issues can easily search for and find an answer.

# Improving Technical Office Hours

Over the last few months we have been holding regular monthly Office Hours meetings, but they have not attracted as many participants as we would like. We would like to understand if there is something preventing people from attending (e.g. perhaps the timing or dates are not convenient), so we are planning to send out a short community survey.

If you have any ideas or comments about how we can improve our community office hours sessions then please include this in your feedback or contact us in the #community Slack channel.

# Cadence Stability Improvements

Is Cadence getting better? Yes it is! Many of you may have noticed that Cadence is improving. That is because of the amount of work being done behind the scenes. The Cadence core team has been doing a lot of work to stabilise Cadence functionality. Keep watching out for even more improvements!

# Sprechen Sie Deutsch?

Do you speak German? If you do, then we have some good news for you. A couple of Cadence blog posts have been translated into German to help promote it to a wider audience. The links are below and we hope you find them useful!

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/07/31/community-spotlight-update-july-2022/index.html b/blog/2022/07/31/community-spotlight-update-july-2022/index.html

Cadence Community Spotlight Update - July 2022
Sun Jul 31 2022

Here’s our monthly Community Spotlight update that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Flying Drones with Cadence

Community member Paul Brebner (opens new window) has released another blog (opens new window) in his series on using Cadence to manage a drone delivery service. You can see a simulated view of it in action (opens new window).

Don’t forget to try out the code yourself, and remember: if you have used Cadence to do something interesting then please let us know so we can feature it in our next update.

# GitHub Statistics

During July the main Cadence branch had 28 pull requests (PRs) merged. There were 214 files changed by 11 different authors. You can find more details here (opens new window).

The Cadence documentation repository was not as busy, with only 2 PRs merged in July, 5 commits and 3 authors active. More details can be found here (opens new window).

# Cadence Roadmap

The Cadence core team has been busy this month looking at the various community feedback for potential improvements and features for Cadence. Planning is already in place for a development roadmap, but it is still a little too early to say what will be included, so please watch out for future updates. All I know is that it's going to be exciting!

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/08/31/community-spotlight-august-2022/index.html b/blog/2022/08/31/community-spotlight-august-2022/index.html

Cadence Community Spotlight Update - August 2022

Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Community Survey

We are working on putting together our first community survey to find out a bit more about our community. We would like to get your feedback on a few things such as:

  • how you are using Cadence
  • any specific experiences you have had where you'd like to see new features
  • any special use cases not yet covered
  • and of course whatever other feedback you'd like to give us

So please watch out for the survey which will be coming out to you via the Slack channel soon!

# Support Activity

We have noticed that community activity is increasing and that we are continuing to respond to questions in our Slack #support channel. Eight questions have been posted in the channel this month and another seven questions have been posted on StackOverflow. We encourage people to post their questions on StackOverflow so that the response can be shared. You can also post a link to the StackOverflow question in the support channel to be extra sure it gets seen by our community members.

We are always looking forward to receiving more of your questions!

# GitHub Activity

Do you remember our GitHub statistics from last month? Don't worry if you don't! In July we had 28 pull requests (PRs) merged into the code repository. This month we have had 43 PRs merged (nearly double!) - so you can see that the level of activity is also increasing in terms of the project code.

If you are interested in contributing to Cadence then please take a look at our Contribution Guidelines (opens new window) and also our list of good first issues (opens new window) to work on.

# Come Along to Our Next Cadence Meetup!

It's been a while since we had a Cadence meetup so we have decided to organise another one. This time we are planning to do an in-person meetup in the San Francisco Bay area in early November. We are looking for any companies using Cadence to come along and speak about how they are using it. We'd also like to hear about any interesting use cases that you have used Cadence for.

If you are interested in speaking at our next meetup then please contact Ender Demirkaya (opens new window)

# Looking for a Cadence Role?

The Cadence team at Uber is recruiting for a Fullstack Engineer. If you are interested then please contact Ender Demirkaya (opens new window) for more details.

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/10/11/community-spotlight-september-2022/index.html b/blog/2022/10/11/community-spotlight-september-2022/index.html

Cadence Community Spotlight Update - September 2022
Tue Oct 11 2022

Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Cadence at Developer Week

A Cadence talk by Ender Demirkaya (opens new window) and Ben Slater (opens new window) has been accepted for Developer Week Enterprise (opens new window).

The talk is scheduled for 16th November, so please make a note in your calendars.

# Sharing Knowledge

Over the last few months we have had a continual stream of Cadence questions in our Slack (opens new window) #support channel or on StackOverflow (opens new window). As a result of the increased interest some members from the Cadence core team have decided to spend some time each day responding to your questions.

Remember that if you have received a response that has solved your problem, especially on StackOverflow, then please don't forget to accept the answer!

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/10/31/community-spotlight-october-2022/index.html b/blog/2022/10/31/community-spotlight-october-2022/index.html

Cadence Community Spotlight Update - October 2022

Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Cadence Meetup Postponed

It's always great to get the community together, and we had planned to run another Cadence Meetup in early November. Unfortunately we didn't have enough time to get things organised, so we've decided to postpone it. Please watch out for an announcement of the new Cadence meetup date.

# DoorDash Technical Showcase Featuring Cadence

We have had some great feedback from people who attended the Technical Showcase run this month by DoorDash. It featured their financial products but also highlighted some of the key technologies they use... and guess what - Cadence is one of them!

If you missed the session then you will be happy to know that it was recorded, and we've included a link to the recording on YouTube (opens new window).

Thanks to the DoorDash team for running the session and helping support Cadence by sharing their knowledge.

# iWF Support for Cadence

Community member Quanzheng Long (opens new window) has been busy working on a new project built on top of Cadence. The project is called iWF - Interpreter for Workflow (opens new window). It's great to see that Cadence is now growing its own ecosystem!

Please feel free to take a look and let Long know what you think!

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/11/30/community-spotlight-november-2022/index.html b/blog/2022/11/30/community-spotlight-november-2022/index.html

Cadence Community Spotlight Update - November 2022

Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Cadence @ Uber

This month Uber Engineering published a really nice article on one of the ways they are using Cadence. The article is called How Uber Optimizes the Timing of Push Notifications using ML and Linear Programming (opens new window).

The Uber team take you through the details of the problem they are looking to solve, including the scope, limitations and dependencies - so please take a look.

# Cadence @ DeveloperWeek Enterprise

DevNetwork runs a series of conferences, and during November Cadence was featured at DeveloperWeek Enterprise (opens new window). Ender Demirkaya (opens new window) and Ben Slater (opens new window) presented a talk called Express Complex Business Logic as Code with Open Source Cadence! (opens new window).

It is good to see that we are finding new channels to present the benefits of using Cadence. Huge thanks to Ben and Ender for the presentation, and to everyone who attended.

# Cadence at W-JAX

It must be presentation month, as we have had yet another Cadence presentation! Earlier this month a Cadence talk was featured at the W-JAX Conference (opens new window) in Munich, Germany. Merlin Walter (opens new window) presented a talk called Microservices - Modern Orchestration with Cadence (opens new window).

Session feedback received was very positive and it's great to see that new audiences are interested in learning more about Cadence and seeing how it works.

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

PLEASE NOTE: No Office Hours on 26th December 2022

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2022/12/23/community-spotlight-december-2022/index.html b/blog/2022/12/23/community-spotlight-december-2022/index.html

Cadence Community Spotlight Update - December 2022

I know we are a little early this month as many people will be taking some time out for holidays.

# Happy Holidays

We'd like to wish everyone happy holidays and to thank you for being part of the Cadence community. It's been a busy year for Cadence as we have continued to build a strong, active community that works together to solve issues and generally support each other.

Let's keep going! This is a great way to build a sustainable community.

We are sure that 2023 will be even more exciting as we continue to develop Cadence.

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2023/01/31/community-spotlight-january-2023/index.html b/blog/2023/01/31/community-spotlight-january-2023/index.html

Cadence Community Spotlight Update - January 2023
Tue Jan 31 2023

Happy New Year everyone! Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Closing Down Cadence Office Hours

We have been running Office Hours sessions every month since May last year. The aim was to give the community an opportunity to speak directly with some of the Cadence core developers and experts and get answers to questions on particular issues you may be having. We have found that the preferred method for community questions has been the support Slack channel, so we have decided to stop this monthly call.

Thanks very much to Ender Demirkaya (opens new window) and the Uber team for making themselves available for these sessions.

Please remember that if you have a question about Cadence or are facing a specific issue, then you can post your question in our #support Slack (opens new window) channel. If you also post the details on StackOverflow with the cadence-workflow tag, then there will be a searchable history for others who encounter the same issue to find a solution.

# Update on iWF Support for Cadence

Last October we featured an update in our monthly blog about iWF - Interpreter for Workflow (opens new window), a project built on top of Cadence by community member Quanzheng Long (opens new window). It was announced recently that iWF has released a Golang SDK (opens new window) and updated versions of the Java SDK and server (opens new window).

Long is really keen to get feedback, so please take a look at iWF, try it out, and send him any feedback. Long has also created a couple of blog posts about iWF that we have featured in the Cadence in the News section below, so please take a look.

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

No upcoming events at the moment.

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2023/02/28/community-spotlight-february/index.html b/blog/2023/02/28/community-spotlight-february/index.html

Cadence Community Spotlight Update - February 2023

Here’s the latest in our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Community Survey

We've been talking about doing a community survey for a while and during February we sent it out. We are still collating the results so it's not too late to send in your response.

The survey takes 5 minutes and is your opportunity to provide feedback to the project and highlight areas you think we need to focus on.

Use this Survey Link (opens new window)

Please take a few minutes to give us your opinion.

# Cadence and Temporal

During user surveys we've had a few queries about whether Cadence and Temporal (opens new window) are the same project. The answer is no - they are not the same project, but they do share the same origin. At a high level, Temporal is a fork of the Cadence project. Temporal and Cadence are now being developed by different communities, so they are independent.

# Cadence at DoorDash

Although it was published a few months ago, we missed including an article by DoorDash (opens new window) about how they are using Cadence to build real-time event processing with Apache Flink (opens new window) and Apache Kafka (opens new window).

Here is the link to the article: Building Scalable Real Time Event Processing with Kafka and Flink (opens new window)

Remember to let us know if you have news, articles or blog posts about Cadence that you'd like us to include in these monthly updates.

# Cadence in the News!

Below are a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


diff --git a/blog/2023/03/11/community-spotlight-update-march-2024/index.html b/blog/2023/03/11/community-spotlight-update-march-2024/index.html

Cadence Community Spotlight Update - March 2024

Welcome back to the latest in our regular Cadence community spotlight updates, where we aim to deliver news from in and around the Cadence community! It’s been a few months since our last update (opens new window), so I have a bunch of exciting updates to share.

Let’s get started!

# Proposal for Cadence Plugin System

Community member Mantas Sidlauskas (opens new window) drafted a thorough proposal around putting together a plugin system in Cadence. Aimed at enhancing the flexibility of integrating various components like storage, document search, and archival, this system encourages the use of external plugins, promoting innovation and reducing dependency complications. Your insights and feedback are crucial; learn more and contribute your thoughts at the link below:

A huge thank you to Mantas for initiating this work. This is an excellent example of how we can collaborate together to bring about new features that benefit us all.

# Admin API Permissions Rethinking

The community is deliberating on the permission requirements for the Admin API DescribeCluster endpoint. This vital discussion aims to ensure Cadence web's accessibility across different user levels. We're exploring various solutions and your participation would greatly influence the decision-making process. Feel free to chime in here (opens new window)!

# New Java Samples for Cadence: Signal Workflow Interactions

In some exciting news for Java enthusiasts, a new sample has been added to the Cadence Java Samples repository, demonstrating how to initiate and interact with a signal workflow using the Cadence client. This practical example is a huge win for developers looking to deepen their understanding of workflow signaling in Java. Explore the new sample and expand your Cadence toolkit here (opens new window).
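
The sample itself is in Java, but the signaling pattern looks much the same in any Cadence SDK. For readers new to signaling, here is a rough, hypothetical Go sketch (the workflow, signal name and payload below are invented for illustration, not taken from the Java sample):

```go
package signaldemo

import (
	"context"

	"go.uber.org/cadence/client"
	"go.uber.org/cadence/workflow"
)

// ApprovalWorkflow blocks until a signal named "approval" is delivered,
// then proceeds based on the signalled value.
func ApprovalWorkflow(ctx workflow.Context) error {
	var approved bool
	ch := workflow.GetSignalChannel(ctx, "approval")
	ch.Receive(ctx, &approved) // suspends durably until the signal arrives
	if !approved {
		return nil // handle rejection as appropriate
	}
	// ... continue with the approved path ...
	return nil
}

// SendApproval signals the running workflow from outside via the Cadence client.
func SendApproval(ctx context.Context, c client.Client, workflowID string) error {
	// An empty runID targets the workflow's current run.
	return c.SignalWorkflow(ctx, workflowID, "", "approval", true)
}
```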

# New GoLang client & Cadence Web Enhancements

Updates to the Cadence GoLang Client and Cadence Web have been rolled out, bringing new features and improvements that streamline user experiences. Highlights include upgraded Cassandra images, refined workflow interceptors, and more intuitive Cadence Web interfaces. Discover the full scope of updates on our GitHub repositories.

# Release Updates: v1.2.6 & v1.2.7

Cadence recently saw the release of versions v1.2.6 and v1.2.7, featuring significant improvements and fixes that enhance the overall Cadence experience. These updates reflect a commitment to respond to the community's valuable feedback. Check out the detailed release notes on the GitHub releases page (opens new window)!

# Cadence in the News!

Below is a selection of Cadence related articles and blogs. Take a look and feel free to share your own with us via your own social media channels!

# Recent Events

Check out this recent webinar, "Building with Cadence: Quantifiable Efficiency," available on-demand now. Discover the robust features of Cadence and how it can streamline the development of distributed applications through an engaging demonstration by John Del Castillo (opens new window).




That’s all for this month!

Your engagement and contributions are what make the Cadence community thrive. Whether you have innovative ideas, insightful feedback, or just want to chat about Cadence, we encourage you to join our Slack #community channel (opens new window).

We're committed to making this update as useful and informative as possible, so please share any feedback or suggestions you might have. Let’s keep building a vibrant and collaborative Cadence community together!

Looking forward to sharing more exciting updates next month!


diff --git a/blog/2023/03/31/community-spotlight-march-2023/index.html b/blog/2023/03/31/community-spotlight-march-2023/index.html

Cadence Community Spotlight Update - March 2023

Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Cadence at Open Source Summit, North America

We are very pleased to let you know that a talk on Cadence has been accepted for the Linux Foundation's Open Source Summit, North America (opens new window) in Vancouver on 10th - 12th May 2023.

The talk, called Cadence: The New Open Source Project for Building Complex Distributed Applications (opens new window), will be given by Ender Demirkaya (opens new window) and Emrah Seker (opens new window). If you are planning to attend the Open Source Summit then please don't forget to attend the talk and take time to catch up with Ender and Emrah!

# Community Activity

Our Slack #support channel has been very active over the last few months as we continue to receive a steady stream of questions. Here are the stats:

  • February 2023 : 16 questions asked
  • March 2023 : 12 questions asked

All of these questions are being answered collaboratively by the community. Thanks everyone for sharing your knowledge and we are looking forward to receiving more of your questions!

# Cadence Developer Advocate

Please welcome Yizhe Qin - the new Cadence Developer Advocate from the Uber team who will be working to help support the community.

Yizhe's role will involve responding to support questions, organising documentation and anything else that will help keep the community running smoothly.

Please feel free to say Hi to Yizhe on the Slack channel!

# Cadence in the News!

Below is a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


- + diff --git a/blog/2023/06/08/survey-results/index.html b/blog/2023/06/08/survey-results/index.html index 684fbecaa..f830cf905 100644 --- a/blog/2023/06/08/survey-results/index.html +++ b/blog/2023/06/08/survey-results/index.html @@ -5,6 +5,8 @@ 2023 Cadence Community Survey Results + + - + @@ -143,6 +145,6 @@ Thu Jun 08 2023

We released a user survey earlier this year to learn about who our users are, how they use Cadence, and how we can help them. It was shared from our Slack workspace (opens new window), cadenceworkflow.io (opens new window) Blog and LinkedIn (opens new window). After collecting the feedback, we wanted to share the results with our community. Thank you everyone for filling it out! Your feedback is invaluable and it helps us shape our roadmap for the future.

Here are some highlights in text and you can check out the visuals to get more details:

using.png

job_role.png

Most of the people who replied to our survey were engineers who were already using Cadence, actively evaluating, or migrating from a similar technology. This was exciting to hear! Some of you have contacted us to learn more about benchmarks, scale, and ideal use cases. We will share more guidelines about this but until then, feel free to contact us over our Slack workspace for guidance.

scale.png

The scale at which our users operate Cadence varies from thousands to billions of workflows per month. It was exciting to see it being used in both small and large scale companies.

time_zone.png

More survey respondents were from Europe than from any other region. This is in line with the Cadence team growing its presence in Europe. Users from different places also contacted us about contributing to Cadence as a follow-up to the survey. We will start putting up-for-grabs and new-starter tasks on Github. Several respondents wanted to meet over a Zoom call to discuss their use cases and best practices. As the Cadence team has a presence in both the EU and the US, we welcome all our users to contact us anytime. Slack is the fastest way to reach us.

following.png

channels.png

Cadence is followed most on Slack (opens new window), then Github (opens new window) and LinkedIn (opens new window). We are the most active on Slack and we plan to be more active in the other mediums as well.

scenarios.png

All of our main use cases were represented across the board. While we listed the most common cases, several others were mentioned in the comments: enhanced timers, leader election, etc.

We found out that Cadence has been used in several science communities. Some of them were using community-built clients and asked whether we are going to support more languages. We are planning to take ownership of the Python and Javascript/Typescript clients and support them officially.

improvement.png

Documentation is by far the area where our users most want improvements. We are revamping our documentation and there will soon be major changes on our website.

help_stage.png

Other requests were about observability, debuggability, operability, and usability. These areas have been our main focus this year and we are planning to release updates and blogs about them.

support.png

We noticed most of our users need help once a month or more. While we welcome questions and discussions over the mediums mentioned above, we plan to make more public posts about the common issues using our blog, StackOverflow, LinkedIn, or Twitter.

Many users wanted to hear more from Cadence about the roadmap and its growth. Our posts about these will be released soon. Expect more posts about upcoming features, investments, scale, and community updates. Follow us at LinkedIn (opens new window) for such updates.

Our users are interested in learning more about guidelines and capacity expectations, both on-prem and in managed solutions. While we have been providing feedback on a per-user basis, we plan to release more generic guidelines along with the observability updates mentioned above.

We also would like to thank our community for the increased interest and engagement with us! Cadence has been more active in different mediums (LinkedIn, Slack, blog, etc.) this year. In the first quarter, we observed that our user base and activities almost doubled (+96% and +90% respectively) through both new and returning users. Based on such immediate positive reactions, we will keep increasing our community investments in different channels.


- + diff --git a/blog/2023/06/30/community-spotlight-june-2023/index.html b/blog/2023/06/30/community-spotlight-june-2023/index.html index 8618ac15f..674c050ea 100644 --- a/blog/2023/06/30/community-spotlight-june-2023/index.html +++ b/blog/2023/06/30/community-spotlight-june-2023/index.html @@ -5,6 +5,8 @@ Cadence Community Spotlight Update - June 2023 + + - + @@ -146,6 +148,6 @@

We've had a short break but now we are back. Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Cadence Release 1.0

Just in case you missed it - at the end of April Cadence v1.0 (opens new window) was officially released. This release is a significant milestone for the project and the community. It indicates that we are confident enough in the stability of the code to recommend it and promote it widely to more users. Kudos to everyone who worked together to make this release happen.

And the Uber team also gave Cadence a writeup on the Uber Engineering Blog (opens new window) so please take a look.

# Community Survey Results

The results of our Community Survey have been published and you can find the details right here on our blog (opens new window). From the results we can see that:

  • our community is a good mix of people using, evaluating, testing or thinking about migrating to Cadence
  • Software Engineers featured highly as a community user profile
  • Europe seems to be the most common community timezone
  • People prefer using our Slack channel for questions
  • Debugging is what most people need help with

Thank you to Ender for compiling the data and to everyone who participated.

# Cadence Video: Open Source Summit, North America

In May Ender Demirkaya (opens new window) gave a talk on Cadence at the Linux Foundation's Open Source Summit, North America (opens new window) in Vancouver. The presentation attracted a sizeable audience and was very well received. There were also a lot of questions from the audience, which is a sign that Cadence sounded potentially useful to them.

A recording of the talk Cadence: The New Open Source Project for Building Complex Distributed Applications (opens new window) is now available.

# Overcoming Potential Workflow Versioning Maintenance Challenges

Community member Quanzheng Long (opens new window) has written a detailed article on Medium (opens new window) about some of the potential maintenance challenges of workflow versioning. It's a short read and has some good examples that explain the potential problems and identify some approaches for dealing with them.

Thanks Long for sharing this knowledge with the community!

# Cadence in the News!

Below is a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

  • None

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


- + diff --git a/blog/2023/07/01/components-of-cadence-application-setup/index.html b/blog/2023/07/01/components-of-cadence-application-setup/index.html index 9c366725e..3f9f34a44 100644 --- a/blog/2023/07/01/components-of-cadence-application-setup/index.html +++ b/blog/2023/07/01/components-of-cadence-application-setup/index.html @@ -5,6 +5,8 @@ Understanding components of Cadence application + + - + @@ -143,6 +145,6 @@

Cadence is a powerful, scalable, and fault-tolerant workflow orchestration framework that helps developers implement and manage complex workflow tasks. In most cases, developers contribute activities and workflows directly to their codebases, and they may not have a full understanding of the components behind a running Cadence application. We receive numerous inquiries about setting up Cadence in a local environment from scratch for testing. Therefore, in this article, we will explore the components that power a Cadence cluster.

There are three critical components that are essential for any Cadence application:

  1. A running Cadence backend server.
  2. A registered Cadence domain.
  3. A running Cadence worker that registers all workflows and activities.

Let's go over these components in more detail.

The Cadence backend serves as the heart of your Cadence application. It is responsible for processing and scheduling your workflows and activities. While the backend relies on various dependencies, our team has conveniently packaged them into a single Docker image. You can follow the instructions provided here.
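If it helps to make this concrete, a typical local setup looks like the following sketch (the compose file path reflects the uber/cadence repository layout at the time of writing and may change between versions):

curl -O https://raw.githubusercontent.com/uber/cadence/master/docker/docker-compose.yml
docker-compose up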

The Cadence domain functions as the namespace for your Cadence workflows. It helps segregate your workflows into manageable groups. When running workflows, you must specify the domain on which you want to execute them.
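For example, a domain for local testing might be registered with the CLI like this (a sketch; flag spellings can vary between CLI versions, and --retention here is assumed to set the retention period in days):

cadence --env development --domain test-domain domain register --retention 1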

The Cadence worker, also known as the worker service, is a separate binary process that you need to implement in order to host your workflows and activities. When developing a worker, ensure that all your workflows and activities are properly registered with it. The worker is an actively running application, and you have the freedom to choose the hosting technologies that best suit your needs, such as a simple HTTP or gRPC application.
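To make this concrete, below is a minimal Go worker sketch. The domain, task list and frontend address are illustrative defaults, and the registration helpers have shifted slightly across client versions, so treat this as a starting point rather than a fixed API:

package main

import (
	"go.uber.org/cadence/.gen/go/cadence/workflowserviceclient"
	"go.uber.org/cadence/worker"
	"go.uber.org/cadence/workflow"
	"go.uber.org/yarpc"
	"go.uber.org/yarpc/transport/tchannel"
	"go.uber.org/zap"
)

// helloWorldWorkflow is a trivial placeholder workflow for this sketch.
func helloWorldWorkflow(ctx workflow.Context, name string) (string, error) {
	workflow.GetLogger(ctx).Info("Workflow started.")
	return "Hello " + name + "!", nil
}

// buildCadenceClient dials the Cadence frontend (default port 7933) over
// YARPC/TChannel and returns a workflow service client.
func buildCadenceClient() workflowserviceclient.Interface {
	ch, err := tchannel.NewChannelTransport(tchannel.ServiceName("test-worker"))
	if err != nil {
		panic(err)
	}
	dispatcher := yarpc.NewDispatcher(yarpc.Config{
		Name: "test-worker",
		Outbounds: yarpc.Outbounds{
			"cadence-frontend": {Unary: ch.NewSingleOutbound("127.0.0.1:7933")},
		},
	})
	if err := dispatcher.Start(); err != nil {
		panic(err)
	}
	return workflowserviceclient.New(dispatcher.ClientConfig("cadence-frontend"))
}

func main() {
	logger, _ := zap.NewDevelopment()

	// A worker polls one task list within one domain.
	w := worker.New(buildCadenceClient(), "test-domain", "test-worker", worker.Options{
		Logger: logger,
	})

	// Register every workflow and activity this worker should host.
	w.RegisterWorkflow(helloWorldWorkflow)

	// Run starts polling for decision (and activity) tasks and blocks.
	if err := w.Run(); err != nil {
		logger.Fatal("worker failed", zap.Error(err))
	}
}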

Ultimately, you will need to set up two running processes on your local machine: the Cadence server and the worker. Additionally, you must register the Cadence domain as a resource. Our team has packaged all these components into user-friendly tools, which you can find on our website.


- + diff --git a/blog/2023/07/05/implement-cadence-worker-from-scratch/index.html b/blog/2023/07/05/implement-cadence-worker-from-scratch/index.html index b43d8c511..a322dfc4d 100644 --- a/blog/2023/07/05/implement-cadence-worker-from-scratch/index.html +++ b/blog/2023/07/05/implement-cadence-worker-from-scratch/index.html @@ -5,6 +5,8 @@ Implement a Cadence worker service from scratch + + - + @@ -300,6 +302,6 @@ 2023-07-03T11:46:46.267-0700 INFO internal/internal_worker.go:838 Worker has no activities registered, so activity worker will not be started. {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "35987@uber-C02F18EQMD6R@test-worker@90c0260e-ba5c-4652-9f10-c6d1f9e29c1d"} 2023-07-03T11:46:46.267-0700 INFO cadence-worker/main.go:75 Started Worker. {"worker": "test-worker"}

You may see these logs because your worker is running successfully, but we haven't registered any workflows or activities with it yet. In the next tutorial, we will learn how to write a simple hello world workflow for your Cadence application.


- + diff --git a/blog/2023/07/10/cadence-bad-practices-part-1/index.html b/blog/2023/07/10/cadence-bad-practices-part-1/index.html index 2585c025f..4481ae1a3 100644 --- a/blog/2023/07/10/cadence-bad-practices-part-1/index.html +++ b/blog/2023/07/10/cadence-bad-practices-part-1/index.html @@ -5,6 +5,8 @@ Bad practices and Anti-patterns with Cadence (Part 1) + + - + @@ -140,6 +142,6 @@

In this blog series, we will discuss common bad practices and anti-patterns related to Cadence. As diverse teams encounter distinct business use cases, it is important to address the most frequently reported issues in Cadence workflows. To provide valuable insights and guidance, the Cadence team has compiled these common challenges based on customer feedback.

  • Reusing the same workflow ID for very active/continuously running workflows

Cadence organizes workflows based on their unique IDs, using a process called partitioning. If a workflow receives a large number of updates in a short period of time or frequently starts new runs using the continueAsNew function, all these updates will be directed to the same shard. The Cadence backend is not equipped to handle this concentrated workload efficiently, so a situation known as a "hot shard" arises, overloading the Cadence backend and worsening the problem.

Solution: The best way to avoid this is to design your workflows so that each one gets a workflow ID that is uniformly distributed across your Cadence domain. This ensures the Cadence backend can distribute traffic evenly by partitioning on your workflow IDs, as sketched below.
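For instance, instead of reusing one hard-coded ID, a starter can derive a unique ID per execution. This sketch assumes the Go client and the github.com/google/uuid package; the workflow type and task list names are illustrative:

package main

import (
	"context"
	"time"

	"github.com/google/uuid"
	"go.uber.org/cadence/client"
)

// startOrder starts one workflow per order with a unique, uniformly
// distributed workflow ID, so executions spread across shards instead of
// hammering the single shard a reused ID would hash to. Construction of
// cadenceClient is omitted here.
func startOrder(ctx context.Context, cadenceClient client.Client, orderID string) error {
	opts := client.StartWorkflowOptions{
		// Unique per execution: avoids funneling every update into one shard.
		ID:                           "order-" + orderID + "-" + uuid.NewString(),
		TaskList:                     "orders",
		ExecutionStartToCloseTimeout: time.Hour,
	}
	_, err := cadenceClient.StartWorkflow(ctx, opts, "main.processOrderWorkflow", orderID)
	return err
}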

  • Excessive batch jobs or an enormous number of timers triggered at the same time

Cadence is capable of handling a large number of concurrent tasks initiated simultaneously, but abusing this capability can lead to issues within the Cadence system. Consider a scenario where millions of jobs are scheduled to start at the same time and are expected to finish within a specific time interval. Cadence faces the challenge of understanding the desired behavior of customers in such cases: it is uncertain whether the intention is to complete all jobs simultaneously, provide progressive updates in parallel, or finish all jobs before a given deadline. This ambiguity arises from the independent nature of each job and the difficulty of predicting their outcomes.

Moreover, Cadence workers utilize a sticky cache by default to optimize the runtime of workflows. However, when an overwhelming number of parallel workflows cannot fit into the cache, it can result in cache thrashing. This, in turn, leads to a quadratic increase in runtime complexity, specifically O(n^2), exacerbating the overall performance of the system.

Solution: There are multiple ways to address this issue. Customers can either run jobs in smaller batches or add jitter when starting workflows to randomly distribute timers within a certain timeframe, as sketched below.
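One sketch of the jitter approach in Go: random values inside a workflow must go through workflow.SideEffect so that replay sees the same value as the original execution. Function and parameter names here are ours, not a fixed API:

package main

import (
	"math/rand"
	"time"

	"go.uber.org/cadence/workflow"
)

// sleepWithJitter delays the workflow by base plus a random offset of up to
// maxJitter (must be > 0), spreading out timers that would otherwise all
// fire at the same instant.
func sleepWithJitter(ctx workflow.Context, base, maxJitter time.Duration) error {
	var jitter time.Duration
	// SideEffect records the random value in history, so replays observe
	// exactly the same jitter as the original execution.
	if err := workflow.SideEffect(ctx, func(ctx workflow.Context) interface{} {
		return time.Duration(rand.Int63n(int64(maxJitter)))
	}).Get(&jitter); err != nil {
		return err
	}
	return workflow.Sleep(ctx, base+jitter)
}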


- + diff --git a/blog/2023/07/16/write-your-first-workflow-with-cadence/index.html b/blog/2023/07/16/write-your-first-workflow-with-cadence/index.html index b0ff3c106..d5cf483b4 100644 --- a/blog/2023/07/16/write-your-first-workflow-with-cadence/index.html +++ b/blog/2023/07/16/write-your-first-workflow-with-cadence/index.html @@ -5,6 +5,8 @@ Write your first workflow with Cadence + + - + @@ -180,6 +182,6 @@

Let's try to run a Cadence workflow using Cadence CLI.
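The command below assumes a worker hosting a workflow registered as main.helloWorldWorkflow. For reference, a minimal definition consistent with the input and result shown in this post could look like this sketch (it belongs in the worker binary from the previous tutorial):

package main

import "go.uber.org/cadence/workflow"

func init() {
	// workflow.Register records the function under its qualified name,
	// main.helloWorldWorkflow, which is what --workflow_type refers to.
	workflow.Register(helloWorldWorkflow)
}

// helloWorldWorkflow greets the given name: the input '"World"' produces
// the result "Hello World!".
func helloWorldWorkflow(ctx workflow.Context, name string) (string, error) {
	workflow.GetLogger(ctx).Info("helloWorldWorkflow started")
	return "Hello " + name + "!", nil
}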

cadence --env development --domain test-domain workflow start --et 60 --tl test-worker --workflow_type main.helloWorldWorkflow --input '"World"'
 

You should see a Hello World log like the following:

2023-07-16T12:09:11.858-0700    INFO    cadence-worker/code.go:104      Workflow completed. {"Domain": "test-domain", "TaskList": "test-worker", "WorkerID": "13585@uber-C02F18EQMD6R@test-worker@42f8a76f-cc42-4a0d-a001-7f7959d5d623", "WorkflowType": "main.helloWorldWorkflow", "WorkflowID": "8cb7fb2a-243b-43f8-82d9-48d758c9d62f", "RunID": "3c070007-89c3-4e00-a039-19a86b2f9224", "Result": "Hello World!"}
 

Congratulations, you have successfully run your very first Cadence workflow.

As a bonus, the Cadence team has also developed a web dashboard that visualizes the history of all workflows you have run when you start the Cadence server. Check http://localhost:8088 to see a dashboard like this.

cadencde-ui

This web portal lists all workflows you have run recently. Search for the domain you used for this tutorial; in our case, type test-domain and hit enter. You will see a list of workflows with detailed information. Feel free to explore the web UI and raise your suggestions in our Github repo (opens new window).

cadence-ui-detailed

In upcoming blogs, we will cover more advanced topics and use cases with Cadence.


- + diff --git a/blog/2023/07/31/community-spotlight-july-2023/index.html b/blog/2023/07/31/community-spotlight-july-2023/index.html index 0185d53c2..4b33d885b 100644 --- a/blog/2023/07/31/community-spotlight-july-2023/index.html +++ b/blog/2023/07/31/community-spotlight-july-2023/index.html @@ -5,6 +5,8 @@ Cadence Community Spotlight Update - July 2023 + + - + @@ -148,6 +150,6 @@

Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# Getting Started with Cadence

Are you new to Cadence and want to understand the basic concepts and architecture? Well we have some great information for you!

Community member Chris Qin (opens new window) has written a short blog post (opens new window) that takes you through the three main components that make up a Cadence application. Please take a look and feel free to give us your comments and feedback.

Thanks Chris for sharing your knowledge and helping others to get started.

# Cadence Go Client v1.0 Released

This month saw the release of v1.0 of the Cadence Go Client (opens new window). Note that the work done on this release was a result of community feedback asking for it - so we are listening and responding to community needs.

Thanks very much to everyone who worked hard to get this release out!

# Cadence Release Strategy

A recent discussion on the Cadence release strategy was posted in Cadence Github Discussions (opens new window) (and also in the #general channel on our Slack (opens new window)) about the approach we'd like to take for future releases. As a community we want to ensure code stability and not burden people with having to upgrade frequently.

Based on feedback from the community we will be introducing quarterly release cycles while also giving people the ability to make use of patches and minor releases. We will communicate the intention to make a release at least a month beforehand so that the community has time to finalise any features they want included in the upcoming release.

For those of you wanting to keep up to date or try out new features in between releases, the core team at Uber will continue to make patch and minor version updates available to the community.

As always we welcome your feedback so please feel free to add your thoughts and comments to the discussion.

# Cadence Helm Charts

Community member Mark Sagi-Kazar (opens new window) has been maintaining the Banzai Cloud Cadence Helm Charts for the community. As the Helm charts are a key tool for the community, we are planning to take over their maintenance.

Our plan is to move the charts into the Cadence repository and to maintain an official and supported Kubernetes solution with Cadence.

Huge thanks to Mark for all the work you have done and it's great to see the task being handed over and made into a community effort.

# Upcoming Events

  • None

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


- + diff --git a/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/index.html b/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/index.html index 296210947..b34eb37d4 100644 --- a/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/index.html +++ b/blog/2023/08/27/nondeterministic-errors-replayers-shadowers/index.html @@ -5,6 +5,8 @@ Non-deterministic errors, replayers and shadowers + + - + @@ -172,6 +174,6 @@

In this example, the workflow executes ActivityA and ActivityB in sequence. These activities may run other logic in the background, such as polling long-running operations or performing database reads and writes. Now, if the developer replaces ActivityA with another activity, ActivityC, a non-deterministic error can occur for an existing workflow: the workflow's recorded history contains results from ActivityA, but the changed definition expects results from ActivityC, so replay fails when it cannot match the recorded ActivityA events. Such issues can be detected by introducing replayers and shadowers in the workflow unit tests.
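As a sketch in Go (activity bodies, names and timeouts are illustrative):

package main

import (
	"context"
	"time"

	"go.uber.org/cadence/workflow"
)

func ActivityA(ctx context.Context) (string, error) { return "a", nil }
func ActivityB(ctx context.Context) (string, error) { return "b", nil }
func ActivityC(ctx context.Context) (string, error) { return "c", nil }

// Original definition: history for a run records ActivityA, then ActivityB.
func sampleWorkflow(ctx workflow.Context) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	})
	var a, b string
	if err := workflow.ExecuteActivity(ctx, ActivityA).Get(ctx, &a); err != nil {
		return err
	}
	return workflow.ExecuteActivity(ctx, ActivityB).Get(ctx, &b)
}

// Changed definition: replaying an existing run now fails, because the code
// schedules ActivityC where the recorded history contains ActivityA events.
func sampleWorkflowChanged(ctx workflow.Context) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	})
	var c, b string
	if err := workflow.ExecuteActivity(ctx, ActivityC).Get(ctx, &c); err != nil {
		return err
	}
	return workflow.ExecuteActivity(ctx, ActivityB).Get(ctx, &b)
}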

The Cadence workflow replayer is a testing component for replaying existing workflow histories against a workflow definition. You may think of the replayer as a mock that reruns your workflow with exactly the same history as your real workflow. The replaying logic is the same as the one used for processing workflow tasks. If it detects any incompatible changes, the replay test will fail. The Workflow Replayer works well when verifying compatibility against a small number of workflow histories. If there are lots of workflows in production that need to be verified, dumping all histories manually clearly won't work. Directly fetching histories from the Cadence server might be a solution, but the time to replay all workflow histories might be too long for a test.
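In the Go client this component is exposed as worker.NewWorkflowReplayer. Here is a sketch of a replay unit test for the workflow above, assuming a history dumped to a JSON file from a real run (the file name is a placeholder):

package main

import (
	"testing"

	"go.uber.org/cadence/worker"
	"go.uber.org/zap"
)

func TestSampleWorkflowReplay(t *testing.T) {
	replayer := worker.NewWorkflowReplayer()
	// Register the current (possibly changed) workflow definition.
	replayer.RegisterWorkflow(sampleWorkflow)

	logger, _ := zap.NewDevelopment()
	// Replays the dumped history against the registered definition and
	// fails on any incompatible change.
	if err := replayer.ReplayWorkflowHistoryFromJSONFile(logger, "sample_workflow_history.json"); err != nil {
		t.Fatal("replay failed, likely a non-deterministic change:", err)
	}
}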

The Workflow Shadower is built on top of the Workflow Replayer to address this problem. The basic idea of shadowing is: scan workflows based on the filters you defined, fetch the history for each workflow in the scan result from the Cadence server, and run the replay test. It can be run either as a test, for local development purposes, or as a workflow in your worker that continuously replays production workflows.

You may find detailed instructions on how to use replayers and shadowers on our website (opens new window). We will introduce versioning in upcoming blogs.


- + diff --git a/blog/2023/08/31/community-spotlight-august-2023/index.html b/blog/2023/08/31/community-spotlight-august-2023/index.html index 98f469030..eb451afbf 100644 --- a/blog/2023/08/31/community-spotlight-august-2023/index.html +++ b/blog/2023/08/31/community-spotlight-august-2023/index.html @@ -5,6 +5,8 @@ Cadence Community Spotlight Update - August 2023 + + - + @@ -151,6 +153,6 @@ Thu Aug 31 2023

Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

Please see below for a roundup of the highlights:

# More Cadence How To's

You might have noticed that we have had a few more contributions to our blog from Chris Qin (opens new window). Chris has been busy sharing insights and tips on a few important Cadence topics. The objective is to help the community with any potential problems.

Here are the latest topics:

Even if you have not encountered these use cases - it is good to be prepared and have a solution ready. Please take a look and let us have your feedback.

Chris is also going to take a look at the Cadence Samples (opens new window) to make sure they are all working and if not - he's going to re-write them so that they do!

Thanks very much Chris for all the work you are doing to help improve the project!

# More iWF Examples

Community member Quanzheng Long (opens new window) has also been busy writing this month. In previous blogs Long has told us about iWF (opens new window), a layer implemented on top of Cadence.

During August Long published a couple of articles on using the 'ContinueAsNew' functionality in iWF. Links to Part 1 and Part 2 are below:

Please take a look and if you've enjoyed reading them then let Long and us know!

# Cadence At the Helm!

Last month we mentioned the Cadence Helm charts and all the previous work that had been done by Mark Sagi-Kazar (opens new window). We were looking to ensure they are maintained.

So a special thanks goes out this month to Edmondo for contributing some work on the Cadence Helm Chart (opens new window).

# Community Support!

Our Slack (opens new window) channel continues to be the main place where people are asking for help and support with Cadence. During August (which is supposed to be holiday season), we still had 9 questions raised around various topics.

Huge thanks to the following community members who took time to respond and help others: David, Edmondo, Chris Qin, Rony Rahman and Ben Slater.

It's good to see that we are continuing to support each other - doing exactly what communities do!

# Cadence in the News!

Below is a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


- + diff --git a/blog/2023/11/30/community-spotlight-update-november-2023/index.html b/blog/2023/11/30/community-spotlight-update-november-2023/index.html index daa23c595..8a4106503 100644 --- a/blog/2023/11/30/community-spotlight-update-november-2023/index.html +++ b/blog/2023/11/30/community-spotlight-update-november-2023/index.html @@ -5,6 +5,8 @@ Cadence Community Spotlight Update - November 2023 + + - + @@ -147,6 +149,6 @@ Thu Nov 30 2023

Welcome to the latest of our regular monthly Community Spotlight updates that gives you news from in and around the Cadence community!

It's been a couple of months since our last update so we have a lot of updates to share with you.

Please see below for a roundup of the highlights:

# Proposal for Cadence Native Authentication

Community member Mantas Sidlauskas (opens new window) has drafted a proposal around Cadence native authentication and is asking for community feedback. If you are interested in reviewing the current proposal and providing comments or feedback then please find the proposal details at the link below:

This is a great example of how we can focus on collaborating together to find a collective solution. A big thank you to Mantas for initiating this work and we hope to see the results of the community input soon!

# iWF Deep Dive and More!

During the last few months community member Quanzheng Long (opens new window) has continued to share his thoughts about iWF (opens new window), a layer implemented on top of Cadence. Since our last update iWF now has a Python SDK (opens new window). Long has been busy writing articles to share iWF tips and tricks as well as some general ideas about workflows and processes. Links to Long's articles can be found below:

# New Go Samples for Cadence

The Cadence core team is deprecating the old samples for Go and replacing them with new version 2 (V2) samples. They have received a lot of feedback from the community that people have trouble with the old samples, so they are in the process of publishing a completely new set of samples for Go.

Here are some major changes to the new samples:

  • Easy to use and read - the new samples are driven entirely by CLI commands instead of running a binary. (This is consistent with the current Cadence user experience.)
  • Simple and transparent worker configuration - the old samples did not give users a clear demonstration of the relationship between the worker and the workflows themselves.
  • The new samples will help you bootstrap your Cadence workflow faster and easier.
  • More vivid and self-explanatory - instead of the traditional "HelloWorld" type of samples, we want to make them more interesting and engaging. (Each sample will simulate a real-life use case to make it more understandable and fun to learn!)

We hope the community will enjoy these changes. If you have any questions or an idea for a new sample then please reach out to Chris Qin (opens new window).

The new Go samples can be found at:

  • https://github.com/uber-common/cadence-samples/tree/master/new_samples.

Note that the old samples will be removed once the new samples are fully refreshed.

# Cadence Retrospective

We are nearly at the end of another year and yes, it has gone so fast! Over this year Cadence and the community have evolved and grown. This is a good time to reflect on everything that has happened in the project over the year and to think about a possible roadmap for the future.

If you have any feedback, or comments about the project or ideas about what features you'd like to see in the roadmap then please feel free to begin a discussion in the #community Slack (opens new window) channel.

# Cadence in the News!

Below is a selection of Cadence related articles, blogs and whitepapers. Please take a look and feel free to share via your own social media channels.

# Upcoming Events

If you have any news or topics you'd like us to include in our next update then please join our Slack (opens new window) #community channel.

Please remember that this update is for you - so if you have any comments or feedback that could help us improve it then please share it with us in the #community Slack (opens new window) channel.


- + diff --git a/blog/2024/03/10/cadence-non-deterministic-common-qa/index.html b/blog/2024/03/10/cadence-non-deterministic-common-qa/index.html index efbe81b81..046d67dd2 100644 --- a/blog/2024/03/10/cadence-non-deterministic-common-qa/index.html +++ b/blog/2024/03/10/cadence-non-deterministic-common-qa/index.html @@ -5,6 +5,8 @@ Cadence non-derministic errors common question Q&A (part 1) + + - + @@ -157,6 +159,6 @@ Cadence history. Changes to workflow definition will fail the replay process of Cadence as it finds the new workflow definition imcompatible with previous historical events.

Here is a list of common workflow definition changes.

  • Changing workflow parameter counts
  • Changing workflow parameter types
  • Changing workflow return types

The following changes are not categorized as definition changes and therefore will not trigger non-deterministic errors.

  • Changes of workflow return values
  • Changing workflow parameter names as they are just positional
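To illustrate both lists above with a sketch (function and parameter names are ours, not from a real codebase):

// Breaking - a parameter type change; inputs recorded in old histories
// no longer match the new signature:
//   before: func orderWorkflow(ctx workflow.Context, orderID string) error
//   after:  func orderWorkflow(ctx workflow.Context, orderID int) error
//
// Safe - a parameter rename; arguments are positional and names are not
// recorded in history:
//   before: func orderWorkflow(ctx workflow.Context, orderID string) error
//   after:  func orderWorkflow(ctx workflow.Context, id string) error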

# Does changing activity definitions trigger non-deterministic errors?

YES. Similar to workflow definition changes, this is also a very typical non-deterministic error.

Activities are also recorded and replayed by Cadence. Therefore, changes to activities must also be compatible with Cadence history. The following common changes trigger non-deterministic errors:

  • Changing activity parameter counts
  • Changing activity parameter types
  • Changing activity return types

As activity parameters are also positional, these two changes will NOT trigger non-deterministic errors:

  • Changes of activity return values
  • Changing activity parameter names

Activity return values inside workflows are not validated against the recorded history during replay.

# What changes inside workflows may potentially trigger non-deterministic errors?

Cadence records each workflow execution and the activity executions inside it. Therefore, new changes must be compatible with the execution order inside the workflow. The following changes will fail the non-determinism check:

  • Appending another activity
  • Deleting an existing activity
  • Reordering activities

If you really need to change the activity implementation based on new business requirements, you may consider versioning your workflow, as sketched below.
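Versioning is exposed in the Go client as workflow.GetVersion. Here is a sketch of gating a replaced activity behind a version marker (the change ID, activity names and timeouts are illustrative):

package main

import (
	"time"

	"go.uber.org/cadence/workflow"
)

func migratedWorkflow(ctx workflow.Context) error {
	ctx = workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	})

	var result string
	// Old executions replay the recorded DefaultVersion branch; new
	// executions record version 1 and run the new activity instead.
	v := workflow.GetVersion(ctx, "replace-activity-a", workflow.DefaultVersion, 1)
	if v == workflow.DefaultVersion {
		return workflow.ExecuteActivity(ctx, "main.ActivityA").Get(ctx, &result)
	}
	return workflow.ExecuteActivity(ctx, "main.ActivityC").Get(ctx, &result)
}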

# Are Cadence signals replayed? If the definition of a signal is changed, will it trigger non-deterministic errors?

Yes. If a signal is used in a workflow, it becomes a critical component of your workflow. Because signals also involve I/O to your workflow, they are also recorded and replayed. Modifications to signal definitions or usage may lead to non-deterministic errors - for instance, changing the return type of a signal.
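A sketch of why: the signal name and payload type below are effectively part of the workflow definition, so changing either is an incompatible change for already-recorded histories (names are illustrative):

package main

import "go.uber.org/cadence/workflow"

func approvalWorkflow(ctx workflow.Context) error {
	var approved bool
	// The payload decoded here is recorded in history; changing bool to a
	// struct (or renaming the signal) breaks replay of older executions.
	workflow.GetSignalChannel(ctx, "approval-signal").Receive(ctx, &approved)
	if !approved {
		return nil
	}
	// ... continue with the approved path ...
	return nil
}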

# If I have new business requirement and really need to change the definition of a workflow, what should I do?

You may introduce a new workflow registered with your worker and divert traffic to it, or use versioning for your workflow. Check out the Cadence website (opens new window) for more information about versioning.

# Do changes to local activities' definitions trigger non-deterministic errors?

Yes. Local activities are recorded and therefore replayed by Cadence. Incompatible changes to local activity definitions will lead to non-deterministic errors.


- + diff --git a/blog/2024/07/11/yearly-roadmap-update/index.html b/blog/2024/07/11/yearly-roadmap-update/index.html index 1868171f7..e6fc909c0 100644 --- a/blog/2024/07/11/yearly-roadmap-update/index.html +++ b/blog/2024/07/11/yearly-roadmap-update/index.html @@ -5,6 +5,8 @@ 2024 Cadence Yearly Roadmap Update + + - + @@ -146,6 +148,6 @@

# Introduction

If you haven’t heard about Cadence, this section is for you. In a short description, Cadence is a code-driven workflow orchestration engine. The definition itself may not tell you enough, so it helps to split it into three parts:

  • What’s a workflow? (everyone has a different definition)
  • Why does it matter to be code-driven?
  • Benefits of Cadence

# What is a Workflow?

workflow.png

In the simplest definition, it is “a multi-step execution”. A step here represents an individual operation that is a little heavier than a small in-process function call, although steps are not limited to those: a step could be a separate service call, processing a large dataset, map-reduce, a thread sleep, scheduling the next run, waiting for an external input, starting a sub-workflow, etc. It’s anything a user thinks of as a single unit of logic in their code. Those steps often have dependencies among themselves. Some steps, including the very first step, might require external triggers (e.g. a button click) or schedules. In the broader meaning, any multi-step function or service is a workflow in principle.

While the above is a more correct way to define workflows, specialized workflows are more widely known: data pipelines, directed acyclic graphs, state machines, cron jobs, (micro)service orchestration, etc. This is why everyone typically has a different meaning of workflow in mind. Specialized workflows also have simplified interfaces, such as a UI, configs or a DSL (domain-specific language), to make it easy to express the workflow definition.

# Code-Driven Workflows

Over time, any workflow interface evolves to support more scenarios. For any non-code (UI, config, DSL) technology, this means more APIs, concepts and tooling. However, eventually, the technology’s capabilities will be limited by its interface itself. Otherwise the interface will get more complicated to operate.

What happens here is that users love the seamless way of creating workflow applications and try to fit more scenarios into it. The natural tendency of users is to want to write any program with such simplicity and confidence.

Given this natural evolution of workflow requirements, it’s better to have a code-driven workflow orchestration engine that can meet any future needs with its powerful expressiveness. On top of this, it is ideal if the interface is seamless, where engineers learn as little as possible and change almost nothing in their local code to write a distributed and durable workflow code. This would virtually remove any limitation and enable implementing any service as a workflow. This is what Cadence aims for.

# Benefits

cadence-benefits.png

With Cadence, many overheads that need to be built for any well-supported service come for free. Here are some highlights (see cadenceworkflow.io (opens new window)):

  • Disaster recovery is supported by default through data replication and failovers
  • Strong multi-tenancy support in Cadence clusters, with capacity and traffic management
  • Users can use Cadence APIs to start and interact with their workflows instead of writing new APIs for them
  • They can schedule their workflows (distributed cron, scheduled start) or any step in their workflows
  • They have tooling to get updates or cancel their workflows.
  • Cadence comes with default metrics and logging support so users already get great insights about their workflows without implementing any observability tooling.
  • Cadence has a web UI where users can list and filter their workflows, inspect workflow/activity inputs and outputs.
  • They can scale their service just like true stateless services even though their workflows maintain a certain state.
  • Behavior on failure modes can easily be configured with a few lines, providing high reliability.
  • With Cadence testing capabilities, they can write unit tests or test against production data to prevent backward incompatibility issues.

# Project Support

# Team

Today the Cadence team comprises 26 people. We have people working from Uber’s US offices (Seattle, San Francisco and Sunnyvale) as well as Europe offices (Aarhus-DK and Amsterdam-NL).

# Community

Cadence is an actively built open source project. We invest in both our internal and open source community (Slack (opens new window), Github (opens new window)), responding to new features and enhancements.

# Scale

It’s one of the most popular platforms at Uber executing ~100K workflow updates per second. There are about 30 different Cadence clusters, several of which serve hundreds of domains. There are ~1000 domains (use cases) varying from tier 0 (most critical) to tier 5 scenarios.

# Managed Solutions

While Uber doesn’t officially sell a managed Cadence solution, there are companies (e.g. Instaclustr (opens new window)) in our community that we work closely with selling Managed Cadence. Due to efficiency investments and other factors, it’s significantly cheaper than its competitors. It can be run in users’ on-prem machines or their cloud service of choice. Pricing is defined based on allocated hosts instead of number of requests so users can get more with the same resources by utilizing multi-tenant clusters.

# After V1 Release

Last year, around this time we announced Cadence V1 (opens new window) and shared our roadmap. In this section we will talk about updates since then. At a high level, you will notice that we continue investing in high reliability and efficiency while also developing new features.

# Frequent Releases

We announced plans to make more frequent releases last year, and we have followed through. Today we aim to release biweekly and sometimes release as frequently as weekly. Regarding the format, we listened to our community and heard that overly frequent releases could be painful. Therefore, we decided to increment the patch version with each release while incrementing the minor version roughly quarterly. This has helped us ship much more robust releases and improved our reliability. Here are some highlights:

# Zonal Isolation

Cadence clusters were already regionally isolated before this change. However, in the cloud, inter-zone communications matter: they are more expensive and their latencies are higher. Zones can individually have problems without impacting other zones. In a regional architecture, a single-zone problem might impact every request; with zonal isolation, traffic from a zone with issues can easily be failed over to other zones, eliminating its impact on the whole cluster. Therefore, we implemented zonal isolation, keeping domain traffic inside a single zone, to help improve efficiency and reliability.

# Narrowing Blast Radius

When there are issues in a Cadence cluster, the cause is often a single misbehaving workflow. When this happens, the whole domain or even the cluster could suffer until the specific workflow is addressed. With this change, we are able to contain the issue to the offending workflow alone, without impacting others. This is the narrowest blast radius possible.

# Async APIs

At Uber, there are many batch work streams that run a high number of workflows (thousands to millions) at the same time, creating bottlenecks in Cadence clusters and causing noisy-neighbor issues. This is because the StartWorkflow and SignalWorkflow APIs are synchronous, meaning that when Cadence acks, the user's requests have been successfully saved in their workflow history.

Even after successful initiations, users would then need to deal with high concurrency. This often means constant worker cache thrashing, followed by history rebuilds on every update, increasing workflow execution complexity from O(n) to O(n^2). Alternatively, they would need to scale their service hosts out and back down in a very short amount of time to avoid this.

When we took a step back and analyzed such scenarios, we realized that users simply wanted to “complete N workflows (jobs) in K time”. The guarantees around starts and signals were not really important for their use cases. Therefore, we implemented async versions of our sync APIs, with which we can control the consumption rate, guaranteeing the fastest execution with no disruption to the cluster.

Later this year, we plan to expand this feature to cron workflows and timers as well.

# Pinot as Visibility Store

Apache Pinot (opens new window) is becoming popular due to its cost efficient nature. Several teams reported significant savings by changing their observability storage to Pinot. Cadence now has a Pinot plugin for its visibility store. We are still rolling out this change. Latencies and cost savings will be shared later.

# Code Coverage

We have received many requests from our community to actively contribute to our codebase, especially after our V1 release. While we have already been collaborating with some companies, this is a challenge with individuals who are just learning about Cadence. One of the main reasons was avoiding bugs that could be introduced.

While Cadence has many integration tests, its unit test coverage was lower than desired. With better unit test coverage we can catch changes that break previous logic and prevent them from getting into the main branch. Our team covered an additional 50K+ lines across various Cadence repos. We hope to bring our code coverage to 85%+ by the end of the year so we can welcome such inquiries a lot more easily.

# Replayer Improvements

This is still an ongoing project. As mentioned in our V1 release, we are revisiting some core parts of Cadence where less-than-ideal architectural decisions were made in the past. The replayer/shadower is one such part. We have been working on improving its precision, eliminating false negatives and positives.

# Global Rate Limiters

Cadence rate limiters are equally distributed across zones and hosts. However, when a user's traffic is skewed, rate limits can kick in even though the user has remaining capacity. To avoid this, we built global rate limiters. This will make rate limits much more predictable and capacity management a lot easier.

# Regular Failover Drills

Cadence has been performing monthly regional and zonal failover drills to ensure its failover operations are working properly in case we need it. We are failing over hundreds of domains at the same time to validate the scale of this operation, capacity elasticity and correctness of workflows.

# Cadence Web v4

We are migrating Cadence web from Vue.js to React.js to use a more modern infrastructure and to have better feature velocity. We are about 70% complete with this migration and hope to release the new version of it soon.

# Code Review Time Non-determinism Checks

(This is an internal-only feature that we hope to release soon.) Cadence non-determinism errors and versioning have been common pain points for our customers. Tools are available, but they require ongoing effort to validate. We have built a tool that generates a shadower test with a single-line command (a one-time operation) and continuously validates any code change against production data.

This feature reduced the detect-and-fix time from days or weeks to minutes. Just by launching this feature for the domains with the most non-determinism errors, the number of related incidents dropped by 40%. We have already blocked 500+ diffs that could have impacted production negatively. This boosted our users’ confidence in using Cadence.

# Domain Reports

(This is an internal-only feature that we hope to release soon) We are able to detect potential issues (bugs, antipatterns, inefficiencies, failures) with domains upon manual investigation. We have automated this process and now generate reports for each domain. This information can be accessed historically (to see the progression over time) and on-demand (to see the current state). This has already driven domain reliability and efficiency improvements.

This feature and the one above are at MVP level; we plan to generalize, expand and release them for open source soon. In the V1 release, we mentioned that we would build certain features internally first, to keep enough velocity, see where they are going, and make breaking changes until they mature.

# Client Based Migrations

With 30 clusters and ~1000 domains in production, migrating a domain from one cluster to another has become a somewhat frequent operation for Cadence. While this is mostly automated, we would like to fully automate it to the point where it is a single-click or single-command operation. Client-based migrations (as opposed to server-based ones) give us the flexibility to run migrations between many environments at the same time. Each migration happens in isolation without impacting any other domain or the cluster.

This is an ongoing project; the remaining parts are migrating long-running workflows faster and making technology-to-technology migrations seamless, even if the source technology is not Cadence in the first place. Many users have migrated to Cadence from Cadence-like or different technologies, so we hope to remove that recurring overhead for them.

# Roadmap (Next Year)

Our priorities for next year look similar with reliability, efficiency, and new features as our focus. We have seen significant improvements especially in our users’ reliability and efficiency on top of the improvements in our servers. This both reduces operational load on our users and makes Cadence one step closer to being a standard way to build services. Here is a short list of what's coming over the next 12 months:

# Database efficiency

We are increasing our investment in improving Cadence’s database usage. Even though Cadence’s cost looks a lot better compared to the same family of technologies, it can still be significantly improved by eliminating certain bottlenecks coming from its original design.

# Helm Charts

We are grateful to the Cadence community for introducing and maintaining our Helm charts for operating Cadence clusters. We are taking its ownership so it can be officially released and tested. We expect to release this in 2024.

# Dashboard Templates

During our tech talks, demos and user talks, we have received inquiries about which metrics to care about. We plan to release templates for our dashboards so our community can look at a similar picture.

# Client V2 Modernization

As we announced last year, we plan to make breaking changes to significantly improve our interfaces, and we are now working on modernizing our client interface.

# Higher Parallelization and Prioritization in Task Processing

In an effort to have better domain prioritization in multitenant Cadence clusters, we are improving our task processing with higher parallelization and better prioritization. This is a much better model than simply giving domains fixed limits. We expect to provide more resources to high-priority domains during their peak hours while allowing low-priority domains to consume much more than allocated during quiet times.

# Timer and Cron Burst Handling

After addressing start and signal burst scenarios, we are continuing with bursty timers and cron jobs. Many users set their schedules and timers for the same second, with the intention of finishing N jobs within a certain amount of time. The current scheduling design isn’t friendly to such intents, and high loads can cause temporary starvation in the cluster. By introducing better batch-scheduling support, clusters can continue with no disruption while timers are processed in the most efficient way.

# High zonal skew handling

For users operating in their own cloud with multiple independent zones in every region, zonal skew can be a problem and can create unnecessary bottlenecks when the Zonal Isolation feature is enabled. We are working on addressing such issues by improving task matching across zones when skew is detected.

# Tasklist Improvements

When a user scenario grows, there are many knobs that need to be adjusted manually. We would like to automatically partition task lists and smartly forward tasks, improving tasklist efficiency significantly and avoiding backlogs, timeouts and hot shards.

# Shard Movement/Assignment Improvements

Cadence shard movements are based on consistent hashing, and this can be a limiting factor for many reasons. Certain hosts can get unlucky by ending up with many shards, or with heavy shards. During deployments we might observe many more shard movements than desired, which reduces availability. With improved shard movement and assignment we can have a more homogeneous load among hosts while keeping shard movements during deployments to a minimum, with much better availability.

# Worker Heartbeats

Today, there is no worker liveness tracking in Cadence. Instead, task or activity heartbeat timeouts are used to reassign tasks to different workers. For latency-sensitive users this can be a big disruption; for long activities without heartbeats, it can cause big delays. This feature eliminates the dependence on manual timeout or heartbeat configs for reassigning tasks by tracking whether workers are still healthy. It will also enable many other new efficiency and reliability features we would like to get to in the future.

# Domain and Workflow Diagnostics

Probably the two most common user questions are “What’s wrong with my domain?” and “What’s wrong with my workflow?”. Today, diagnosing what happened and what could be wrong isn’t easy apart from some basic cases. We are working on tools that run diagnostics on workflows and domains to point out things that might be wrong, with public runbook links attached. This feature will not only help diagnose what is wrong with our workflows and domains but will also help fix them.

# Self Serve Operations

Certain Cadence operations are performed through admin CLI commands. However, users should be able to perform these via the Cadence UI. Admins shouldn’t need to be involved in every step, and the checks they perform should be automatable. That is what this initiative is about - including domain registration, auth/authz onboarding and adding new search attributes, but not limited to these operations.

# Cost Estimation

One big question we receive when users onboard to Cadence is “How much will this cost me?”. This is not an easy question to answer, since data and traffic loads can be quite different. We plan to automate this process to help users understand how many resources they will need. Especially in multi-tenant clusters, this will help users understand how much room they still have in their cluster and how much a new scenario will consume.

# Domain Reports (continue)

We plan to release this internal feature to open source as soon as possible. On top of presenting this data on built-in Cadence surfaces (web, CLI, etc.), we will create APIs to make it integrable with deployment systems, user service UIs, periodic reports and any other service that would like to consume it.

# Non-determinism Detection Improvements (continue)

We saw great reliability improvements and a reduction in incidents with this feature on the user side last year. We continue to invest in it and will make it available in open source as soon as possible.

# Domain Migrations (continue)

In the next year, we plan to finish our seamless client-based migration so we can safely migrate domains from one cluster to another, from one technology (even if it’s not Cadence) to another, and from one cloud solution to another. There are only a few features left to achieve this.

# Community

Do you want to hear more about Cadence? Do you need help with your set-up or usage? Are you evaluating your options? Do you want to contribute? Feel free to join our community and reach out to us.

Slack: https://uber-cadence.slack.com/ (opens new window)

Github: https://github.com/uber/cadence (opens new window)

Since last year, various companies have contacted us about taking on bigger projects in Cadence. As we have been investing in code coverage and refactoring Cadence toward a cleaner codebase, this will be a lot easier now. Let us know if you have project ideas to contribute, or if you’d like to pick something we have already planned.

Our monthly community meetings are still ongoing, too. That is the best place to be heard and to be involved in our decision-making process. Let us know and we can send you an invite. We are also working on a broader governance model to open this project up to more people. Stay tuned for updates on this topic!


- + diff --git a/blog/2024/09/05/workflow-specific-rate-limits/index.html b/blog/2024/09/05/workflow-specific-rate-limits/index.html index 5b8b57dcf..be4911c2d 100644 --- a/blog/2024/09/05/workflow-specific-rate-limits/index.html +++ b/blog/2024/09/05/workflow-specific-rate-limits/index.html @@ -5,6 +5,8 @@ Minimizing blast radius in Cadence: Introducing Workflow ID-based Rate Limits + + - + @@ -182,6 +184,6 @@ "logging-call-at":"cache.go:175" }

# Conclusion

Implementing these rate limits greatly improves the reliability of a Cadence cluster, as users can no longer send too many requests to a single shard. This fine-grained control helps maintain optimal performance and enhances our ability to forecast and mitigate potential issues before they impact the service.

Workflow ID-based rate limits are a significant step forward in our ongoing effort to provide a robust and efficient workflow management service. By preventing hot shards and ensuring equitable resource distribution, we can offer more reliable performance, even under peak loads. We encourage all Cadence users to familiarize themselves with these new limits and adjust their workflow configurations to achieve optimal results.


- + diff --git a/blog/index.html b/blog/index.html index fca311a57..c680001cc 100644 --- a/blog/index.html +++ b/blog/index.html @@ -5,11 +5,13 @@ Post + + - + @@ -217,6 +219,6 @@

- + diff --git a/blog/page/2/index.html b/blog/page/2/index.html index 4354449bc..75f3e2e0a 100644 --- a/blog/page/2/index.html +++ b/blog/page/2/index.html @@ -5,11 +5,13 @@ Page 2 | Post + + - + @@ -211,6 +213,6 @@

- + diff --git a/blog/page/3/index.html b/blog/page/3/index.html index 9c85f5d1f..059a553a4 100644 --- a/blog/page/3/index.html +++ b/blog/page/3/index.html @@ -5,11 +5,13 @@ Page 3 | Post + + - + @@ -199,6 +201,6 @@

- + diff --git a/blog/page/4/index.html b/blog/page/4/index.html index d71ca2dca..f8aff3074 100644 --- a/blog/page/4/index.html +++ b/blog/page/4/index.html @@ -5,11 +5,13 @@ Page 4 | Post + + - + @@ -217,6 +219,6 @@

- + diff --git a/blog/page/5/index.html b/blog/page/5/index.html index eeb280431..2c6d20c32 100644 --- a/blog/page/5/index.html +++ b/blog/page/5/index.html @@ -5,11 +5,13 @@ Page 5 | Post + + - + @@ -217,6 +219,6 @@

- + diff --git a/blog/page/6/index.html b/blog/page/6/index.html index b4e0d82e4..575d2d8ed 100644 --- a/blog/page/6/index.html +++ b/blog/page/6/index.html @@ -5,11 +5,13 @@ Page 6 | Post + + - + @@ -207,6 +209,6 @@

- + diff --git a/blog/page/7/index.html b/blog/page/7/index.html index 6e9173a89..f8c81fc32 100644 --- a/blog/page/7/index.html +++ b/blog/page/7/index.html @@ -5,11 +5,13 @@ Page 7 | Post + + - + @@ -144,6 +146,6 @@

- + diff --git a/docs/about/index.html b/docs/about/index.html index 6be76dfd6..d73b68110 100644 --- a/docs/about/index.html +++ b/docs/about/index.html @@ -6,8 +6,10 @@ Contact us | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/about/license/index.html b/docs/about/license/index.html index a42705dfd..e2aa5ddc8 100644 --- a/docs/about/license/index.html +++ b/docs/about/license/index.html @@ -6,8 +6,10 @@ MIT License | Cadence + + - + @@ -148,6 +150,6 @@

- + diff --git a/docs/cli/index.html b/docs/cli/index.html index b4a321cc6..acd39be5e 100644 --- a/docs/cli/index.html +++ b/docs/cli/index.html @@ -6,8 +6,10 @@ Introduction | Cadence + + - + @@ -300,6 +302,6 @@ →

- + diff --git a/docs/concepts/activities/index.html b/docs/concepts/activities/index.html index c40af49a4..8cf6a1ef9 100644 --- a/docs/concepts/activities/index.html +++ b/docs/concepts/activities/index.html @@ -6,8 +6,10 @@ Activities | Cadence + + - + @@ -138,6 +140,6 @@ →

- + diff --git a/docs/concepts/archival/index.html b/docs/concepts/archival/index.html index 2e02dfc52..891632854 100644 --- a/docs/concepts/archival/index.html +++ b/docs/concepts/archival/index.html @@ -6,8 +6,10 @@ Archival | Cadence + + - + @@ -161,6 +163,6 @@ →

- + diff --git a/docs/concepts/cross-dc-replication/index.html b/docs/concepts/cross-dc-replication/index.html index 662ad37bf..023f8dac4 100644 --- a/docs/concepts/cross-dc-replication/index.html +++ b/docs/concepts/cross-dc-replication/index.html @@ -6,8 +6,10 @@ Cross DC replication | Cadence + + - + @@ -218,6 +220,6 @@ →

- + diff --git a/docs/concepts/events/index.html b/docs/concepts/events/index.html index 92b333c07..ea2e2e403 100644 --- a/docs/concepts/events/index.html +++ b/docs/concepts/events/index.html @@ -6,8 +6,10 @@ Event handling | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/concepts/http-api/index.html b/docs/concepts/http-api/index.html index f75338ccd..914292d9f 100644 --- a/docs/concepts/http-api/index.html +++ b/docs/concepts/http-api/index.html @@ -6,8 +6,10 @@ HTTP API | Cadence + + - + @@ -1261,6 +1263,6 @@ →

- + diff --git a/docs/concepts/index.html b/docs/concepts/index.html index 83ff5edb0..5ad88a3f4 100644 --- a/docs/concepts/index.html +++ b/docs/concepts/index.html @@ -6,8 +6,10 @@ Introduction | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/concepts/queries/index.html b/docs/concepts/queries/index.html index 9761c3fef..c579a45a7 100644 --- a/docs/concepts/queries/index.html +++ b/docs/concepts/queries/index.html @@ -6,8 +6,10 @@ Synchronous query | Cadence + + - + @@ -138,6 +140,6 @@ →

- + diff --git a/docs/concepts/search-workflows/index.html b/docs/concepts/search-workflows/index.html index 0cc664d77..1e01fdc35 100644 --- a/docs/concepts/search-workflows/index.html +++ b/docs/concepts/search-workflows/index.html @@ -6,8 +6,10 @@ Search workflows(Advanced visibility) | Cadence + + - + @@ -217,6 +219,6 @@ →

- + diff --git a/docs/concepts/task-lists/index.html b/docs/concepts/task-lists/index.html index 1cbb4d4ad..45e23a22d 100644 --- a/docs/concepts/task-lists/index.html +++ b/docs/concepts/task-lists/index.html @@ -6,8 +6,10 @@ Task lists | Cadence + + - + @@ -145,6 +147,6 @@ →

- + diff --git a/docs/concepts/topology/index.html b/docs/concepts/topology/index.html index 11c464d7b..1f3ead212 100644 --- a/docs/concepts/topology/index.html +++ b/docs/concepts/topology/index.html @@ -6,8 +6,10 @@ Deployment topology | Cadence + + - + @@ -137,6 +139,6 @@ →

- + diff --git a/docs/concepts/workflows/index.html b/docs/concepts/workflows/index.html index 9129cfc31..d3ff472f1 100644 --- a/docs/concepts/workflows/index.html +++ b/docs/concepts/workflows/index.html @@ -6,8 +6,10 @@ Workflows | Cadence + + - + @@ -209,6 +211,6 @@ →

- + diff --git a/docs/get-started/golang-hello-world/index.html b/docs/get-started/golang-hello-world/index.html index 7820242c1..35deb84e2 100644 --- a/docs/get-started/golang-hello-world/index.html +++ b/docs/get-started/golang-hello-world/index.html @@ -6,8 +6,10 @@ Golang hello world | Cadence + + - + @@ -246,6 +248,6 @@ →

- + diff --git a/docs/get-started/index.html b/docs/get-started/index.html index 7ace4ea4e..e8f624097 100644 --- a/docs/get-started/index.html +++ b/docs/get-started/index.html @@ -6,8 +6,10 @@ Overview | Cadence + + - + @@ -148,6 +150,6 @@ →

- + diff --git a/docs/get-started/installation/index.html b/docs/get-started/installation/index.html index 8632eb71e..8713f9bfd 100644 --- a/docs/get-started/installation/index.html +++ b/docs/get-started/installation/index.html @@ -6,8 +6,10 @@ Server Installation | Cadence + + - + @@ -158,6 +160,6 @@ →

- + diff --git a/docs/get-started/java-hello-world/index.html b/docs/get-started/java-hello-world/index.html index 292a18e3b..12cef3bf2 100644 --- a/docs/get-started/java-hello-world/index.html +++ b/docs/get-started/java-hello-world/index.html @@ -6,8 +6,10 @@ Java hello world | Cadence + + - + @@ -243,6 +245,6 @@ →

- + diff --git a/docs/get-started/video-tutorials/index.html b/docs/get-started/video-tutorials/index.html index c097d36c0..b793f2437 100644 --- a/docs/get-started/video-tutorials/index.html +++ b/docs/get-started/video-tutorials/index.html @@ -6,8 +6,10 @@ Video Tutorials | Cadence + + - + @@ -137,6 +139,6 @@ →

- + diff --git a/docs/go-client/activities/index.html b/docs/go-client/activities/index.html index cafdfbf9f..2c9807efe 100644 --- a/docs/go-client/activities/index.html +++ b/docs/go-client/activities/index.html @@ -6,8 +6,10 @@ Activity overview | Cadence + + - + @@ -200,6 +202,6 @@ →

- + diff --git a/docs/go-client/activity-async-completion/index.html b/docs/go-client/activity-async-completion/index.html index f2e3a15c0..9c3d0ce74 100644 --- a/docs/go-client/activity-async-completion/index.html +++ b/docs/go-client/activity-async-completion/index.html @@ -6,8 +6,10 @@ Async activity completion | Cadence + + - + @@ -157,6 +159,6 @@ →

- + diff --git a/docs/go-client/child-workflows/index.html b/docs/go-client/child-workflows/index.html index 160efbf3c..d3c54688c 100644 --- a/docs/go-client/child-workflows/index.html +++ b/docs/go-client/child-workflows/index.html @@ -6,8 +6,10 @@ Child workflows | Cadence + + - + @@ -168,6 +170,6 @@ →

- + diff --git a/docs/go-client/continue-as-new/index.html b/docs/go-client/continue-as-new/index.html index 9b6f28589..0988106b0 100644 --- a/docs/go-client/continue-as-new/index.html +++ b/docs/go-client/continue-as-new/index.html @@ -6,8 +6,10 @@ Continue as new | Cadence + + - + @@ -147,6 +149,6 @@ →

- + diff --git a/docs/go-client/create-workflows/index.html b/docs/go-client/create-workflows/index.html index cd408cf83..c1efa1d54 100644 --- a/docs/go-client/create-workflows/index.html +++ b/docs/go-client/create-workflows/index.html @@ -6,8 +6,10 @@ Creating workflows | Cadence + + - + @@ -205,6 +207,6 @@ →

- + diff --git a/docs/go-client/distributed-cron/index.html b/docs/go-client/distributed-cron/index.html index c6cfc34a2..130e65f78 100644 --- a/docs/go-client/distributed-cron/index.html +++ b/docs/go-client/distributed-cron/index.html @@ -6,8 +6,10 @@ Distributed CRON | Cadence + + - + @@ -187,6 +189,6 @@ →

- + diff --git a/docs/go-client/error-handling/index.html b/docs/go-client/error-handling/index.html index 74c1c3abe..8f653cdee 100644 --- a/docs/go-client/error-handling/index.html +++ b/docs/go-client/error-handling/index.html @@ -6,8 +6,10 @@ Error handling | Cadence + + - + @@ -181,6 +183,6 @@ →

- + diff --git a/docs/go-client/execute-activity/index.html b/docs/go-client/execute-activity/index.html index 174b7a84a..e78f344e1 100644 --- a/docs/go-client/execute-activity/index.html +++ b/docs/go-client/execute-activity/index.html @@ -6,8 +6,10 @@ Executing activities | Cadence + + - + @@ -185,6 +187,6 @@ →

- + diff --git a/docs/go-client/index.html b/docs/go-client/index.html index 3308df353..aa4f3777e 100644 --- a/docs/go-client/index.html +++ b/docs/go-client/index.html @@ -6,8 +6,10 @@ Introduction | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/go-client/queries/index.html b/docs/go-client/queries/index.html index 58f99ed57..a08edf9ae 100644 --- a/docs/go-client/queries/index.html +++ b/docs/go-client/queries/index.html @@ -6,8 +6,10 @@ Queries | Cadence + + - + @@ -181,6 +183,6 @@ →

- + diff --git a/docs/go-client/retries/index.html b/docs/go-client/retries/index.html index 56fa69f50..6b0305541 100644 --- a/docs/go-client/retries/index.html +++ b/docs/go-client/retries/index.html @@ -6,8 +6,10 @@ Activity and workflow retries | Cadence + + - + @@ -209,6 +211,6 @@ →

- + diff --git a/docs/go-client/sessions/index.html b/docs/go-client/sessions/index.html index a77782952..f3be1a6e7 100644 --- a/docs/go-client/sessions/index.html +++ b/docs/go-client/sessions/index.html @@ -6,8 +6,10 @@ Sessions | Cadence + + - + @@ -185,6 +187,6 @@ →

- + diff --git a/docs/go-client/side-effect/index.html b/docs/go-client/side-effect/index.html index 341f0b26e..e92f3f1fc 100644 --- a/docs/go-client/side-effect/index.html +++ b/docs/go-client/side-effect/index.html @@ -6,8 +6,10 @@ Side effect | Cadence + + - + @@ -154,6 +156,6 @@ →

- + diff --git a/docs/go-client/signals/index.html b/docs/go-client/signals/index.html index 8bb538fee..913ec8d52 100644 --- a/docs/go-client/signals/index.html +++ b/docs/go-client/signals/index.html @@ -6,8 +6,10 @@ Signals | Cadence + + - + @@ -162,6 +164,6 @@ →

- + diff --git a/docs/go-client/start-workflows/index.html b/docs/go-client/start-workflows/index.html index 81b2bd4d1..c803db872 100644 --- a/docs/go-client/start-workflows/index.html +++ b/docs/go-client/start-workflows/index.html @@ -6,8 +6,10 @@ Starting workflows | Cadence + + - + @@ -253,6 +255,6 @@ →

- + diff --git a/docs/go-client/tracing/index.html b/docs/go-client/tracing/index.html index db92f60a7..ad0acea71 100644 --- a/docs/go-client/tracing/index.html +++ b/docs/go-client/tracing/index.html @@ -6,8 +6,10 @@ Tracing and context propagation | Cadence + + - + @@ -174,6 +176,6 @@ →

- + diff --git a/docs/go-client/workers/index.html b/docs/go-client/workers/index.html index 9b6cb1a92..70be2691d 100644 --- a/docs/go-client/workers/index.html +++ b/docs/go-client/workers/index.html @@ -6,8 +6,10 @@ Worker service | Cadence + + - + @@ -294,6 +296,6 @@ →

- + diff --git a/docs/go-client/workflow-non-deterministic-errors/index.html b/docs/go-client/workflow-non-deterministic-errors/index.html index c9372af04..44655d63c 100644 --- a/docs/go-client/workflow-non-deterministic-errors/index.html +++ b/docs/go-client/workflow-non-deterministic-errors/index.html @@ -6,8 +6,10 @@ Workflow Non-deterministic errors | Cadence + + - + @@ -166,6 +168,6 @@

After restarting the worker, replay will run into this error: the history records that the workflow scheduled ActivityB with input a, but during replay it schedules ActivityC instead.
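
For illustration, here is a minimal sketch of this kind of change (SampleWorkflow, ActivityB, ActivityC, and the input are hypothetical names standing in for the activities above, not code from the Cadence repository):

```go
package sample

import (
	"context"
	"time"

	"go.uber.org/cadence/workflow"
)

// Hypothetical activities standing in for ActivityB and ActivityC above.
func ActivityB(ctx context.Context, input string) error { return nil }
func ActivityC(ctx context.Context, input string) error { return nil }

// Original version: the recorded history contains a decision to
// schedule ActivityB with input a.
func SampleWorkflow(ctx workflow.Context, a string) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)
	return workflow.ExecuteActivity(ctx, ActivityB, a).Get(ctx, nil)
}

// Changed version deployed while executions are in flight: replay now
// produces a decision to schedule ActivityC where the history recorded
// ActivityB, so the worker reports a non-deterministic error.
func SampleWorkflowChanged(ctx workflow.Context, a string) error {
	ao := workflow.ActivityOptions{
		ScheduleToStartTimeout: time.Minute,
		StartToCloseTimeout:    time.Minute,
	}
	ctx = workflow.WithActivityOptions(ctx, ao)
	return workflow.ExecuteActivity(ctx, ActivityC, a).Get(ctx, nil)
}
```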

# 4. Decision state machine panic

fmt.Sprintf("unknown decision %v, possible causes are nondeterministic workflow definition code"+" or incompatible change in the workflow definition", id)
 

For source code click here (opens new window)

This usually means the workflow history is corrupted due to some bug. For example, the same activity can be scheduled multiple times and is differentiated by its ActivityID, so ActivityIDs for different activities must be unique within a workflow history. If an ActivityID collision occurs, replay will run into this error.

# Common Q&A

# I want to change my workflow implementation. What code changes may produce non-deterministic errors?

As discussed in the previous sections, changes that alter the decisions your workflow produces will probably lead to non-deterministic errors. Some common examples, grouped by the four error types above (see the sketch after this list):

  1. Changing the order in which Cadence-defined operations are executed, such as activities, timers, child workflows, signals, and cancellation requests.
  2. Changing the duration of a timer.
  3. Using Go's built-in goroutines instead of workflow.Go.
  4. Using Go's built-in channels instead of workflow.Channel.
  5. Using Go's built-in time.Sleep instead of workflow.Sleep.
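
As a rough illustration of items 3 and 5 in the list above, this sketch contrasts Go's native primitives with their deterministic workflow counterparts (the function names are hypothetical):

```go
package sample

import (
	"time"

	"go.uber.org/cadence/workflow"
)

// Non-deterministic: native goroutines and timers run outside the
// workflow dispatcher, so replay cannot track or reproduce them.
func NonDeterministicWorkflow(ctx workflow.Context) error {
	go func() {
		// background work invisible to the replayer
	}()
	time.Sleep(time.Minute) // wall-clock sleep, not recorded as a timer event
	return nil
}

// Deterministic equivalents: workflow.Go and workflow.Sleep are routed
// through the dispatcher and recorded in history, so replay matches.
func DeterministicWorkflow(ctx workflow.Context) error {
	workflow.Go(ctx, func(ctx workflow.Context) {
		// background work tracked by the workflow dispatcher
	})
	return workflow.Sleep(ctx, time.Minute)
}
```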

# What are some changes that will NOT trigger non-deterministic errors?

Code changes that are free of non-deterministic errors normally do not affect the decisions a workflow produces.

  1. Activity input and output changes do not directly cause non-deterministic errors, because their contents are not compared during replay. However, such changes may produce serialization errors depending on your data converter implementation (changing argument types or the number of arguments is particularly prone to problems, so we recommend always using a single struct; see the sketch after this list). Cadence uses json.Marshal and json.Unmarshal (with Decoder.UseNumber()) by default.
  2. Code changes that do not modify history events, such as logging or metrics implementations, are safe to check in.
  3. Changes to retry policies, as these are not compared during replay. Adding or removing retry policies is also safe. Note, however, that changes only take effect for new calls, not for calls that have already been scheduled.
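
As a sketch of the single-struct convention recommended in item 1 (the type and field names are illustrative, not from the Cadence codebase): with the default JSON data converter, fields missing from an old payload decode to their zero values, so new fields can be added without changing the activity signature:

```go
package sample

import "context"

// Hypothetical activity input: all arguments live in one struct.
type ProcessOrderInput struct {
	OrderID string
	Amount  int
	// Currency string // added later; old payloads decode it as ""
}

func ProcessOrderActivity(ctx context.Context, in ProcessOrderInput) (string, error) {
	// business logic here
	return in.OrderID, nil
}
```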

# I want to check whether my code change will produce non-deterministic errors. How can I debug?

Cadence provides a replayer test that functions as a unit test on your local machine, replaying a recorded workflow history against your changed code. If you introduce a non-deterministic change and the history exercises it, the test should fail. Check out this page for more details.
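
A minimal sketch of such a test, assuming a history exported to history.json (for example via the CLI) and a workflow function named SampleWorkflow (both names are placeholders for your own):

```go
package sample

import (
	"testing"

	"go.uber.org/cadence/worker"
	"go.uber.org/zap"
)

func TestReplayWorkflowHistory(t *testing.T) {
	logger, err := zap.NewDevelopment()
	if err != nil {
		t.Fatal(err)
	}

	replayer := worker.NewWorkflowReplayer()
	replayer.RegisterWorkflow(SampleWorkflow)

	// Replay the recorded history against the current workflow code;
	// a non-deterministic change surfaces here as a test failure.
	if err := replayer.ReplayWorkflowHistoryFromJSONFile(logger, "history.json"); err != nil {
		t.Fatal(err)
	}
}
```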

- + diff --git a/docs/go-client/workflow-replay-shadowing/index.html b/docs/go-client/workflow-replay-shadowing/index.html index 7588e42b0..53024bd54 100644 --- a/docs/go-client/workflow-replay-shadowing/index.html +++ b/docs/go-client/workflow-replay-shadowing/index.html @@ -6,8 +6,10 @@ Workflow Replay and Shadowing | Cadence + + - + @@ -186,6 +188,6 @@ →

- + diff --git a/docs/go-client/workflow-testing/index.html b/docs/go-client/workflow-testing/index.html index e83ef8662..9421e167d 100644 --- a/docs/go-client/workflow-testing/index.html +++ b/docs/go-client/workflow-testing/index.html @@ -6,8 +6,10 @@ Testing | Cadence + + - + @@ -260,6 +262,6 @@ →

- + diff --git a/docs/go-client/workflow-versioning/index.html b/docs/go-client/workflow-versioning/index.html index b9b89548a..6b3513366 100644 --- a/docs/go-client/workflow-versioning/index.html +++ b/docs/go-client/workflow-versioning/index.html @@ -6,8 +6,10 @@ Versioning | Cadence + + - + @@ -226,6 +228,6 @@ →

- + diff --git a/docs/java-client/activity-interface/index.html b/docs/java-client/activity-interface/index.html index cf989549e..e9befb72c 100644 --- a/docs/java-client/activity-interface/index.html +++ b/docs/java-client/activity-interface/index.html @@ -6,8 +6,10 @@ Activity interface | Cadence + + - + @@ -149,6 +151,6 @@ →

- + diff --git a/docs/java-client/child-workflows/index.html b/docs/java-client/child-workflows/index.html index 713e15f97..ea5d3b185 100644 --- a/docs/java-client/child-workflows/index.html +++ b/docs/java-client/child-workflows/index.html @@ -6,8 +6,10 @@ Child workflows | Cadence + + - + @@ -199,6 +201,6 @@ →

- + diff --git a/docs/java-client/client-overview/index.html b/docs/java-client/client-overview/index.html index f9db2863b..b8e9b502c 100644 --- a/docs/java-client/client-overview/index.html +++ b/docs/java-client/client-overview/index.html @@ -6,8 +6,10 @@ Client SDK Overview | Cadence + + - + @@ -141,6 +143,6 @@ →

- + diff --git a/docs/java-client/continue-as-new/index.html b/docs/java-client/continue-as-new/index.html index 8b60a6c40..0d7a6abf9 100644 --- a/docs/java-client/continue-as-new/index.html +++ b/docs/java-client/continue-as-new/index.html @@ -6,8 +6,10 @@ Continue As New | Cadence + + - + @@ -148,6 +150,6 @@ →

- + diff --git a/docs/java-client/distributed-cron/index.html b/docs/java-client/distributed-cron/index.html index 73166f6af..1dc31aae5 100644 --- a/docs/java-client/distributed-cron/index.html +++ b/docs/java-client/distributed-cron/index.html @@ -6,8 +6,10 @@ Distributed CRON | Cadence + + - + @@ -179,6 +181,6 @@ →

- + diff --git a/docs/java-client/exception-handling/index.html b/docs/java-client/exception-handling/index.html index 00f46d913..b1e3ee3e1 100644 --- a/docs/java-client/exception-handling/index.html +++ b/docs/java-client/exception-handling/index.html @@ -6,8 +6,10 @@ Exception Handling | Cadence + + - + @@ -281,6 +283,6 @@ →

- + diff --git a/docs/java-client/implementing-activities/index.html b/docs/java-client/implementing-activities/index.html index d33e40b5b..fdc3b5d6c 100644 --- a/docs/java-client/implementing-activities/index.html +++ b/docs/java-client/implementing-activities/index.html @@ -6,8 +6,10 @@ Implementing activities | Cadence + + - + @@ -217,6 +219,6 @@ →

- + diff --git a/docs/java-client/implementing-workflows/index.html b/docs/java-client/implementing-workflows/index.html index 2f5a312bd..d982701f5 100644 --- a/docs/java-client/implementing-workflows/index.html +++ b/docs/java-client/implementing-workflows/index.html @@ -6,8 +6,10 @@ Implementing workflows | Cadence + + - + @@ -247,6 +249,6 @@ →

- + diff --git a/docs/java-client/index.html b/docs/java-client/index.html index dfa06b440..e4551eeb1 100644 --- a/docs/java-client/index.html +++ b/docs/java-client/index.html @@ -6,8 +6,10 @@ Introduction | Cadence + + - + @@ -147,6 +149,6 @@ →

- + diff --git a/docs/java-client/queries/index.html b/docs/java-client/queries/index.html index 54915f9a7..b464b273f 100644 --- a/docs/java-client/queries/index.html +++ b/docs/java-client/queries/index.html @@ -6,8 +6,10 @@ Queries | Cadence + + - + @@ -195,6 +197,6 @@ →

- + diff --git a/docs/java-client/retries/index.html b/docs/java-client/retries/index.html index b2859585a..93974402b 100644 --- a/docs/java-client/retries/index.html +++ b/docs/java-client/retries/index.html @@ -6,8 +6,10 @@ Retries | Cadence + + - + @@ -152,6 +154,6 @@ →

- + diff --git a/docs/java-client/side-effect/index.html b/docs/java-client/side-effect/index.html index 90f883673..f6c96f88f 100644 --- a/docs/java-client/side-effect/index.html +++ b/docs/java-client/side-effect/index.html @@ -6,8 +6,10 @@ Side Effect | Cadence + + - + @@ -172,6 +174,6 @@ →

- + diff --git a/docs/java-client/signals/index.html b/docs/java-client/signals/index.html index 2c96c9534..57e4197cf 100644 --- a/docs/java-client/signals/index.html +++ b/docs/java-client/signals/index.html @@ -6,8 +6,10 @@ Signals | Cadence + + - + @@ -215,6 +217,6 @@ →

- + diff --git a/docs/java-client/starting-workflow-executions/index.html b/docs/java-client/starting-workflow-executions/index.html index 163c5f0d2..094afd720 100644 --- a/docs/java-client/starting-workflow-executions/index.html +++ b/docs/java-client/starting-workflow-executions/index.html @@ -6,8 +6,10 @@ Starting workflows | Cadence + + - + @@ -188,6 +190,6 @@ →

- + diff --git a/docs/java-client/testing/index.html b/docs/java-client/testing/index.html index ad4d70ed4..a71a95fd4 100644 --- a/docs/java-client/testing/index.html +++ b/docs/java-client/testing/index.html @@ -6,8 +6,10 @@ Testing | Cadence + + - + @@ -198,6 +200,6 @@ →

- + diff --git a/docs/java-client/versioning/index.html b/docs/java-client/versioning/index.html index db4644f7f..396089e46 100644 --- a/docs/java-client/versioning/index.html +++ b/docs/java-client/versioning/index.html @@ -6,8 +6,10 @@ Versioning | Cadence + + - + @@ -205,6 +207,6 @@ →

- + diff --git a/docs/java-client/workers/index.html b/docs/java-client/workers/index.html index 817607f2f..bd016c3f1 100644 --- a/docs/java-client/workers/index.html +++ b/docs/java-client/workers/index.html @@ -6,8 +6,10 @@ Worker service | Cadence + + - + @@ -170,6 +172,6 @@ →

- + diff --git a/docs/java-client/workflow-interface/index.html b/docs/java-client/workflow-interface/index.html index d2d0cf3f1..17582f775 100644 --- a/docs/java-client/workflow-interface/index.html +++ b/docs/java-client/workflow-interface/index.html @@ -6,8 +6,10 @@ Workflow interface | Cadence + + - + @@ -150,6 +152,6 @@ →

- + diff --git a/docs/java-client/workflow-replay-shadowing/index.html b/docs/java-client/workflow-replay-shadowing/index.html index 0194edbff..e88e04549 100644 --- a/docs/java-client/workflow-replay-shadowing/index.html +++ b/docs/java-client/workflow-replay-shadowing/index.html @@ -6,8 +6,10 @@ Workflow Replay and Shadowing | Cadence + + - + @@ -183,6 +185,6 @@ →

- + diff --git a/docs/operation-guide/index.html b/docs/operation-guide/index.html index 24775167b..5b8208833 100644 --- a/docs/operation-guide/index.html +++ b/docs/operation-guide/index.html @@ -6,8 +6,10 @@ Overview | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/operation-guide/maintain/index.html b/docs/operation-guide/maintain/index.html index dc0e27de5..fe8b9da4a 100644 --- a/docs/operation-guide/maintain/index.html +++ b/docs/operation-guide/maintain/index.html @@ -6,8 +6,10 @@ Cluster Maintenance | Cadence + + - + @@ -151,6 +153,6 @@ →

- + diff --git a/docs/operation-guide/migration/index.html b/docs/operation-guide/migration/index.html index 0365a1571..ebf318f4d 100644 --- a/docs/operation-guide/migration/index.html +++ b/docs/operation-guide/migration/index.html @@ -6,8 +6,10 @@ Cluster Migration | Cadence + + - + @@ -191,6 +193,6 @@ →

- + diff --git a/docs/operation-guide/monitor/index.html b/docs/operation-guide/monitor/index.html index 8e200e6e9..3c2637510 100644 --- a/docs/operation-guide/monitor/index.html +++ b/docs/operation-guide/monitor/index.html @@ -6,8 +6,10 @@ Cluster Monitoring | Cadence + + - + @@ -476,6 +478,6 @@ →

- + diff --git a/docs/operation-guide/setup/index.html b/docs/operation-guide/setup/index.html index b10b073a4..c5403fdd6 100644 --- a/docs/operation-guide/setup/index.html +++ b/docs/operation-guide/setup/index.html @@ -6,8 +6,10 @@ Cluster Configuration | Cadence + + - + @@ -178,6 +180,6 @@ →

- + diff --git a/docs/operation-guide/troubleshooting/index.html b/docs/operation-guide/troubleshooting/index.html index 47754df55..e6f6e98c6 100644 --- a/docs/operation-guide/troubleshooting/index.html +++ b/docs/operation-guide/troubleshooting/index.html @@ -6,8 +6,10 @@ Cluster Troubleshooting | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/use-cases/batch-job/index.html b/docs/use-cases/batch-job/index.html index 655179d13..109bc1a85 100644 --- a/docs/use-cases/batch-job/index.html +++ b/docs/use-cases/batch-job/index.html @@ -6,8 +6,10 @@ Batch job | Cadence + + - + @@ -138,6 +140,6 @@ →

- + diff --git a/docs/use-cases/big-ml/index.html b/docs/use-cases/big-ml/index.html index b1c0420a2..1e6e2d996 100644 --- a/docs/use-cases/big-ml/index.html +++ b/docs/use-cases/big-ml/index.html @@ -6,8 +6,10 @@ Big data and ML | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/use-cases/deployment/index.html b/docs/use-cases/deployment/index.html index c35e50a45..c3392b980 100644 --- a/docs/use-cases/deployment/index.html +++ b/docs/use-cases/deployment/index.html @@ -6,8 +6,10 @@ Deployment | Cadence + + - + @@ -139,6 +141,6 @@ →

- + diff --git a/docs/use-cases/dsl/index.html b/docs/use-cases/dsl/index.html index 477fe2596..7f6bbe065 100644 --- a/docs/use-cases/dsl/index.html +++ b/docs/use-cases/dsl/index.html @@ -6,8 +6,10 @@ DSL workflows | Cadence + + - + @@ -140,6 +142,6 @@ →

- + diff --git a/docs/use-cases/event-driven/index.html b/docs/use-cases/event-driven/index.html index 5d24d3c60..9b945344e 100644 --- a/docs/use-cases/event-driven/index.html +++ b/docs/use-cases/event-driven/index.html @@ -6,8 +6,10 @@ Event driven application | Cadence + + - + @@ -140,6 +142,6 @@ →

- + diff --git a/docs/use-cases/index.html b/docs/use-cases/index.html index 469528f51..4a5baa918 100644 --- a/docs/use-cases/index.html +++ b/docs/use-cases/index.html @@ -6,8 +6,10 @@ Introduction | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/use-cases/interactive/index.html b/docs/use-cases/interactive/index.html index 88096757a..8a28f8da3 100644 --- a/docs/use-cases/interactive/index.html +++ b/docs/use-cases/interactive/index.html @@ -6,8 +6,10 @@ Interactive application | Cadence + + - + @@ -137,6 +139,6 @@ →

- + diff --git a/docs/use-cases/operational-management/index.html b/docs/use-cases/operational-management/index.html index badd4e065..ffac6a145 100644 --- a/docs/use-cases/operational-management/index.html +++ b/docs/use-cases/operational-management/index.html @@ -6,8 +6,10 @@ Operational management | Cadence + + - + @@ -137,6 +139,6 @@ →

- + diff --git a/docs/use-cases/orchestration/index.html b/docs/use-cases/orchestration/index.html index ddac2c653..b525ade21 100644 --- a/docs/use-cases/orchestration/index.html +++ b/docs/use-cases/orchestration/index.html @@ -6,8 +6,10 @@ Orchestration | Cadence + + - + @@ -140,6 +142,6 @@ →

- + diff --git a/docs/use-cases/partitioned-scan/index.html b/docs/use-cases/partitioned-scan/index.html index 18232c806..0c631b0b4 100644 --- a/docs/use-cases/partitioned-scan/index.html +++ b/docs/use-cases/partitioned-scan/index.html @@ -6,8 +6,10 @@ Storage scan | Cadence + + - + @@ -139,6 +141,6 @@ →

- + diff --git a/docs/use-cases/periodic-execution/index.html b/docs/use-cases/periodic-execution/index.html index f1264d0b0..af572c511 100644 --- a/docs/use-cases/periodic-execution/index.html +++ b/docs/use-cases/periodic-execution/index.html @@ -6,8 +6,10 @@ Periodic execution | Cadence + + - + @@ -138,6 +140,6 @@ →

- + diff --git a/docs/use-cases/polling/index.html b/docs/use-cases/polling/index.html index 1f4d0745a..3ed3343a0 100644 --- a/docs/use-cases/polling/index.html +++ b/docs/use-cases/polling/index.html @@ -6,8 +6,10 @@ Polling | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/use-cases/provisioning/index.html b/docs/use-cases/provisioning/index.html index 73acf5525..51fafbc99 100644 --- a/docs/use-cases/provisioning/index.html +++ b/docs/use-cases/provisioning/index.html @@ -6,8 +6,10 @@ Infrastructure provisioning | Cadence + + - + @@ -138,6 +140,6 @@ →

- + diff --git a/docs/workflow-troubleshooting/index.html b/docs/workflow-troubleshooting/index.html index 2eae07c94..a6d551fc5 100644 --- a/docs/workflow-troubleshooting/index.html +++ b/docs/workflow-troubleshooting/index.html @@ -6,8 +6,10 @@ Overview | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/docs/workflow-troubleshooting/timeouts/index.html b/docs/workflow-troubleshooting/timeouts/index.html index 5752923f9..32adda4e3 100644 --- a/docs/workflow-troubleshooting/timeouts/index.html +++ b/docs/workflow-troubleshooting/timeouts/index.html @@ -6,8 +6,10 @@ Timeouts | Cadence + + - + @@ -136,6 +138,6 @@ →

- + diff --git a/index.html b/index.html index bb67ed7e1..464b11392 100644 --- a/index.html +++ b/index.html @@ -6,8 +6,10 @@ Cadence + + - + @@ -134,6 +136,6 @@

Get Started →

Easy to use

Workflows provide primitives to allow application developers to express complex business logic as code.

The underlying platform abstracts scalability, reliability and availability concerns from individual developers/teams.

Fault tolerant

Cadence enables writing stateful applications without worrying about the complexity of handling process failures.

Cadence preserves complete multithreaded application state including thread stacks with local variables across hardware and software failures.

Scalable & Reliable

Cadence is designed to scale out horizontally to handle millions of concurrent workflows.

Cadence provides out-of-the-box asynchronous history event replication that can help you recover from zone failures.

- + diff --git a/rss.xml b/rss.xml index d4c0c7948..0961cee48 100644 --- a/rss.xml +++ b/rss.xml @@ -4,7 +4,7 @@ / - Thu, 05 Sep 2024 13:36:06 GMT + Mon, 16 Sep 2024 20:15:50 GMT http://blogs.law.harvard.edu/tech/rss https://github.com/webmasterish/vuepress-plugin-feed diff --git a/tag/index.html b/tag/index.html index 4e2ca94b7..0b5bf4f5f 100644 --- a/tag/index.html +++ b/tag/index.html @@ -5,11 +5,13 @@ Tag + + - + @@ -130,6 +132,6 @@ (opens new window)
- +