docs: Serverless Workers - Evaluate pages (1/4) #4417
lennessyy merged 3 commits into feat/serverless-worker-prerelease
Conversation
Add the Evaluate section for Serverless Workers documentation:

- Serverless Workers overview page
- Interactive demo page with ServerlessWorkerDemo component
- Sidebar entry under Features
- Redirect from old demo URL
- Change onBrokenLinks/onBrokenAnchors to 'warn' for incremental PRs

Part 1 of 4, splitting PR #4405 into smaller PRs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
> Serverless Workers use the same Temporal SDKs as traditional Workers.
> You register Workflows and Activities the same way.
> The difference is in the lifecycle: instead of a long-lived process that polls continuously, Temporal triggers the compute environment on demand, the Worker processes available Tasks, and then exits.
The workers, while alive, still poll the server.
> Traditional Workers require you to provision infrastructure, configure scaling policies, manage deployments, and monitor host-level health.
> Serverless Workers remove this burden.
> The compute provider handles provisioning, scaling, and infrastructure management.
We do not handle provisioning (yet).
> ### Reduce operational overhead
>
> Traditional Workers require you to provision infrastructure, configure scaling policies, manage deployments, and monitor host-level health.
> Serverless Workers remove this burden.
Customers still need to manage deployments. We simplify the deployment process (because of the simplicity of serverless providers), reduce the ongoing infrastructure-management burden, handle autoscaling, and scale to zero.
> Serverless Workers may not be ideal when:
>
> - **Workloads are long-running.** Serverless platforms enforce execution time limits (for example, AWS Lambda has a 15-minute maximum). Activities that run longer than the provider's timeout need a different hosting strategy.
If the Workflows are long-running, it's still a good fit. If you have an Activity that needs longer than 15 minutes and cannot be interrupted, Lambda is not a good fit. This limitation does not apply to Cloud Run.
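The distinction above could be made concrete in the docs with a small sketch of splitting long work into invocation-sized steps. This is a conceptual illustration only (plain TypeScript, no Temporal APIs; the function name and step size are made up for the example):

```typescript
// Conceptual sketch: plan a long batch job as [start, end) item ranges so
// that each range is small enough to finish within one serverless
// invocation (e.g. under AWS Lambda's 15-minute execution limit).
// A long-running Workflow can then span these invocations, while each
// individual Activity stays short.
function planSteps(totalItems: number, itemsPerStep: number): Array<[number, number]> {
  const steps: Array<[number, number]> = [];
  for (let start = 0; start < totalItems; start += itemsPerStep) {
    // Each range becomes one short Activity execution.
    steps.push([start, Math.min(start + itemsPerStep, totalItems)]);
  }
  return steps;
}
```

For example, a 10-item job with 4 items per step yields three ranges, each of which fits comfortably inside a single invocation.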
> - **Workloads are long-running.** Serverless platforms enforce execution time limits (for example, AWS Lambda has a 15-minute maximum). Activities that run longer than the provider's timeout need a different hosting strategy.
> - **Workloads require sustained high throughput.** For consistently high-volume Task Queues, long-lived Workers on dedicated compute may be more cost-effective and performant.
> - **You need persistent connections.** Some features require a persistent connection between the Worker and Temporal, which serverless invocations do not maintain.
@smuneebahmad Is this true? Should we comment on how sticky works here?
This is here primarily because in Go it looks like DisableEagerActivities is always set to true (users can't override it). But yeah, if there is a roadmap to remove these limitations, we can remove this bullet.
We can either remove this bullet point, or say:
**You need long running persistent connections**. Some features require a long running persistent connection between the Worker and Temporal, beyond the maximum runtime duration of a serverless worker like AWS Lambda.
I'd love to be specific here if possible. Can we name specific features that require a persistent connection?
I wouldn't want to remove this if the limitation is real and unlikely to change. This page is not meant to push users to serverless. We want to be helpful here so they can choose between serverless and sticking with traditional Workers.
- Fix polling description: serverless workers still poll; the difference is the lifecycle
- Tone down operational overhead claims: customers still deploy and configure
- Clarify that the long-running limitation applies to Activities, not Workflows

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
> Serverless Workers use the same Temporal SDKs as traditional Workers.
> You register Workflows and Activities the same way.
> The difference is in the lifecycle: instead of running a long-lived process, Temporal triggers the compute environment when Tasks arrive, the Worker starts and polls for available Tasks, and then exits when the work is done.
> ...Temporal triggers the compute environment when Tasks arrive...
Recommend using Lambda's nomenclature here: "...Temporal invokes a serverless function when Tasks arrive..."
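The lifecycle being described could also be illustrated with a toy sketch. This is purely conceptual (plain TypeScript, no Temporal SDK; the in-memory array stands in for a Task Queue):

```typescript
// Conceptual sketch: a toy in-memory queue standing in for a Task Queue.
type Task = string;

// Serverless lifecycle: the function is invoked on demand. While alive, the
// Worker still polls for available Tasks; it exits once the queue is drained.
function serverlessInvocation(queue: Task[]): Task[] {
  const processed: Task[] = [];
  while (queue.length > 0) {        // poll while alive
    processed.push(queue.shift()!); // process the next available Task
  }
  return processed;                 // no Tasks left: the invocation ends
}

// A long-lived Worker would instead keep polling even on an empty queue,
// sleeping and retrying until the process is shut down externally.
```

The point of the sketch is the reviewer's correction: the difference is not whether the Worker polls, but how long the process lives around that polling loop.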
> Serverless Workers may not be ideal when:
>
> - **Activities are long-running and cannot be interrupted.** Serverless platforms enforce execution time limits (for example, AWS Lambda has a 15-minute maximum). Activities that run longer than the provider's timeout and cannot be broken into smaller steps need a different hosting strategy. Long-running Workflows are not affected because Workflows can span multiple invocations.
> for example, AWS Lambda has a 15-minute maximum
for example, AWS Lambda has a 15-minute execution limit
> | **Lifecycle** | Long-lived process that runs continuously. | Invoked on demand. Starts and stops per invocation. |
> | **Scaling** | You manage scaling (Kubernetes HPA, instance count, etc.). | Temporal invokes additional instances as needed, within the compute provider's concurrency limits. |
> | **Connection** | Persistent connection to Temporal. | Fresh connection on each invocation. |
> | **Worker Versioning** | Optional but recommended. | Required. |
I don't think "Worker Versioning --> Required" is true.
AWS allows Unqualified Lambdas and that's what we are using for pre-release.
@bchav Do we not need worker versioning? I thought with serverless you had to have a worker deployment version
Just to clarify my nuance: The workers themselves need to be versioned because we need to create Worker Deployment (WD) and then Worker Deployment Version (WDV) in the Compute Config. But the way it's written in the table, it looks like unversioned / unqualified Lambdas won't work.
I see. I will revise the wording here to make that clear.
On second thought, I will just remove this row, as it's a lower-level detail than belongs on this page.
```tsx
value={config.lambdaFunctionName}
onChange={handleFunctionNameChange}
/>
<ConfigField
```
We don't need to specify Min and Max instances for serverless functions like AWS Lambda. These settings are more suited for replica-based compute providers like AWS ECS or Google Cloud Run
```
--deployment-name ${deploymentName} \\
--build-id ${buildId} \\
--aws-lambda-invoke ${lambdaArn} \\
--scaler-min-instances ${scalerMin} \\
```
We can remove --scaler-min-instances and --scaler-max-instances for AWS Lambda example
```
--namespace ${namespace} \\
--deployment-name ${deploymentName} \\
--build-id ${buildId} \\
--ignore-missing-task-queues`;
```
We don't need to specify --ignore-missing-task-queues
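Applying both suggestions, the demo's template-literal flags could be assembled roughly like this. A sketch only: the CLI command prefix is not shown in the quoted diff, so it is elided here too, and the helper name and argument values are made up for illustration:

```typescript
// Sketch of the demo's CLI flag list after dropping --scaler-min-instances,
// --scaler-max-instances, and --ignore-missing-task-queues for the AWS
// Lambda example, per the review comments. Only the flag portion is built;
// the command itself is elided, as in the quoted diff.
function lambdaFlags(
  namespace: string,
  deploymentName: string,
  buildId: string,
  lambdaArn: string,
): string {
  return [
    `--namespace ${namespace}`,
    `--deployment-name ${deploymentName}`,
    `--build-id ${buildId}`,
    `--aws-lambda-invoke ${lambdaArn}`,
  ].join(" \\\n  "); // backslash-continued lines, as in the demo snippet
}
```

This keeps the Lambda example down to the flags the reviewers agreed are needed; replica-based providers like ECS or Cloud Run would presumably add the scaler flags back.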
- Reframe lifecycle description: Temporal invokes the Serverless Worker on demand (bchav, akhayam)
- Clarify operational overhead: offload invocation and scaling, but deployments remain the user's responsibility (bchav)
- Introduce "long-lived Workers" terminology and use consistently
- Sharpen Lambda execution limit wording and Cloud Run callout (akhayam, bchav)
- Remove Worker Versioning row from comparison table as too low-level (akhayam)
- Remove --scaler-min-instances, --scaler-max-instances, and --ignore-missing-task-queues from demo CLI snippets (smuneebahmad)
- Remove Min/Max Instances config fields from demo UI (smuneebahmad)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
18f8d0e into feat/serverless-worker-prerelease
Summary

- ServerlessWorkerDemo component and its CSS module
- Redirect from /evaluate/serverless-workers-demo to /evaluate/serverless-workers/demo
- Change onBrokenLinks and onBrokenAnchors from 'throw' to 'warn' to support incremental PRs with cross-references

This is PR 1 of 4, splitting #4405 into smaller PRs targeting feat/serverless-worker-prerelease.

Test plan

- /evaluate/serverless-workers
- /evaluate/serverless-workers/demo
- Redirect from /evaluate/serverless-workers-demo works

🤖 Generated with Claude Code

Attachments: EDU-6189 docs: Serverless Workers - Evaluate pages (1/4)