You can subscribe to S3 bucket events using three notification methods. Event notifications can invoke Lambda functions, send messages to SQS queues, or publish to SNS topics when objects are created, deleted, or modified.
- `notify_function()` - Subscribe a Lambda function to handle events
- `notify_queue()` - Send events to an SQS queue
- `notify_topic()` - Publish events to an SNS topic
Function Notifications
Subscribe a Lambda function to handle S3 events using notify_function():
```python
@app.run
def run() -> None:
    bucket = Bucket("uploads")

    # Notify on all object created events
    bucket.notify_function(
        "on-upload",
        events=["s3:ObjectCreated:*"],
        function="functions/process_upload.handler",
    )
```
You can also filter notifications by object key prefix or suffix, and configure function options:
```python
@app.run
def run() -> None:
    bucket = Bucket("media")

    # Only trigger for images in the uploads folder
    bucket.notify_function(
        "process-images",
        events=["s3:ObjectCreated:*"],
        filter_prefix="uploads/",
        filter_suffix=".jpg",
        function="functions/process_image.handler",
        memory=512,
        timeout=60,
    )
```
Linking Resources
You can link other resources to your notification function using the links parameter. For details on how linking works and default permissions, see the Linking guide.
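For example, a notification function that also writes to a queue can be linked to it. This is a sketch (resource names are illustrative; the exact permissions and environment variables granted by a link are described in the Linking guide):

```python
@app.run
def run() -> None:
    bucket = Bucket("uploads")
    results = Queue("processing-results")  # illustrative linked resource

    bucket.notify_function(
        "on-upload",
        events=["s3:ObjectCreated:*"],
        function="functions/process_upload.handler",
        links=[results],  # the handler receives access to the queue via the link
    )
```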
Queue Notifications

Send event notifications to an SQS queue for asynchronous processing using notify_queue():
```python
@app.run
def run() -> None:
    bucket = Bucket("orders")
    processing_queue = Queue("order-processing")

    # Send notifications to the queue
    bucket.notify_queue(
        "order-created",
        events=["s3:ObjectCreated:Put"],
        queue=processing_queue,
    )

    # Subscribe a function to process messages from the queue
    processing_queue.subscribe("processor", "functions/process_order.handler")
```
You can also use an existing queue ARN:
```python
@app.run
def run() -> None:
    bucket = Bucket("orders")

    # Send to an external queue (you manage the queue policy)
    bucket.notify_queue(
        "order-created",
        events=["s3:ObjectCreated:Put"],
        queue="arn:aws:sqs:us-east-1:123456789012:my-external-queue",
    )
```
Topic Notifications
Publish event notifications to an SNS topic for fan-out to multiple subscribers using notify_topic():
```python
@app.run
def run() -> None:
    bucket = Bucket("uploads")
    notifications = Topic("upload-notifications")

    # Publish upload events to the topic
    bucket.notify_topic(
        "on-upload",
        events=["s3:ObjectCreated:*"],
        topic=notifications,
    )

    # Subscribe multiple handlers to the topic
    notifications.subscribe("processor", "functions/process_upload.handler")
    notifications.subscribe("logger", "functions/log_upload.handler")
```
You can also use an existing topic ARN:
```python
@app.run
def run() -> None:
    bucket = Bucket("uploads")

    # Send to an external topic (you manage the topic policy)
    bucket.notify_topic(
        "on-upload",
        events=["s3:ObjectCreated:*"],
        topic="arn:aws:sns:us-east-1:123456789012:my-external-topic",
    )
```
Queue and Topic Policy Behavior
Using Stelvio Queue or Topic components: Stelvio automatically creates an SQS QueuePolicy or SNS TopicPolicy resource to allow S3 to publish notifications. These policy resources replace any existing policy on the queue or topic. If you have custom policies, you may need to manage permissions manually.
Using ARN strings (external queue/topic): Stelvio does not create or manage a policy resource. You are responsible for ensuring the queue/topic policy allows S3 to send notifications from the bucket.
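When you pass an ARN string, the queue's (or topic's) own resource policy must grant S3 permission to deliver notifications. A minimal sketch of such an SQS queue policy, built as a plain dict with placeholder ARNs:

```python
import json

# Placeholder ARNs; substitute your own queue and bucket.
queue_arn = "arn:aws:sqs:us-east-1:123456789012:my-external-queue"
bucket_arn = "arn:aws:s3:::my-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "s3.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            # Restrict delivery to notifications originating from this bucket
            "Condition": {"ArnEquals": {"aws:SourceArn": bucket_arn}},
        }
    ],
}
print(json.dumps(policy, indent=2))
```

For an external SNS topic the shape is the same, with `sns:Publish` as the action and the topic ARN as the resource.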
Multiple Notifications
You can add multiple notifications to the same bucket, each with different targets and filter configurations:
```python
@app.run
def run() -> None:
    bucket = Bucket("media")
    processing_queue = Queue("processing")
    alerts = Topic("alerts")

    # Process uploaded images
    bucket.notify_function(
        "process-images",
        events=["s3:ObjectCreated:*"],
        filter_suffix=".jpg",
        function="functions/process_image.handler",
        memory=512,
    )

    # Queue videos for async processing
    bucket.notify_queue(
        "queue-videos",
        events=["s3:ObjectCreated:*"],
        filter_suffix=".mp4",
        queue=processing_queue,
    )

    # Alert on all deletions in the archive folder
    bucket.notify_topic(
        "deletion-alert",
        events=["s3:ObjectRemoved:*"],
        filter_prefix="archive/",
        topic=alerts,
    )
```
Each notification call configures an independent notification with its own target and filters. You can combine different target types (functions, queues, topics) and use different filter_prefix and filter_suffix values to route events precisely.
Notifications must be defined before resource creation
All notifications must be added to the Bucket before its resources are created. Once the Bucket's S3 resources have been provisioned (by accessing .resources), attempting to add new notifications will raise a RuntimeError. Define all your notifications immediately after creating the Bucket instance.
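A sketch of the required ordering (names are illustrative):

```python
@app.run
def run() -> None:
    bucket = Bucket("uploads")

    # Define all notifications first
    bucket.notify_function(
        "on-upload",
        events=["s3:ObjectCreated:*"],
        function="functions/process_upload.handler",
    )

    _ = bucket.resources  # provisions the bucket's S3 resources

    # Any notify_* call from here on raises RuntimeError, e.g.:
    # bucket.notify_function("too-late", events=["s3:ObjectCreated:*"], ...)
```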
Parameters

| Parameter | Description |
|---|---|
| `versioning` | The versioning configuration for the S3 bucket. Boolean. Default is `False`. |
| `access` | The access configuration for the S3 bucket. Either `None` (default) or `'public'`. |
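For example (a sketch using the parameters above):

```python
@app.run
def run() -> None:
    # A versioned bucket with public read access
    bucket = Bucket("files", versioning=True, access="public")
```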
Bucket.notify_function() Parameters
| Parameter | Description |
|---|---|
| `name` | Unique name for this notification subscription. Required. |
| `events` | List of S3 event types to subscribe to. Required. |
| `filter_prefix` | Filter notifications by object key prefix. Optional. |
| `filter_suffix` | Filter notifications by object key suffix. Optional. |
| `function` | Lambda function handler to invoke. Can be a string, `FunctionConfig`, or `FunctionConfigDict`. Optional. |
| `links` | List of links to grant the notification function access to other resources. Optional. |
| `**opts` | Additional function configuration options (`memory`, `timeout`, `environment`, `architecture`, `runtime`, `requirements`, `layers`, `url`). Only valid when `function` is specified as a string. These are unpacked from `FunctionConfigDict`. |
Bucket.notify_queue() Parameters
| Parameter | Description |
|---|---|
| `name` | Unique name for this notification subscription. Required. |
| `events` | List of S3 event types to subscribe to. Required. |
| `filter_prefix` | Filter notifications by object key prefix. Optional. |
| `filter_suffix` | Filter notifications by object key suffix. Optional. |
| `queue` | SQS queue to send notifications to. Can be a `Queue` component or a queue ARN string. Optional. |
Bucket.notify_topic() Parameters
| Parameter | Description |
|---|---|
| `name` | Unique name for this notification subscription. Required. |
| `events` | List of S3 event types to subscribe to. Required. |
| `filter_prefix` | Filter notifications by object key prefix. Optional. |
| `filter_suffix` | Filter notifications by object key suffix. Optional. |
| `topic` | SNS topic to send notifications to. Can be a `Topic` component or a topic ARN string. Optional. |
Resources
| Resource | Description |
|---|---|
| `bucket` | The S3 bucket created by the `Bucket` component. |
| `public_access_block` | The `BucketPublicAccessBlock` resource created by the `Bucket` component. |
| `bucket_policy` | The `BucketPolicy` resource, created if `access` is set to `'public'`. |
| `bucket_notification` | The `BucketNotification` resource, created if any notifications are configured. |
| `subscriptions` | List of `BucketNotifySubscription` components created via the notification methods. |
Static Websites
Stelvio can create and manage S3 buckets for static website hosting using the S3StaticWebsite component.
Create a static website from a directory using the S3StaticWebsite component:
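A minimal sketch (the component name, directory, and domain are illustrative; parameter names are listed in the Parameters table below):

```python
@app.run
def run() -> None:
    website = S3StaticWebsite(
        "docs-site",
        directory="site/",                 # files uploaded to the bucket on deploy
        custom_domain="docs.example.com",  # optional; requires a configured DNS provider
    )
```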
The component:

- Creates a CloudFront distribution for the S3 bucket, so that it is compatible with third-party DNS providers
- Creates an S3 object for each file in the static website directory
- Automatically creates a DNS record for the CloudFront distribution if a DNS provider is configured

Note: Attaching a domain name to an S3 bucket directly (without CloudFront) only works with AWS Route 53, because it requires a CNAME record pointing to the S3 bucket name, and other DNS providers have no access to the bucket name in AWS.
Handling files (assets) of a static website
The S3StaticWebsite component automatically uploads all files in the specified directory to the S3 bucket.
The directory parameter is optional, though. If omitted, an empty S3 bucket is created and you are responsible for uploading the files (assets) to the bucket.
The custom_domain parameter is also optional. If omitted, no DNS record is created for the CloudFront distribution and you can access the static website using the CloudFront domain name (<distribution_id>.cloudfront.net).
In the following example, we use the mkdocs library to build a static website from Markdown files and upload the generated files to the S3 bucket:
```python
@app.run
def run() -> None:
    config = mkdocs.config.load_config("mkdocs.yml")
    mkdocs.commands.build.build(config)

    website = S3StaticWebsite(
        "s3-static-mkdocs",
        custom_domain="s3-2." + CUSTOM_DOMAIN_NAME,
    )

    # Upload files to the bucket
    s3_bucket = website.bucket
    boto3_client = boto3.client("s3")
    boto3_client.put_object(
        Bucket=s3_bucket.bucket_name,
        Key="index.html",
        Body="<h1>Hello, World!</h1>",
    )
```
In both cases, the files uploaded to your static website are treated as part of your infrastructure, so they are deployed automatically whenever you run stlv deploy. In the latter case, however, the uploaded files are not part of your (Pulumi) state and are therefore not tracked.
Using the stlv_resources module, you can access the S3 bucket and manage your static website's files (assets) yourself, should you decide they should not be part of your deployment. Note that, at the moment, you can get the ARN of the bucket via stlv_resources only from within a Lambda function.
Note
If you decide to upload your file assets manually, you must also take care of removing the files from the bucket before running stlv destroy, as the AWS API does not allow deleting a non-empty S3 bucket.
Note
The S3StaticWebsite component is designed to cover most static website use cases. The 404 error handler defaults to error.html; this will be exposed as a parameter in the future.
Exposing a bucket along with other resources
If you want to expose a bucket along with other resources, such as an API Gateway, you can use the Router component.
Parameters
| Parameter | Description |
|---|---|
| `custom_domain` | The custom domain name for the static website. Optional. A `str`. If provided, a DNS record is created for the CloudFront distribution. |
| `directory` | The directory containing the static website files to upload to the S3 bucket. Optional. Either a `Path`-like object or a `str`. |
Resources
| Resource | Description |
|---|---|
| `bucket` | The S3 bucket created for the static website. |
| `files` | The files uploaded to the S3 bucket for the static website. |
| `cloudfront_distribution` | The CloudFront distribution created for the static website. |
Customization
The Bucket component supports the customize parameter to override underlying Pulumi resource properties. For an overview of how customization works, see the Customization guide.