# AWS S3 (Object Storage)

## S3 Buckets

Stelvio supports creating and managing S3 buckets using the `Bucket` component.

Create an S3 bucket and link it to an API Gateway handler:
```python
@app.run
def run() -> None:
    bucket = Bucket("todo-bucket")
    api = Api("todo-api")
    api.route("GET", "/write", handler="functions/write.get", links=[bucket])
    api.route("GET", "/read", handler="functions/read.get", links=[bucket])
```
Using the linking mechanism, you can access the S3 bucket in your Lambda functions with the regular boto3 library:
```python
import boto3

from stlv_resources import Resources


def get(event, context):
    s3_client = boto3.client("s3")
    bucket_name = Resources.todo_bucket.bucket_name
    s3_client.put_object(Bucket=bucket_name, Key="hello.txt", Body="Hello, World!")
    return {"statusCode": 200, "body": "Hello, World!"}
```
### Public Access

By default, all public access is blocked for S3 buckets created with the `Bucket` component.

You can change that behaviour by setting the `access` argument to `'public'`:
```python
@app.run
def run() -> None:
    bucket = Bucket("todo-bucket", access="public")
```
#### How access is implemented

Internally, access to S3 buckets is handled by the `BucketPublicAccessBlock` resource.
It is created with the following parameters:
```python
block_public_acls=<Value>,
block_public_policy=<Value>,
ignore_public_acls=<Value>,
restrict_public_buckets=<Value>,
```

`<Value>` is set to `False` for public access or `True` for private access (the default).
| Parameter | Description |
|---|---|
| `block_public_acls` | Whether to block public ACLs. |
| `block_public_policy` | Whether to block public policies. |
| `ignore_public_acls` | Whether to ignore public ACLs. |
| `restrict_public_buckets` | Whether to restrict public buckets. |
Additionally, if `access` is set to `'public'`, a bucket policy is created that allows public read access (`s3:GetObject`) to the objects in the bucket.
See the Pulumi Documentation for more information.
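For orientation, such a public-read policy boils down to a single `s3:GetObject` statement. A minimal sketch of what it looks like (the exact statement Stelvio generates may differ in details such as the `Sid`; the helper below is illustrative, not Stelvio API):

```python
import json


def public_read_policy(bucket_name: str) -> str:
    # Allow anyone to read objects; the bucket itself stays non-listable.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    })
```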
### Event Notifications

You can subscribe to S3 bucket events using three notification methods. Event notifications can trigger Lambda functions, SQS queues, or SNS topics when objects are created, deleted, or modified.

- `notify_function()` - Subscribe a Lambda function to handle events
- `notify_queue()` - Send events to an SQS queue
- `notify_topic()` - Publish events to an SNS topic
#### Function Notifications

Subscribe a Lambda function to handle S3 events using `notify_function()`:
```python
@app.run
def run() -> None:
    bucket = Bucket("uploads")

    # Notify on all object created events
    bucket.notify_function(
        "on-upload",
        events=["s3:ObjectCreated:*"],
        function="functions/process_upload.handler",
    )
```
You can also filter notifications by object key prefix or suffix, and configure function options:
```python
@app.run
def run() -> None:
    bucket = Bucket("media")

    # Only trigger for images in the uploads folder
    bucket.notify_function(
        "process-images",
        events=["s3:ObjectCreated:*"],
        filter_prefix="uploads/",
        filter_suffix=".jpg",
        function="functions/process_image.handler",
        memory=512,
        timeout=60,
    )
```
#### Linking Resources

You can link other resources to your notification function using the `links` parameter. For details on how linking works and default permissions, see the Linking guide.
```python
bucket.notify_function(
    "on-upload",
    events=["s3:ObjectCreated:*"],
    function="functions/process_upload.handler",
    links=[table, results_queue],
)
```
#### Queue Notifications

Send event notifications to an SQS queue for asynchronous processing using `notify_queue()`:
```python
@app.run
def run() -> None:
    bucket = Bucket("orders")
    processing_queue = Queue("order-processing")

    # Send notifications to queue
    bucket.notify_queue(
        "order-created",
        events=["s3:ObjectCreated:Put"],
        queue=processing_queue,
    )

    # Subscribe a function to process from the queue
    processing_queue.subscribe("processor", "functions/process_order.handler")
```
You can also use an existing queue ARN:
```python
@app.run
def run() -> None:
    bucket = Bucket("orders")

    # Send to an external queue (you manage the queue policy)
    bucket.notify_queue(
        "order-created",
        events=["s3:ObjectCreated:Put"],
        queue="arn:aws:sqs:us-east-1:123456789012:my-external-queue",
    )
```
#### Topic Notifications

Publish event notifications to an SNS topic for fan-out to multiple subscribers using `notify_topic()`:
```python
@app.run
def run() -> None:
    bucket = Bucket("uploads")
    notifications = Topic("upload-notifications")

    # Publish upload events to the topic
    bucket.notify_topic(
        "on-upload",
        events=["s3:ObjectCreated:*"],
        topic=notifications,
    )

    # Subscribe multiple handlers to the topic
    notifications.subscribe("processor", "functions/process_upload.handler")
    notifications.subscribe("logger", "functions/log_upload.handler")
```
You can also use an existing topic ARN:
```python
@app.run
def run() -> None:
    bucket = Bucket("uploads")

    # Send to an external topic (you manage the topic policy)
    bucket.notify_topic(
        "on-upload",
        events=["s3:ObjectCreated:*"],
        topic="arn:aws:sns:us-east-1:123456789012:my-external-topic",
    )
```
#### Queue and Topic Policy Behavior

**Using Stelvio `Queue` or `Topic` components:** Stelvio automatically creates an SQS `QueuePolicy` or SNS `TopicPolicy` resource to allow S3 to publish notifications. These policy resources replace any existing policy on the queue or topic. If you have custom policies, you may need to manage permissions manually.

**Using ARN strings (external queue/topic):** Stelvio does not create or manage a policy resource. You are responsible for ensuring the queue or topic policy allows S3 to send notifications from the bucket.
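For an external queue, the policy you attach needs a statement letting the S3 service send messages, scoped to your bucket and account. A sketch of such a statement (the helper and names are illustrative, not Stelvio API; Stelvio's generated policies may differ in details):

```python
import json


def s3_notification_queue_policy(queue_arn: str, bucket_arn: str, account_id: str) -> str:
    # Grant the S3 service permission to send messages, restricted to one
    # bucket (aws:SourceArn) and one account (aws:SourceAccount).
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "s3.amazonaws.com"},
                "Action": "sqs:SendMessage",
                "Resource": queue_arn,
                "Condition": {
                    "ArnLike": {"aws:SourceArn": bucket_arn},
                    "StringEquals": {"aws:SourceAccount": account_id},
                },
            }
        ],
    })
```

The same shape applies to an external SNS topic, with `sns:Publish` as the action.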
#### Multiple Notifications

You can add multiple notifications to the same bucket, each with different targets and filter configurations:
```python
@app.run
def run() -> None:
    bucket = Bucket("media")
    processing_queue = Queue("processing")
    alerts = Topic("alerts")

    # Process uploaded images
    bucket.notify_function(
        "process-images",
        events=["s3:ObjectCreated:*"],
        filter_suffix=".jpg",
        function="functions/process_image.handler",
        memory=512,
    )

    # Queue videos for async processing
    bucket.notify_queue(
        "queue-videos",
        events=["s3:ObjectCreated:*"],
        filter_suffix=".mp4",
        queue=processing_queue,
    )

    # Alert on all deletions in the archive folder
    bucket.notify_topic(
        "deletion-alert",
        events=["s3:ObjectRemoved:*"],
        filter_prefix="archive/",
        topic=alerts,
    )
```
Each notification call configures an independent notification with its own target and filters. You can combine different target types (functions, queues, topics) and use different `filter_prefix` and `filter_suffix` values to route events precisely.
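Prefix and suffix filters are plain string matches against the object key, with no wildcard or regex support. The routing in the example above follows this rule, sketched here with an illustrative helper:

```python
def notification_matches(key: str, prefix: str = "", suffix: str = "") -> bool:
    # S3 applies filter_prefix/filter_suffix as literal string matches on
    # the object key; an empty filter matches every key.
    return key.startswith(prefix) and key.endswith(suffix)


print(notification_matches("uploads/cat.jpg", prefix="uploads/", suffix=".jpg"))  # True
print(notification_matches("archive/report.mp4", suffix=".jpg"))                  # False
```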
**Note: Notifications must be defined before resource creation**

All notifications must be added to the `Bucket` before its resources are created. Once the Bucket's S3 resources have been provisioned (by accessing `.resources`), attempting to add new notifications will raise a `RuntimeError`. Define all your notifications immediately after creating the `Bucket` instance.
### Available Event Types

| Event Type | Description |
|---|---|
| `s3:ObjectCreated:*` | All object creation events |
| `s3:ObjectCreated:Put` | Object created via PUT |
| `s3:ObjectCreated:Post` | Object created via POST |
| `s3:ObjectCreated:Copy` | Object created via COPY |
| `s3:ObjectCreated:CompleteMultipartUpload` | Multipart upload completed |
| `s3:ObjectRemoved:*` | All object removal events |
| `s3:ObjectRemoved:Delete` | Object deleted |
| `s3:ObjectRemoved:DeleteMarkerCreated` | Delete marker created |
| `s3:ObjectRestore:*` | All object restore events |
| `s3:ObjectTagging:*` | All object tagging events |
| `s3:LifecycleExpiration:*` | All lifecycle expiration events |
| `s3:Replication:*` | All replication events |
For a complete list, see the AWS S3 Event Notifications documentation.
### Parameters

| Parameter | Description |
|---|---|
| `versioning` | The versioning configuration for the S3 bucket. Boolean. Default is `False`. |
| `access` | The access configuration for the S3 bucket. Either `None` (default) or `'public'`. |
#### `Bucket.notify_function()` Parameters

| Parameter | Description |
|---|---|
| `name` | Unique name for this notification subscription (required). |
| `events` | List of S3 event types to subscribe to (required). |
| `filter_prefix` | Filter notifications by object key prefix. Optional. |
| `filter_suffix` | Filter notifications by object key suffix. Optional. |
| `function` | Lambda function handler to invoke. Can be a string, `FunctionConfig`, or `FunctionConfigDict`. Optional. |
| `links` | List of links to grant the notification function access to other resources. Optional. |
| `**opts` | Additional function configuration options (`memory`, `timeout`, `environment`, `architecture`, `runtime`, `requirements`, `layers`, `url`). Only valid when `function` is specified as a string. These are unpacked from `FunctionConfigDict`. |
#### `Bucket.notify_queue()` Parameters

| Parameter | Description |
|---|---|
| `name` | Unique name for this notification subscription (required). |
| `events` | List of S3 event types to subscribe to (required). |
| `filter_prefix` | Filter notifications by object key prefix. Optional. |
| `filter_suffix` | Filter notifications by object key suffix. Optional. |
| `queue` | SQS queue to send notifications to. Can be a `Queue` component or queue ARN string. Optional. |
#### `Bucket.notify_topic()` Parameters

| Parameter | Description |
|---|---|
| `name` | Unique name for this notification subscription (required). |
| `events` | List of S3 event types to subscribe to (required). |
| `filter_prefix` | Filter notifications by object key prefix. Optional. |
| `filter_suffix` | Filter notifications by object key suffix. Optional. |
| `topic` | SNS topic to send notifications to. Can be a `Topic` component or topic ARN string. Optional. |
### Resources

| Resource | Description |
|---|---|
| `bucket` | The S3 bucket created by the `Bucket` component. |
| `public_access_block` | The `BucketPublicAccessBlock` resource created by the `Bucket` component. |
| `bucket_policy` | The `BucketPolicy` resource created by the `Bucket` component if `access` is set to `'public'`. |
| `bucket_notification` | The `BucketNotification` resource if any notifications are configured. |
| `subscriptions` | List of `BucketNotifySubscription` components created via notification methods. |
## Static Websites

Stelvio can create and manage S3 buckets for static website hosting using the `S3StaticWebsite` component.

Create a static website from a directory:
```python
@app.run
def run() -> None:
    config = mkdocs.config.load_config("mkdocs.yml")
    mkdocs.commands.build.build(config)

    _ = S3StaticWebsite(
        "s3-static-mkdocs",
        custom_domain="s3-2." + CUSTOM_DOMAIN_NAME,
        directory="site",
    )
```
The `S3StaticWebsite` component:

- Creates an S3 bucket for static website hosting
- Creates a CloudFront distribution for the S3 bucket, so that it is compatible with third-party DNS providers
    - Attaching a domain name to an S3 bucket (without CloudFront) only works with AWS Route 53, because you'd need to create a CNAME pointing to the S3 bucket name; this is not possible with other DNS providers, as they don't have access to the S3 bucket name in AWS.
- Creates an S3 object for each file in the static website directory
- Automatically creates a DNS record for the CloudFront distribution if a DNS provider is configured
### Handling files (assets) of a static website

The `S3StaticWebsite` component automatically uploads all files in the specified directory to the S3 bucket.

The `directory` parameter is optional, though. If omitted, an empty S3 bucket is created and you are responsible for uploading the files (assets) to the bucket.

The `custom_domain` parameter is also optional. If omitted, no DNS record is created for the CloudFront distribution and you can access the static website using the CloudFront domain name (`<distribution_id>.cloudfront.net`).
In the following example, we use the mkdocs library to build a static website from Markdown files, and then upload files to the S3 bucket manually with boto3:
```python
@app.run
def run() -> None:
    config = mkdocs.config.load_config("mkdocs.yml")
    mkdocs.commands.build.build(config)

    website = S3StaticWebsite(
        "s3-static-mkdocs",
        custom_domain="s3-2." + CUSTOM_DOMAIN_NAME,
    )

    # Upload files to the bucket
    s3_bucket = website.bucket
    boto3_client = boto3.client("s3")
    boto3_client.put_object(
        Bucket=s3_bucket.bucket_name, Key="index.html", Body="<h1>Hello, World!</h1>"
    )
```
In both cases, the files uploaded to your static website are considered part of your infrastructure, so they are deployed automatically whenever you run `stlv deploy`. In the latter case, however, the uploaded files are not part of your (Pulumi) state and are therefore not tracked.

Using the `stlv_resources` module, you can access the S3 bucket and manage the files (assets) of your static website yourself, should you decide that this part should not be part of your deployment. Note that at the moment, you can get the ARN of the bucket via `stlv_resources` only from within a Lambda function.
**Note**

If you decide to upload your file assets manually, you must also take care of removing the files from the bucket before running `stlv destroy`, as the AWS API does not allow deleting a non-empty S3 bucket.
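Emptying the bucket can be done with a small boto3 script run before `stlv destroy`. A sketch (helper names are illustrative; `delete_objects` accepts at most 1,000 keys per call, and for versioned buckets you would also have to delete all object versions and delete markers):

```python
def delete_batches(keys: list[str], batch_size: int = 1000) -> list[list[str]]:
    # Split keys into chunks of at most batch_size (the delete_objects limit).
    return [keys[i : i + batch_size] for i in range(0, len(keys), batch_size)]


def empty_bucket(bucket_name: str) -> None:
    import boto3  # imported here so the batching helper stays dependency-free

    s3 = boto3.client("s3")

    # Collect every object key, paginating through large buckets.
    paginator = s3.get_paginator("list_objects_v2")
    keys = [
        obj["Key"]
        for page in paginator.paginate(Bucket=bucket_name)
        for obj in page.get("Contents", [])
    ]

    # Delete in batches of up to 1,000 keys.
    for batch in delete_batches(keys):
        s3.delete_objects(
            Bucket=bucket_name,
            Delete={"Objects": [{"Key": k} for k in batch]},
        )
```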
**Note**

The `S3StaticWebsite` component is designed to cover most static website use cases. The 404 error handler defaults to `error.html`; this will be exposed to the user as a parameter in the future.
### Exposing a bucket along with other resources

If you want to expose a bucket along with other resources, such as an API Gateway, you can use the `Router` component.
### Parameters

| Parameter | Description |
|---|---|
| `custom_domain` | The custom domain name for the static website. Optional. If provided, a DNS record will be created for the CloudFront distribution. A `str`. |
| `directory` | The directory containing the static website files to be uploaded to the S3 bucket. Optional. Either a `Path`-like object or a `str`. |
### Resources

| Resource | Description |
|---|---|
| `bucket` | The S3 bucket created for the static website. |
| `files` | The files uploaded to the S3 bucket for the static website. |
| `cloudfront_distribution` | The CloudFront distribution created for the static website. |
## Customization

The `Bucket` component supports the `customize` parameter to override underlying Pulumi resource properties. For an overview of how customization works, see the Customization guide.
### Resource Keys

| Resource Key | Pulumi Args Type | Description |
|---|---|---|
| `bucket` | `BucketArgs` | The S3 bucket itself |
| `public_access_block` | `BucketPublicAccessBlockArgs` | Public access block settings |
| `bucket_policy` | `BucketPolicyArgs` | Bucket policy (when `access="public"`) |
| `bucket_notification` | `BucketNotificationArgs` | Bucket notification configuration |
| `subscriptions` | (nested `BucketNotifySubscriptionCustomizationDict`) | Notification subscription resources |
| `function` | `BucketNotificationLambdaFunctionArgs` | Lambda notification config block |
| `queue` | `BucketNotificationQueueArgs` | SQS queue notification config block |
| `topic` | `BucketNotificationTopicArgs` | SNS topic notification config block |
### Example

```python
bucket = Bucket(
    "my-bucket",
    customize={
        "bucket": {
            "force_destroy": True,
            "tags": {"Environment": "dev"},
        }
    },
)
```
### S3StaticWebsite

The `S3StaticWebsite` component supports the `customize` parameter to override underlying Pulumi resource properties.
#### Resource Keys

| Resource Key | Pulumi Args Type | Description |
|---|---|---|
| `bucket` | Nested (see Bucket customization above) | The S3 bucket |
| `files` | `BucketObjectArgs` | Uploaded files from the directory |
| `cloudfront_distribution` | Nested (see CloudFrontDistribution customization) | CloudFront distribution |
#### Example

```python
from stelvio.aws.s3 import S3StaticWebsite

website = S3StaticWebsite(
    "my-website",
    directory="./dist",
    custom_domain="www.example.com",
    customize={
        "bucket": {
            "bucket": {"tags": {"Type": "static-assets"}}
        },
        "cloudfront_distribution": {
            "distribution": {"price_class": "PriceClass_100"}
        },
    },
)
```