+
+The Resource Browser allows you to perform the following actions:
+
+- **Create API**: Create a new API ([`V1`](https://app.localstack.cloud/resources/gateway/v1/new)/[`V2`](https://app.localstack.cloud/resources/gateway/v2/new)) by clicking the **Create API** button in the top-right corner and submitting the new configuration with the **Submit** button.
+- **Edit API**: Edit an API configuration (`V1`/`V2`) by clicking the **Edit API** button in the top-right corner and saving the new configuration with the **Submit** button.
+- **Check the Resources**: Click the **Resources** tab to view the resources associated with the API, along with details such as `Id`, `ParentId`, `Path Part`, `Path`, and the `HTTP` method.
+- **Navigate the Stages**: Click the **Stages** tab to view the stages associated with the API, along with details such as `Deployment Id`, `Stage Name`, `Client Certificate Id`, and more.
+- **Delete API**: Delete an API configuration (`V1`/`V2`) by selecting the resource, clicking the **Remove Selected** button in the top-right corner, and confirming the deletion with the **Continue** button.
+
+You can also use the Resource Browser to check out the **Authorizers**, **Models**, **Request Validators**, **API Keys**, and **Usage Plans**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use API Gateway in LocalStack for various use cases:
+
+- [API Gateway with Custom Domains](https://github.com/localstack/localstack-pro-samples/tree/master/apigw-custom-domain) from our LocalStack Pro samples
+- [Websockets via API Gateway V2](https://github.com/localstack/localstack-pro-samples/tree/master/serverless-websockets)
+- [Serverless Container-based APIs with Amazon ECS and Amazon API Gateway](https://github.com/localstack/serverless-api-ecs-apigateway-sample)
+- [Step-up Authentication using Amazon Cognito, DynamoDB, API Gateway Lambda Authorizer, and Lambda functions](https://github.com/localstack/step-up-auth-sample)
+- [Serverless Microservices with Amazon API Gateway, DynamoDB, SQS, and Lambda](https://github.com/localstack/microservices-apigateway-lambda-dynamodb-sqs-sample)
+- [Note-Taking application using AWS SDK for JavaScript, Amazon DynamoDB, Lambda, Cognito, API Gateway, and S3](https://github.com/localstack/aws-sdk-js-notes-app)
+- For Terraform samples, check out the [LocalStack Terraform examples](https://github.com/localstack/localstack-terraform-samples) repository
diff --git a/src/content/docs/aws/services/app-auto-scaling.md b/src/content/docs/aws/services/app-auto-scaling.md
new file mode 100644
index 00000000..0a358453
--- /dev/null
+++ b/src/content/docs/aws/services/app-auto-scaling.md
@@ -0,0 +1,149 @@
+---
+title: "Application Auto Scaling"
+linkTitle: "Application Auto Scaling"
+description: Get started with Application Auto Scaling on LocalStack
+tags: ["Base"]
+persistence: supported
+---
+
+## Introduction
+
+Application Auto Scaling is a centralized solution for managing automatic scaling by defining scaling policies based on specific metrics.
+It automatically adjusts capacity in response to changes in workload, based on metrics such as CPU utilization or request rates.
+With Application Auto Scaling, you can configure automatic scaling for services such as DynamoDB, ECS, Lambda, ElastiCache, and more.
+Application Auto Scaling uses CloudWatch under the hood and works with scalable targets, each uniquely identified by a service namespace, resource ID, and scalable dimension.
+
+LocalStack allows you to use the Application Auto Scaling APIs in your local environment to scale different resources based on scaling policies and scheduled scaling.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_application-autoscaling" >}}), which provides information on the extent of Application Auto Scaling's integration with LocalStack.
+
+## Getting Started
+
+This guide is designed for users new to Application Auto Scaling and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can configure auto scaling to handle a heavy workload for your Lambda function.
+
+### Create a Lambda Function
+
+To create a new Lambda function, create a new file called `index.js` with the following code:
+
+```js
+exports.handler = async (event, context) => {
+ console.log('Hello from Lambda!');
+ return {
+ statusCode: 200,
+ body: 'Hello, World!'
+ };
+};
+```
+
+Run the following command to create a new Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html) API:
+
+{{< command >}}
+$ zip function.zip index.js
+
+$ awslocal lambda create-function \
+ --function-name autoscaling-example \
+ --runtime nodejs18.x \
+ --zip-file fileb://function.zip \
+ --handler index.handler \
+ --role arn:aws:iam::000000000000:role/cool-stacklifter
+{{< /command >}}
+
+### Create a version and alias for your Lambda function
+
+Next, you can create a version for your Lambda function and publish an alias.
+We will use the [`PublishVersion`](https://docs.aws.amazon.com/cli/latest/reference/lambda/publish-version.html) and [`CreateAlias`](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-alias.html) APIs for this.
+Run the following commands:
+
+{{< command >}}
+$ awslocal lambda publish-version --function-name autoscaling-example
+$ awslocal lambda create-alias \
+ --function-name autoscaling-example \
+ --description "alias for blue version of function" \
+ --function-version 1 \
+ --name BLUE
+{{< /command >}}
+
+### Register the Lambda function as a scalable target
+
+To register the Lambda function as a scalable target, you can use the [`RegisterScalableTarget`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/register-scalable-target.html) API.
+We will specify the `--service-namespace` as `lambda`, `--scalable-dimension` as `lambda:function:ProvisionedConcurrency`, and `--resource-id` as `function:autoscaling-example:BLUE`.
+
+Run the following command to register the scalable target:
+
+{{< command >}}
+$ awslocal application-autoscaling register-scalable-target \
+ --service-namespace lambda \
+ --scalable-dimension lambda:function:ProvisionedConcurrency \
+ --resource-id function:autoscaling-example:BLUE \
+ --min-capacity 0 --max-capacity 0
+{{< /command >}}
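+
+You can confirm the registration using the [`DescribeScalableTargets`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scalable-targets.html) API:
+
+{{< command >}}
+$ awslocal application-autoscaling describe-scalable-targets \
+  --service-namespace lambda
+{{< /command >}}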
+
+### Setting up a scheduled action
+
+You can create a scheduled action that scales out by specifying the `--schedule` parameter with a recurring schedule, expressed as a cron expression.
+Run the following command to create a scheduled action using the [`PutScheduledAction`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scheduled-action.html) API:
+
+{{< command >}}
+$ awslocal application-autoscaling put-scheduled-action \
+  --service-namespace lambda \
+  --scalable-dimension lambda:function:ProvisionedConcurrency \
+  --resource-id function:autoscaling-example:BLUE \
+  --scheduled-action-name lambda-action \
+  --schedule "cron(*/2 * * * ? *)" \
+  --scalable-target-action MinCapacity=1,MaxCapacity=5
+{{< /command >}}
+
+You can confirm that the scheduled action exists using the [`DescribeScheduledActions`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scheduled-actions.html) API:
+
+{{< command >}}
+$ awslocal application-autoscaling describe-scheduled-actions \
+ --service-namespace lambda
+{{< /command >}}
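+
+If the action was created successfully, the response should contain an entry similar to the following (a sketch; field values and formatting may differ in your environment):
+
+```json
+{
+    "ScheduledActions": [
+        {
+            "ScheduledActionName": "lambda-action",
+            "ServiceNamespace": "lambda",
+            "Schedule": "cron(*/2 * * * ? *)",
+            "ResourceId": "function:autoscaling-example:BLUE",
+            "ScalableDimension": "lambda:function:ProvisionedConcurrency",
+            "ScalableTargetAction": {
+                "MinCapacity": 1,
+                "MaxCapacity": 5
+            }
+        }
+    ]
+}
+```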
+
+### Setting up a target tracking scaling policy
+
+You can now set up a target tracking scaling policy to scale based on current resource utilization.
+You can use the [`PutScalingPolicy`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scaling-policy.html) API to create a target tracking scaling policy that keeps a predefined metric at the target value you specify.
+When metrics lack data due to minimal application load, Application Auto Scaling does not adjust capacity.
+
+Run the following command to create a target-tracking scaling policy:
+
+{{< command >}}
+$ awslocal application-autoscaling put-scaling-policy \
+  --service-namespace lambda \
+  --scalable-dimension lambda:function:ProvisionedConcurrency \
+  --resource-id function:autoscaling-example:BLUE \
+  --policy-name scaling-policy --policy-type TargetTrackingScaling \
+  --target-tracking-scaling-policy-configuration '{ "TargetValue": 50.0, "PredefinedMetricSpecification": { "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization" }}'
+{{< /command >}}
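+
+You can verify the policy using the [`DescribeScalingPolicies`](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scaling-policies.html) API:
+
+{{< command >}}
+$ awslocal application-autoscaling describe-scaling-policies \
+  --service-namespace lambda
+{{< /command >}}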
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing Application Auto Scaling resources.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Application Auto Scaling** under the **App Integration** section.
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create API**: Create a new GraphQL API by clicking **Create API** and providing a name for the API, the authentication type, and optional tags, among other parameters.
+- **Edit API**: Click on the GraphQL API name and click **Edit API** to edit the GraphQL API by updating the parameters before clicking **Submit**.
+- **Create Data Source**: Click on the GraphQL API name and click **Data Source**.
+ Click on **Create Data Source** to create a new data source for the GraphQL API, by providing a name for the data source, data source type, and Service Role ARN before clicking **Submit**.
+- **Edit Data Source**: Click on the GraphQL API name and click **Data Source**.
+  Click on the data source name and click **Edit Data Source** to edit the data source by updating the parameters before clicking **Submit**.
+- **Create Types**: Click on the GraphQL API name and click **Types**.
+ Click on **Create Type** to create a type definition, in GraphQL Schema Definition Language (SDL) format, before clicking **Submit**.
+- **Create API Key**: Click on the GraphQL API name and click **API Keys**.
+ Click on **Create API Key** to create an API key for the GraphQL API, by providing a description for the API key and its expiration time before clicking **Submit**.
+- **View and edit Schema**: Click on the GraphQL API name and click **Schema**.
+ You can view the GraphQL schema, and edit the GraphQL schema, in GraphQL Schema Definition Language (SDL) format, before clicking **Update**.
+- **Query**: Click on the GraphQL API name and click **Query**.
+ You can query the GraphQL API by providing the GraphQL query and variables, including the operation and API key, before clicking **Execute**.
+- **Attach Resolver**: Click on the GraphQL API name and click **Resolvers**.
+  Click on **Attach Resolver** to attach a resolver to a field by providing the field name, data source name, Request Mapping Template, and Response Mapping Template, among other parameters, before clicking **Submit**.
+- **Create Function**: Click on the GraphQL API name and click **Functions**.
+  Click on **Create Function** to create a function by providing a name for the function, the data source name, Function Version, Request Mapping Template, and Response Mapping Template, among other parameters, before clicking **Submit**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use AppSync in LocalStack for various use cases:
+
+- [AppSync GraphQL APIs for DynamoDB and RDS Aurora PostgreSQL](https://github.com/localstack/appsync-graphql-api-sample)
diff --git a/src/content/docs/aws/services/athena.md b/src/content/docs/aws/services/athena.md
new file mode 100644
index 00000000..0480a8cb
--- /dev/null
+++ b/src/content/docs/aws/services/athena.md
@@ -0,0 +1,263 @@
+---
+title: "Athena"
+linkTitle: "Athena"
+description: Get started with Athena on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Athena is an interactive query service provided by Amazon Web Services (AWS) that enables you to analyze data stored in S3 using standard SQL queries.
+Athena allows users to create ad-hoc queries to perform data analysis, filter, aggregate, and join datasets stored in S3.
+It supports various file formats, such as JSON, Parquet, and CSV, making it compatible with a wide range of data sources.
+
+LocalStack allows you to configure the Athena APIs with a Hive metastore that can connect to the S3 API and query your data directly in your local environment.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_athena" >}}), which provides information on the extent of Athena's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Athena and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an Athena table, run a query against it, and read the results with the AWS CLI.
+
+{{< callout >}}
+To utilize the Athena API, LocalStack will download additional dependencies.
+This involves getting a Docker image of around 1.5GB, containing Presto, Hive, and other tools.
+These components are retrieved automatically when you initiate the service.
+To ensure a smooth initial setup, ensure you're connected to a stable internet connection while fetching these components for the first time.
+{{< /callout >}}
+
+### Create an S3 bucket
+
+You can create an S3 bucket using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command.
+Run the following command to create a bucket named `athena-bucket`:
+
+{{< command >}}
+$ awslocal s3 mb s3://athena-bucket
+{{< / command >}}
+
+You can create some sample data using the following commands:
+
+{{< command >}}
+$ echo "Name,Service" > data.csv
+$ echo "LocalStack,Athena" >> data.csv
+{{< / command >}}
+
+You can upload the data to your bucket using the [`cp`](https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html) command:
+
+{{< command >}}
+$ awslocal s3 cp data.csv s3://athena-bucket/data/
+{{< / command >}}
+
+### Create an Athena table
+
+You can create an Athena table by submitting a DDL statement through the [`StartQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_StartQueryExecution.html) API.
+Run the following command to create a table named `tbl01`:
+
+{{< command >}}
+$ awslocal athena start-query-execution \
+ --query-string "create external table tbl01 (name STRING, surname STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LOCATION 's3://athena-bucket/data/';" --result-configuration "OutputLocation=s3://athena-bucket/output/"
+{{< / command >}}
+
+You should see output similar to the following:
+
+```json
+{
+ "QueryExecutionId": "593acab7"
+}
+```
+
+You can retrieve information about the query execution using the [`GetQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryExecution.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal athena get-query-execution --query-execution-id 593acab7
+{{< / command >}}
+
+Replace `593acab7` with the `QueryExecutionId` returned by the [`StartQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_StartQueryExecution.html) API.
+
+### Get output of the query
+
+You can get the output of the query using the [`GetQueryResults`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryResults.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal athena get-query-results --query-execution-id 593acab7
+{{< / command >}}
+
+You can now query the `tbl01` table, which reads the data from the S3 location specified in your table creation statement.
+Run the following command:
+
+{{< command >}}
+$ awslocal athena start-query-execution \
+ --query-string "select * from tbl01;" --result-configuration "OutputLocation=s3://athena-bucket/output/"
+{{< / command >}}
+
+You can retrieve the execution details similarly using the [`GetQueryExecution`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryExecution.html) API using the `QueryExecutionId` returned by the previous step.
+
+You can copy the `ResultConfiguration` from the output and use it to retrieve the results of the query.
+Run the following command:
+
+{{< command >}}
+$ awslocal s3 cp s3://athena-bucket/output/593acab7.csv .
+$ cat 593acab7.csv
+{{< / command >}}
+
+Replace `593acab7.csv` with the path to the file that was present in the `ResultConfiguration` of the previous step.
+You can also use the [`GetQueryResults`](https://docs.aws.amazon.com/athena/latest/APIReference/API_GetQueryResults.html) API to retrieve the results of the query.
+
+## Delta Lake Tables
+
+LocalStack Athena supports [Delta Lake](https://delta.io), an open-source storage framework that extends Parquet data files with a file-based transaction log for ACID transactions and scalable metadata handling.
+
+To illustrate this feature, we take a sample published in the [AWS blog](https://aws.amazon.com/blogs/big-data/crawl-delta-lake-tables-using-aws-glue-crawlers).
+
+The Delta Lake files used in this sample are available in a public S3 bucket under `s3://aws-bigdata-blog/artifacts/delta-lake-crawler/sample_delta_table`.
+For your convenience, we have prepared the test files in a downloadable ZIP file [here](https://localstack-assets.s3.amazonaws.com/aws-sample-athena-delta-lake.zip).
+We start by downloading and extracting this ZIP file:
+
+{{< command >}}
+$ mkdir /tmp/delta-lake-sample; cd /tmp/delta-lake-sample
+$ wget https://localstack-assets.s3.amazonaws.com/aws-sample-athena-delta-lake.zip
+$ unzip aws-sample-athena-delta-lake.zip; rm aws-sample-athena-delta-lake.zip
+{{< / command >}}
+
+We can then create an S3 bucket in LocalStack using the [`awslocal`](https://github.com/localstack/awscli-local) command line, and upload the files to the bucket:
+{{< command >}}
+$ awslocal s3 mb s3://test
+$ awslocal s3 sync /tmp/delta-lake-sample s3://test
+{{< / command >}}
+
+Next, we create the table definitions in Athena:
+{{< command >}}
+$ awslocal athena start-query-execution \
+ --query-string "CREATE EXTERNAL TABLE test (product_id string, product_name string, \
+ price bigint, currency string, category string, updated_at double) \
+ LOCATION 's3://test/' TBLPROPERTIES ('table_type'='DELTA')"
+{{< / command >}}
+
+Please note that this query may take some time to finish executing.
+You can observe the output in the LocalStack container (ideally with `DEBUG=1` enabled) to follow the steps of the query execution.
+
+Finally, we can now run a `SELECT` query to extract data from the Delta Lake table we've just created:
+{{< command >}}
+$ queryId=$(awslocal athena start-query-execution --query-string "SELECT * from deltalake.default.test" | jq -r .QueryExecutionId)
+$ awslocal athena get-query-results --query-execution-id $queryId
+{{< / command >}}
+
+The query should yield a result similar to the output below:
+
+```bash
+...
+ "Rows": [
+ {
+ "Data": [
+ { "VarCharValue": "product_id" },
+ { "VarCharValue": "product_name" },
+ { "VarCharValue": "price" },
+ { "VarCharValue": "currency" },
+ { "VarCharValue": "category" },
+ { "VarCharValue": "updated_at" }
+ ]
+ },
+ {
+ "Data": [
+ { "VarCharValue": "00005" },
+ { "VarCharValue": "USB charger" },
+ { "VarCharValue": "50" },
+ { "VarCharValue": "INR" },
+ { "VarCharValue": "Electronics" },
+ { "VarCharValue": "1653462374.9975588" }
+ ]
+ },
+ ...
+...
+```
+
+{{< callout >}}
+The `SELECT` statement above currently requires us to prefix the database/table name with `deltalake.` - this will be further improved in a future iteration, for better parity with AWS.
+{{< /callout >}}
+
+## Iceberg Tables
+
+The LocalStack Athena implementation also supports [Iceberg tables](https://docs.aws.amazon.com/athena/latest/ug/querying-iceberg-creating-tables.html).
+You can define an Iceberg table in Athena using the `CREATE TABLE` statement, as shown in the example below:
+
+```sql
+CREATE TABLE mytable (c1 integer, c2 string, c3 double)
+LOCATION 's3://mybucket/prefix/' TBLPROPERTIES ( 'table_type' = 'ICEBERG' )
+```
+
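+For example, you can create the table and insert a row through the Athena API.
+The following is a minimal sketch, assuming a bucket named `mybucket` already exists:
+
+{{< command >}}
+$ awslocal athena start-query-execution \
+  --query-string "CREATE TABLE mytable (c1 integer, c2 string, c3 double) LOCATION 's3://mybucket/prefix/' TBLPROPERTIES ('table_type' = 'ICEBERG')" \
+  --result-configuration "OutputLocation=s3://mybucket/output/"
+$ awslocal athena start-query-execution \
+  --query-string "INSERT INTO mytable VALUES (1, 'hello', 1.5)" \
+  --result-configuration "OutputLocation=s3://mybucket/output/"
+{{< / command >}}
+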
+Once the table has been created and data inserted into it, you can see the Iceberg metadata and data files being created in S3:
+
+```bash
+s3://mybucket/_tmp.prefix/
+s3://mybucket/prefix/data/00000-0-user1_20230212221600_cd8f8cbd-4dcc-4c3f-96a2-f08d4104d6fb-job_local1695603329_0001-00001.parquet
+s3://mybucket/prefix/data/00000-0-user1_20230212221606_eef1fd88-8ff1-467a-a15b-7a24be7bc52b-job_local1976884152_0002-00001.parquet
+s3://mybucket/prefix/metadata/00000-06706bea-e09d-4ff1-b366-353705634f3a.metadata.json
+s3://mybucket/prefix/metadata/00001-3df6a04e-070d-447c-a213-644fe6633759.metadata.json
+s3://mybucket/prefix/metadata/00002-5dcd5d07-a9ed-4757-a6bc-9e87fcd671d5.metadata.json
+s3://mybucket/prefix/metadata/2f8d3628-bb13-4081-b5a9-30f2e81b7226-m0.avro
+s3://mybucket/prefix/metadata/70de28f7-6507-44ae-b505-618d734174b9-m0.avro
+s3://mybucket/prefix/metadata/snap-8425363304532374388-1-70de28f7-6507-44ae-b505-618d734174b9.avro
+s3://mybucket/prefix/metadata/snap-9068645333036463050-1-2f8d3628-bb13-4081-b5a9-30f2e81b7226.avro
+s3://mybucket/prefix/temp/
+```
+
+## Client configuration
+
+You can configure the Athena service in LocalStack with various clients, such as [PyAthena](https://github.com/laughingman7743/PyAthena/), [awswrangler](https://github.com/aws/aws-sdk-pandas), among others!
+Here are small snippets to get you started:
+
+{{< tabpane lang="python" >}}
+{{< tab header="PyAthena" lang="python" >}}
+from pyathena import connect
+
+conn = connect(
+ s3_staging_dir="s3://s3-results-bucket/output/",
+ region_name="us-east-1",
+ endpoint_url="http://localhost:4566",
+)
+cursor = conn.cursor()
+
+cursor.execute("SELECT 1,2,3 AS test")
+print(cursor.fetchall())
+{{< /tab >}}
+{{< tab header="awswrangler" lang="python" >}}
+import awswrangler as wr
+import pandas as pd
+
+ENDPOINT = "http://localhost:4566"
+DATABASE = "testdb"
+S3_BUCKET = "s3://s3-results-bucket/output/"
+
+wr.config.athena_endpoint_url = ENDPOINT
+wr.config.glue_endpoint_url = ENDPOINT
+wr.config.s3_endpoint_url = ENDPOINT
+wr.catalog.create_database(DATABASE)
+df = wr.athena.read_sql_query("SELECT 1 AS col1, 2 AS col2, 3 AS col3", database=DATABASE)
+print(df)
+{{< /tab >}}
+{{< /tabpane >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for running Athena query executions, writing SQL queries, and visualizing query results.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Athena** under the **Analytics** section.
+
+The Resource Browser allows you to perform the following actions:
+
+- **View Databases**: View the databases available in your Athena instance by clicking on the **Databases** tab.
+- **View Catalogs**: View the catalogs available in your Athena instance by clicking on the **Catalogs** tab.
+- **Edit Catalogs**: Edit the catalogs available in your Athena instance by clicking on the **Catalog name**, editing the catalog, and then clicking on the **Submit** button.
+- **Create Catalogs**: Create a new catalog by clicking on the **Create Catalog** button, entering the catalog details, and then clicking on the **Submit** button.
+- **Run SQL Queries**: Run SQL queries by clicking on the **SQL** button, entering the query, and then clicking on the **Execute** button.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Athena in LocalStack for various use cases:
+
+- [Query data in S3 Bucket with Amazon Athena, Glue Catalog & CloudFormation](https://github.com/localstack/query-data-s3-athena-glue-sample)
diff --git a/src/content/docs/aws/services/auto-scaling.md b/src/content/docs/aws/services/auto-scaling.md
new file mode 100644
index 00000000..ba7d4514
--- /dev/null
+++ b/src/content/docs/aws/services/auto-scaling.md
@@ -0,0 +1,141 @@
+---
+title: "Auto Scaling"
+linkTitle: "Auto Scaling"
+description: Get started with Auto Scaling on LocalStack
+tags: ["Base"]
+---
+
+## Introduction
+
+Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to the demand.
+You can use Auto Scaling to ensure that you are running your desired number of instances.
+
+LocalStack allows you to use the Auto Scaling APIs in your local environment to create and manage Auto Scaling groups.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_autoscaling" >}}), which provides information on the extent of Auto Scaling's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Auto Scaling and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can create a launch template, an Auto Scaling group, and attach an instance to the Auto Scaling group using the AWS CLI.
+
+### Create a launch template
+
+You can create a launch template that defines the launch configuration for the instances in the Auto Scaling group using the [`CreateLaunchTemplate`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateLaunchTemplate.html) API.
+Run the following command to create a launch template:
+
+{{< command >}}
+$ awslocal ec2 create-launch-template \
+ --launch-template-name my-template-for-auto-scaling \
+ --version-description version1 \
+ --launch-template-data '{"ImageId":"ami-ff0fea8310f3","InstanceType":"t2.micro"}'
+{{< /command >}}
+
+The following output is displayed:
+
+```json
+{
+ "LaunchTemplate": {
+ "LaunchTemplateId": "lt-5ccdf1a84f178ba44",
+ "LaunchTemplateName": "my-template-for-auto-scaling",
+ "CreateTime": "2024-07-12T07:59:08+00:00",
+ "CreatedBy": "arn:aws:iam::000000000000:root",
+ "DefaultVersionNumber": 1,
+ "LatestVersionNumber": 1,
+ "Tags": []
+ }
+}
+```
+
+### Create an Auto Scaling group
+
+Before creating an Auto Scaling group, you need to fetch the subnet ID.
+Run the following command to describe the subnets:
+
+{{< command >}}
+$ awslocal ec2 describe-subnets --output text --query Subnets[0].SubnetId
+{{< /command >}}
+
+Copy the subnet ID from the output and use it to create the Auto Scaling group.
+Run the following command to create an Auto Scaling group using the [`CreateAutoScalingGroup`](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_CreateAutoScalingGroup.html) API:
+
+{{< command >}}
+$ awslocal autoscaling create-auto-scaling-group \
+ --auto-scaling-group-name my-asg \
+ --launch-template LaunchTemplateId=lt-5ccdf1a84f178ba44 \
+ --min-size 1 \
+ --max-size 5 \
+ --vpc-zone-identifier 'subnet-d4d16268'
+{{< /command >}}
+
+Replace the launch template ID and the subnet ID with the values returned by the previous commands.
+
+### Describe the Auto Scaling group
+
+You can describe the Auto Scaling group using the [`DescribeAutoScalingGroups`](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_DescribeAutoScalingGroups.html) API.
+Run the following command to describe the Auto Scaling group:
+
+{{< command >}}
+$ awslocal autoscaling describe-auto-scaling-groups
+{{< /command >}}
+
+The following output is displayed:
+
+```json
+{
+ "AutoScalingGroups": [
+ {
+ "AutoScalingGroupName": "my-asg",
+ "AutoScalingGroupARN": "arn:aws:autoscaling:us-east-1:000000000000:autoScalingGroup:74b4ffac-4588-4a7c-86b1-9fa992f49c8e:autoScalingGroupName/my-asg",
+ "LaunchTemplate": {
+ "LaunchTemplateId": "lt-5ccdf1a84f178ba44",
+ "LaunchTemplateName": "my-template-for-auto-scaling"
+ },
+ "MinSize": 1,
+ "MaxSize": 5,
+ ...
+ "Instances": [
+ {
+ "InstanceId": "i-fc01551d496fc363f",
+ "InstanceType": "t2.micro",
+ "AvailabilityZone": "us-east-1a",
+ ...
+ }
+ ],
+ ...
+ "TerminationPolicies": [
+ "Default"
+ ],
+ ...
+ "CapacityRebalance": false
+ }
+ ]
+}
+```
+
+### Attach an instance to the Auto Scaling group
+
+You can attach an instance to the Auto Scaling group using the [`AttachInstances`](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_AttachInstances.html) API.
+
+Before that, create an EC2 instance using the [`RunInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) API.
+Run the following command to create an EC2 instance locally:
+
+{{< command >}}
+$ awslocal ec2 run-instances \
+ --image-id ami-ff0fea8310f3 --count 1
+{{< /command >}}
+
+Fetch the instance ID from the output and use it to attach the instance to the Auto Scaling group.
+Run the following command to attach the instance to the Auto Scaling group:
+
+{{< command >}}
+$ awslocal autoscaling attach-instances \
+ --instance-ids i-0d678c4ecf6018dde \
+ --auto-scaling-group-name my-asg
+{{< /command >}}
+
+Replace `i-0d678c4ecf6018dde` with the instance ID that you fetched from the output.
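+
+You can verify that the instance was attached by describing the group again and inspecting its instance list:
+
+{{< command >}}
+$ awslocal autoscaling describe-auto-scaling-groups \
+  --auto-scaling-group-names my-asg \
+  --query 'AutoScalingGroups[0].Instances[].InstanceId'
+{{< /command >}}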
+
+## Current Limitations
+
+LocalStack does not support the `docker`/`libvirt` [VM manager for EC2]({{< ref "/user-guide/aws/ec2/#vm-managers" >}}).
+It only works with the `mock` VM manager.
diff --git a/src/content/docs/aws/services/backup.md b/src/content/docs/aws/services/backup.md
new file mode 100644
index 00000000..e90413b0
--- /dev/null
+++ b/src/content/docs/aws/services/backup.md
@@ -0,0 +1,149 @@
+---
+title: "Backup"
+linkTitle: "Backup"
+description: Get started with Backup on LocalStack
+tags: ["Ultimate"]
+persistence: supported
+---
+
+## Introduction
+
+Backup is a centralized backup service provided by Amazon Web Services.
+It simplifies the process of backing up and restoring your data across various AWS services and resources.
+Backup supports a wide range of AWS resources, including Elastic Block Store (EBS) volumes, Relational Database Service (RDS) databases, DynamoDB tables, Elastic File System (EFS) file systems, and more.
+Backup enables you to set backup retention policies, allowing you to specify how long you want to retain your backup copies.
+
+LocalStack allows you to use the Backup APIs in your local environment to manage backup plans and create scheduled or on-demand backups of certain resource types.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_backup" >}}), which provides information on the extent of Backup's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Backup and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a backup vault and a backup plan, and how to assign a set of resources to the plan, using the AWS CLI.
+
+### Create a backup vault
+
+You can create a backup vault, which acts as a logical container where backups are stored, using the [`CreateBackupVault`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateBackupVault.html) API.
+Run the following command to create a backup vault named `primary`:
+
+{{< command >}}
+$ awslocal backup create-backup-vault \
+ --backup-vault-name primary
+{{< / command >}}
+
+You should see output similar to the following:
+
+```json
+{
+ "BackupVaultName": "primary",
+ "BackupVaultArn": "arn:aws:backup:us-east-1:000000000000:backup-vault:primary",
+ "CreationDate": 1693286432.432258
+}
+```
+
+### Create a backup plan
+
+You can create a backup plan, which specifies the backup vault to store the backups in and the schedule for creating them.
+You can specify the backup plan in a `backup-plan.json` file:
+
+```json
+{
+ "BackupPlanName": "testplan",
+ "Rules": [{
+ "RuleName": "HalfDayBackups",
+ "TargetBackupVaultName": "primary",
+ "ScheduleExpression": "cron(0 5/12 ? * * *)",
+ "StartWindowMinutes": 480,
+ "CompletionWindowMinutes": 10080,
+ "Lifecycle": {
+ "DeleteAfterDays": 30
+ },
+ "CopyActions": [{
+ "DestinationBackupVaultArn": "arn:aws:backup:us-east-1:000000000000:backup-vault:secondary",
+ "Lifecycle": {
+ "DeleteAfterDays": 30
+ }
+ }]
+ }]
+}
+```
+
+You can use the [`CreateBackupPlan`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateBackupPlan.html) API to create a backup plan.
+Run the following command to create a backup plan:
+
+{{< command >}}
+$ awslocal backup create-backup-plan \
+ --backup-plan file://backup-plan.json
+{{< / command >}}
+
+You should see output similar to the following:
+
+```json
+{
+ "BackupPlanId": "9337aba3",
+ "BackupPlanArn": "arn:aws:backup:us-east-1:000000000000:backup-plan:testplan",
+ "CreationDate": 1693286644.0,
+ "VersionId": "9dc2cb60"
+}
+```
+
+### Create a backup selection
+
+You can create a backup selection, which specifies the resources to back up and the backup plan to associate them with.
+You can specify the backup selection in a `backup-selection.json` file:
+
+```json
+{
+ "SelectionName": "Myselection",
+ "IamRoleArn": "arn:aws:iam::000000000000:role/service-role/AWSBackupDefaultServiceRole",
+ "Resources": ["arn:aws:ec2:us-east-1:000000000000:volume/vol-0abcdef1234"],
+ "ListOfTags": [{
+ "ConditionType": "STRINGEQUALS",
+ "ConditionKey": "backup",
+ "ConditionValue": "yes"
+ }]
+}
+```
+
+You can use the [`CreateBackupSelection`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_CreateBackupSelection.html) API to create a backup selection.
+Run the following command to create a backup selection:
+
+{{< command >}}
+$ awslocal backup create-backup-selection \
+  --backup-plan-id 9337aba3 \
+  --backup-selection file://backup-selection.json
+{{< / command >}}
+
+Replace the `--backup-plan-id` value with the `BackupPlanId` value from the output of the previous command.
+You should see output similar to the following:
+
+```json
+{
+ "SelectionId": "91ce25f8",
+ "BackupPlanId": "9337aba3",
+ "CreationDate": 1693287607.209043
+}
+```
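+
+With the vault and selection in place, you can also trigger an on-demand backup using the [`StartBackupJob`](https://docs.aws.amazon.com/aws-backup/latest/devguide/API_StartBackupJob.html) API.
+The following sketch reuses the resource ARN and IAM role from the selection above:
+
+{{< command >}}
+$ awslocal backup start-backup-job \
+  --backup-vault-name primary \
+  --resource-arn arn:aws:ec2:us-east-1:000000000000:volume/vol-0abcdef1234 \
+  --iam-role-arn arn:aws:iam::000000000000:role/service-role/AWSBackupDefaultServiceRole
+{{< / command >}}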
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing backup plans and vaults.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Backup** under the **Storage** section.
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Backup Plan**: Create a backup plan by clicking the **Create** button in the **Backup Plans** tab and specifying the backup plan details, including the plan name, rules, backup setting, and more in the modal dialog.
+- **Create Backup Vault**: Create a backup vault by clicking the **Create** button in the **Backup Vault** tab and specifying the vault name, tags, and other parameters in the modal dialog.
+- **Create Backup**: Create a backup by clicking the **Backup Vault**, then clicking the **Actions** button, followed by the **Create Backup** button.
+  Specify the backup name, backup vault, and other parameters in the modal dialog.
+- **Assign Resources**: Click the backup plan, then click the **Actions** button, followed by the **Assign Resources** button.
+  Specify the backup plan ID and the resources to assign in the modal dialog, and click **Submit** to assign the resources to the backup plan.
+- **Delete Vault**: Delete a backup vault by clicking the **Backup Vault** or selecting multiple vaults.
+  Click the **Actions** button followed by the **Delete Vault** button, or use **Remove Selected**, to delete an individual vault or multiple vaults respectively.
+- **Delete Backup Plan**: Delete a backup plan by clicking the **Backup Plan** or selecting multiple plans.
+  Click the **Actions** button followed by the **Delete Backup Plan** button, or use **Remove Selected**, to delete an individual plan or multiple plans respectively.
diff --git a/src/content/docs/aws/services/batch.md b/src/content/docs/aws/services/batch.md
new file mode 100644
index 00000000..72a123f7
--- /dev/null
+++ b/src/content/docs/aws/services/batch.md
@@ -0,0 +1,195 @@
+---
+title: Batch
+linkTitle: Batch
+description: Get started with Batch on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Batch is a cloud-based service provided by Amazon Web Services (AWS) that simplifies the process of running batch computing workloads on the AWS cloud infrastructure.
+Batch allows you to efficiently process large volumes of data and run batch jobs without the need to manage and provision underlying compute resources.
+
+LocalStack allows you to use the Batch APIs to automate and scale computational tasks in your local environment while handling batch workloads.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_batch" >}}), which provides information on the extent of Batch integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to AWS Batch and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create and run a Batch job by following these steps:
+
+1. Creating a service role for the compute environment.
+2. Creating the compute environment.
+3. Creating a job queue using the compute environment.
+4. Creating a job definition.
+5. Submitting a job to the job queue.
+
+### Create a service role
+
+You can create a role using the [`CreateRole`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-role.html) API.
+For LocalStack, the service role simply needs to exist.
+However, when [enforcing IAM policies]({{< ref "user-guide/aws/iam#enforcing-iam-policies" >}}), it is necessary that the policy is valid.
+
+Run the following command to create a role with an empty policy document:
+
+{{< command >}}
+$ awslocal iam create-role \
+ --role-name myrole \
+ --assume-role-policy-document "{}"
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+{
+ "Role": {
+ "Path": "/",
+ "RoleName": "myrole",
+ "RoleId": "AROAQAAAAAAAMKIDGTHVC",
+ "Arn": "arn:aws:iam::000000000000:role/myrole",
+ "CreateDate": "2023-08-10T20:52:06.196000Z",
+ "AssumeRolePolicyDocument": {}
+ }
+}
+```
+
+### Create the compute environment
+
+You can use the [`CreateComputeEnvironment`](https://docs.aws.amazon.com/cli/latest/reference/batch/create-compute-environment.html) API to create a compute environment.
+Run the following command, using the role ARN above (`arn:aws:iam::000000000000:role/myrole`), to create the compute environment:
+
+{{< command >}}
+$ awslocal batch create-compute-environment \
+  --compute-environment-name myenv \
+  --type UNMANAGED \
+  --service-role arn:aws:iam::000000000000:role/myrole
+{{< / command >}}
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Stack**: Create a new CloudFormation stack by clicking on **Create Stack** and providing a template file or URL, along with the stack name and parameters.
+- **Edit Stack**: Edit an existing CloudFormation stack by clicking on **Edit Stack**, updating the stack name and parameters, and clicking on **Submit**.
+- **View Stack**: View an existing CloudFormation stack by clicking on the Stack Name and viewing the stack details, including the stack name, status, and resources.
+- **Delete Stack**: Delete an existing CloudFormation stack by selecting the stack, clicking on **Actions**, and then clicking **Remove Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use CloudFormation in LocalStack for various use cases:
+
+- [Serverless Container-based APIs with Amazon ECS & API Gateway](https://github.com/localstack/serverless-api-ecs-apigateway-sample)
+- [Deploying containers on ECS clusters using ECR and Fargate]({{< ref "/tutorials/ecs-ecr-container-app" >}})
+- [Messaging Processing application with SQS, DynamoDB, and Fargate](https://github.com/localstack/sqs-fargate-ddb-cdk-go)
+
+## Feature coverage
+
+{{< callout "tip" >}}
+We are continually enhancing our CloudFormation feature coverage by consistently introducing new resource types.
+Your feature requests assist us in determining the priority of resource additions.
+Feel free to contribute by [creating a new GitHub issue](https://github.com/localstack/localstack/issues/new?assignees=&labels=feature-request&template=feature-request.yml&title=feature+request%3A+%3Ctitle%3E).
+{{< /callout >}}
+
+### Features
+
+| Feature | Support |
+|:--------------------|:------------------------------------------------|
+| Parameters | Partial |
+| Dynamic References | **Full** |
+| Rules | - |
+| Mappings | **Full** |
+| Conditions | **Full** |
+| Transform | **Full** |
+| Outputs | **Full** |
+| Custom resources | Partial |
+| Drift detection | - |
+| Importing Resources | - |
+| Change sets | **Full** |
+| Nested stacks | Partial |
+| StackSets | Partial |
+| Intrinsic Functions | Partial |
+
+{{< callout >}}
+Currently, support for `UPDATE` operations on resources is limited.
+Prefer stack re-creation over stack update at this time.
+{{< /callout >}}
+
+{{< callout >}}
+Currently, support for `NoEcho` parameters is limited.
+Parameters will be masked only in the `Parameters` section of responses to `DescribeStacks` and `DescribeChangeSets` requests.
+This might expose sensitive information.
+Please exercise caution when using parameters with `NoEcho`.
+{{< /callout >}}
+
+### Intrinsic Functions
+
+| Intrinsic Function | Supported | Explanation |
+| ------------------ | --------- | ------------------------------------------------------------ |
+| `Fn::And` | Yes | Performs a logical AND operation on two or more expressions. |
+| `Fn::Or` | Yes | Performs a logical OR operation on two or more expressions. |
+| `Fn::Base64` | Yes | Converts a binary string to a Base64-encoded string. |
+| `Fn::Sub` | Yes | Performs a string substitution operation. |
+| `Fn::Split` | Yes | Splits a string into an array of strings. |
+| `Fn::Length` | Yes | Returns the number of elements in an array. |
+| `Fn::Join` | Yes | Joins an array of strings into a single string. |
+| `Fn::FindInMap` | Yes | Finds a value in a map. |
+| `Fn::Ref` | Yes | References a resource in the template. |
+| `Fn::GetAtt` | Yes | Gets an attribute from a resource. |
+| `Fn::If` | Yes | Performs a conditional evaluation. |
+| `Fn::ImportValue` | Yes | Imports a value exported by another stack. |
+| `Fn::ToJsonString` | No | Converts an object or map into a JSON string. |
+| `Fn::Cidr` | No | Generates a CIDR block from the inputs. |
+| `Fn::GetAZs` | No | Returns a list of the Availability Zones of a region. |
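+
+As an illustration, the following minimal template combines several of the supported functions; the resource and parameter names are hypothetical:
+
+```json
+{
+    "Parameters": {
+        "Stage": { "Type": "String", "Default": "dev" }
+    },
+    "Resources": {
+        "ArtifactBucket": {
+            "Type": "AWS::S3::Bucket",
+            "Properties": {
+                "BucketName": { "Fn::Sub": "my-app-${Stage}-artifacts" }
+            }
+        }
+    },
+    "Outputs": {
+        "BucketArn": {
+            "Value": { "Fn::GetAtt": ["ArtifactBucket", "Arn"] }
+        }
+    }
+}
+```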
+
+### Resources
+
+{{< callout >}}
+When utilizing the Community image, any resources within the stack that are not supported will be disregarded and won't be deployed.
+{{< /callout >}}
+
+#### Community image
+
+| Resource | Create | Delete | Update |
+|---------------------------------------------|-------:|-------:|-------:|
+| AWS::Amplify::Branch | ✅ | ✅ | - |
+| AWS::ApiGateway::Account | ✅ | ✅ | - |
+| AWS::ApiGateway::ApiKey | ✅ | ✅ | - |
+| AWS::ApiGateway::BasePathMapping | ✅ | ✅ | - |
+| AWS::ApiGateway::Deployment | ✅ | ✅ | - |
+| AWS::ApiGateway::DomainName | ✅ | ✅ | - |
+| AWS::ApiGateway::GatewayResponse | ✅ | ✅ | - |
+| AWS::ApiGateway::Method | ✅ | ✅ | ✅ |
+| AWS::ApiGateway::Model | ✅ | ✅ | - |
+| AWS::ApiGateway::RequestValidator | ✅ | ✅ | - |
+| AWS::ApiGateway::Resource | ✅ | ✅ | - |
+| AWS::ApiGateway::RestApi | ✅ | ✅ | - |
+| AWS::ApiGateway::Stage | ✅ | ✅ | - |
+| AWS::ApiGateway::UsagePlan | ✅ | ✅ | ✅ |
+| AWS::ApiGateway::UsagePlanKey | ✅ | ✅ | - |
+| AWS::AutoScaling::AutoScalingGroup | ✅ | ✅ | - |
+| AWS::AutoScaling::LaunchConfiguration | ✅ | ✅ | - |
+| AWS::CDK::Metadata | ✅ | ✅ | ✅ |
+| AWS::CertificateManager::Certificate | ✅ | ✅ | - |
+| AWS::CloudFormation::Macro | ✅ | ✅ | - |
+| AWS::CloudFormation::Stack | ✅ | ✅ | - |
+| AWS::CloudFormation::WaitCondition | ✅ | ✅ | - |
+| AWS::CloudFormation::WaitConditionHandle | ✅ | ✅ | - |
+| AWS::CloudWatch::Alarm | ✅ | ✅ | - |
+| AWS::CloudWatch::CompositeAlarm | ✅ | ✅ | - |
+| AWS::DynamoDB::GlobalTable | ✅ | ✅ | - |
+| AWS::DynamoDB::Table | ✅ | ✅ | - |
+| AWS::EC2::DHCPOptions | ✅ | ✅ | - |
+| AWS::EC2::Instance | ✅ | ✅ | ✅ |
+| AWS::EC2::InternetGateway | ✅ | ✅ | - |
+| AWS::EC2::KeyPair | ✅ | ✅ | - |
+| AWS::EC2::NatGateway | ✅ | ✅ | - |
+| AWS::EC2::NetworkAcl | ✅ | ✅ | - |
+| AWS::EC2::Route | ✅ | ✅ | - |
+| AWS::EC2::RouteTable | ✅ | ✅ | - |
+| AWS::EC2::SecurityGroup | ✅ | ✅ | - |
+| AWS::EC2::Subnet | ✅ | ✅ | - |
+| AWS::EC2::SubnetRouteTableAssociation | ✅ | ✅ | - |
+| AWS::EC2::TransitGateway | ✅ | ✅ | - |
+| AWS::EC2::TransitGatewayAttachment | ✅ | ✅ | - |
+| AWS::EC2::VPC | ✅ | ✅ | - |
+| AWS::EC2::VPCGatewayAttachment | ✅ | ✅ | - |
+| AWS::ECR::Repository | ✅ | ✅ | - |
+| AWS::Elasticsearch::Domain | ✅ | ✅ | - |
+| AWS::Events::ApiDestination | ✅ | ✅ | - |
+| AWS::Events::Connection | ✅ | ✅ | - |
+| AWS::Events::EventBus | ✅ | ✅ | - |
+| AWS::Events::EventBusPolicy | ✅ | ✅ | - |
+| AWS::Events::Rule | ✅ | ✅ | - |
+| AWS::IAM::AccessKey | ✅ | ✅ | ✅ |
+| AWS::IAM::Group | ✅ | ✅ | - |
+| AWS::IAM::InstanceProfile | ✅ | ✅ | - |
+| AWS::IAM::ManagedPolicy | ✅ | ✅ | - |
+| AWS::IAM::Policy | ✅ | ✅ | ✅ |
+| AWS::IAM::Role | ✅ | ✅ | ✅ |
+| AWS::IAM::ServiceLinkedRole | ✅ | ✅ | - |
+| AWS::IAM::User | ✅ | ✅ | - |
+| AWS::KMS::Alias | ✅ | ✅ | - |
+| AWS::KMS::Key | ✅ | ✅ | - |
+| AWS::Kinesis::Stream | ✅ | ✅ | - |
+| AWS::Kinesis::StreamConsumer | ✅ | ✅ | - |
+| AWS::KinesisFirehose::DeliveryStream | ✅ | ✅ | - |
+| AWS::Lambda::Alias | ✅ | ✅ | - |
+| AWS::Lambda::CodeSigningConfig | ✅ | ✅ | - |
+| AWS::Lambda::EventInvokeConfig | ✅ | ✅ | - |
+| AWS::Lambda::EventSourceMapping | ✅ | ✅ | - |
+| AWS::Lambda::Function | ✅ | ✅ | ✅ |
+| AWS::Lambda::LayerVersion | ✅ | ✅ | - |
+| AWS::Lambda::LayerVersionPermission | ✅ | ✅ | - |
+| AWS::Lambda::Permission | ✅ | ✅ | ✅ |
+| AWS::Lambda::Url | ✅ | ✅ | - |
+| AWS::Lambda::Version | ✅ | ✅ | - |
+| AWS::Logs::LogGroup | ✅ | ✅ | - |
+| AWS::Logs::LogStream | ✅ | ✅ | - |
+| AWS::Logs::SubscriptionFilter | ✅ | ✅ | - |
+| AWS::OpenSearchService::Domain | ✅ | ✅ | - |
+| AWS::Redshift::Cluster | ✅ | ✅ | - |
+| AWS::ResourceGroups::Group | ✅ | ✅ | - |
+| AWS::Route53::HealthCheck | ✅ | ✅ | - |
+| AWS::Route53::RecordSet | ✅ | ✅ | - |
+| AWS::S3::Bucket | ✅ | ✅ | - |
+| AWS::S3::BucketPolicy | ✅ | ✅ | - |
+| AWS::SNS::Subscription | ✅ | ✅ | ✅ |
+| AWS::SNS::Topic | ✅ | ✅ | - |
+| AWS::SNS::TopicPolicy | ✅ | ✅ | - |
+| AWS::SQS::Queue | ✅ | ✅ | ✅ |
+| AWS::SQS::QueuePolicy | ✅ | ✅ | ✅ |
+| AWS::SSM::MaintenanceWindow | ✅ | ✅ | - |
+| AWS::SSM::MaintenanceWindowTarget | ✅ | ✅ | - |
+| AWS::SSM::MaintenanceWindowTask | ✅ | ✅ | - |
+| AWS::SSM::Parameter | ✅ | ✅ | ✅ |
+| AWS::SSM::PatchBaseline | ✅ | ✅ | - |
+| AWS::Scheduler::Schedule | ✅ | ✅ | - |
+| AWS::Scheduler::ScheduleGroup | ✅ | ✅ | - |
+| AWS::SecretsManager::ResourcePolicy | ✅ | ✅ | - |
+| AWS::SecretsManager::RotationSchedule | ✅ | ✅ | - |
+| AWS::SecretsManager::Secret | ✅ | ✅ | - |
+| AWS::SecretsManager::SecretTargetAttachment | ✅ | ✅ | - |
+| AWS::ServiceDiscovery::HttpNamespace | ✅ | ✅ | - |
+| AWS::ServiceDiscovery::PrivateDnsNamespace | ✅ | ✅ | - |
+| AWS::ServiceDiscovery::PublicDnsNamespace | ✅ | ✅ | - |
+| AWS::ServiceDiscovery::Service | ✅ | ✅ | - |
+| AWS::StepFunctions::Activity | ✅ | ✅ | - |
+| AWS::StepFunctions::StateMachine | ✅ | ✅ | ✅ |
+| AWS::Timestream::Database | ✅ | ✅ | - |
+| AWS::Timestream::Table | ✅ | ✅ | - |
+
+#### Pro image
+
+| Resource | Create | Delete | Update |
+|-------------------------------------------------|-------:|-------:|-------:|
+| AWS::Amplify::App | ✅ | ✅ | - |
+| AWS::ApiGateway::Authorizer | ✅ | ✅ | - |
+| AWS::ApiGateway::VpcLink | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::Api | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::ApiMapping | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::Authorizer | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::Deployment | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::DomainName | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::Integration | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::IntegrationResponse | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::Route | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::RouteResponse | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::Stage | ✅ | ✅ | - |
+| AWS::ApiGatewayV2::VpcLink | ✅ | ✅ | - |
+| AWS::AppConfig::Application | ✅ | ✅ | - |
+| AWS::AppConfig::ConfigurationProfile | ✅ | ✅ | - |
+| AWS::AppConfig::Deployment | ✅ | ✅ | - |
+| AWS::AppConfig::DeploymentStrategy | ✅ | ✅ | - |
+| AWS::AppConfig::Environment | ✅ | ✅ | - |
+| AWS::AppConfig::HostedConfigurationVersion | ✅ | ✅ | - |
+| AWS::AppSync::ApiKey | ✅ | ✅ | - |
+| AWS::AppSync::DataSource | ✅ | ✅ | - |
+| AWS::AppSync::FunctionConfiguration | ✅ | ✅ | - |
+| AWS::AppSync::GraphQLApi | ✅ | ✅ | - |
+| AWS::AppSync::GraphQLSchema | ✅ | ✅ | - |
+| AWS::AppSync::Resolver | ✅ | ✅ | ✅ |
+| AWS::ApplicationAutoScaling::ScalableTarget | ✅ | ✅ | - |
+| AWS::ApplicationAutoScaling::ScalingPolicy | ✅ | ✅ | - |
+| AWS::Athena::DataCatalog | ✅ | ✅ | - |
+| AWS::Athena::NamedQuery | ✅ | ✅ | - |
+| AWS::Athena::WorkGroup | ✅ | ✅ | - |
+| AWS::Backup::BackupPlan | ✅ | ✅ | - |
+| AWS::Batch::ComputeEnvironment | ✅ | ✅ | - |
+| AWS::Batch::JobDefinition | ✅ | ✅ | - |
+| AWS::Batch::JobQueue | ✅ | ✅ | - |
+| AWS::CloudFormation::CustomResource | ✅ | - | - |
+| AWS::CloudFront::CachePolicy | ✅ | ✅ | - |
+| AWS::CloudFront::CloudFrontOriginAccessIdentity | ✅ | ✅ | - |
+| AWS::CloudFront::Distribution | ✅ | ✅ | - |
+| AWS::CloudFront::Function | ✅ | ✅ | - |
+| AWS::CloudFront::OriginAccessControl | ✅ | ✅ | - |
+| AWS::CloudFront::OriginRequestPolicy | ✅ | ✅ | - |
+| AWS::Cloudfront::ResponseHeadersPolicy | ✅ | ✅ | - |
+| AWS::CloudTrail::Trail | ✅ | ✅ | - |
+| AWS::Cognito::IdentityPool | ✅ | ✅ | - |
+| AWS::Cognito::IdentityPoolRoleAttachment | ✅ | ✅ | - |
+| AWS::Cognito::UserPool | ✅ | ✅ | - |
+| AWS::Cognito::UserPoolClient | ✅ | ✅ | - |
+| AWS::Cognito::UserPoolDomain | ✅ | ✅ | - |
+| AWS::Cognito::UserPoolGroup | ✅ | ✅ | - |
+| AWS::Cognito::UserPoolIdentityProvider | ✅ | ✅ | - |
+| AWS::Cognito::UserPoolResourceServer | ✅ | ✅ | - |
+| AWS::DocDB::DBCluster | ✅ | ✅ | - |
+| AWS::DocDB::DBClusterParameterGroup | ✅ | ✅ | - |
+| AWS::DocDB::DBInstance | ✅ | ✅ | - |
+| AWS::DocDB::DBSubnetGroup | ✅ | ✅ | - |
+| AWS::EC2::EIP | ✅ | ✅ | - |
+| AWS::EC2::LaunchTemplate | ✅ | ✅ | - |
+| AWS::EC2::PrefixList | ✅ | ✅ | - |
+| AWS::EC2::SecurityGroupEgress | ✅ | ✅ | - |
+| AWS::EC2::SecurityGroupIngress | ✅ | ✅ | - |
+| AWS::EC2::SubnetRouteTableAssociation | ✅ | ✅ | - |
+| AWS::EC2::VpcEndpoint | ✅ | ✅ | - |
+| AWS::EC2::VPCCidrBlock | ✅ | ✅ | - |
+| AWS::EC2::VPCEndpoint | ✅ | ✅ | - |
+| AWS::EC2::VPCEndpointService | ✅ | ✅ | - |
+| AWS::ECS::CapacityProvider | ✅ | ✅ | - |
+| AWS::ECS::Cluster | ✅ | ✅ | - |
+| AWS::ECS::ClusterCapacityProviderAssociations | ✅ | ✅ | - |
+| AWS::ECS::Service | ✅ | ✅ | - |
+| AWS::ECS::TaskDefinition | ✅ | ✅ | - |
+| AWS::EFS::AccessPoint | ✅ | ✅ | - |
+| AWS::EFS::FileSystem | ✅ | ✅ | - |
+| AWS::EFS::MountTarget | ✅ | ✅ | - |
+| AWS::EKS::Cluster | ✅ | ✅ | - |
+| AWS::EKS::FargateProfile | ✅ | ✅ | - |
+| AWS::EKS::Nodegroup | ✅ | ✅ | - |
+| AWS::ElastiCache::CacheCluster | ✅ | ✅ | - |
+| AWS::ElastiCache::ParameterGroup | ✅ | ✅ | - |
+| AWS::ElastiCache::ReplicationGroup | ✅ | ✅ | - |
+| AWS::ElastiCache::SecurityGroup | ✅ | ✅ | - |
+| AWS::ElastiCache::SubnetGroup | ✅ | ✅ | - |
+| AWS::ElasticBeanstalk::Application | ✅ | ✅ | - |
+| AWS::ElasticBeanstalk::ApplicationVersion | ✅ | ✅ | - |
+| AWS::ElasticBeanstalk::ConfigurationTemplate | ✅ | ✅ | - |
+| AWS::ElasticBeanstalk::Environment | ✅ | ✅ | - |
+| AWS::ElasticLoadBalancingV2::Listener | ✅ | ✅ | - |
+| AWS::ElasticLoadBalancingV2::ListenerRule | ✅ | ✅ | - |
+| AWS::ElasticLoadBalancingV2::LoadBalancer | ✅ | ✅ | - |
+| AWS::ElasticLoadBalancingV2::TargetGroup | ✅ | ✅ | - |
+| AWS::Glue::Classifier | ✅ | ✅ | - |
+| AWS::Glue::Crawler | ✅ | ✅ | - |
+| AWS::Glue::Connection | ✅ | ✅ | - |
+| AWS::Glue::Database | ✅ | ✅ | - |
+| AWS::Glue::Job | ✅ | ✅ | - |
+| AWS::Glue::Registry | ✅ | ✅ | - |
+| AWS::Glue::SchemaVersion | ✅ | ✅ | - |
+| AWS::Glue::SchemaVersionMetadata | ✅ | ✅ | - |
+| AWS::Glue::Table | ✅ | ✅ | - |
+| AWS::Glue::Trigger | ✅ | ✅ | - |
+| AWS::Glue::Workflow | ✅ | ✅ | - |
+| AWS::IoT::Certificate | ✅ | ✅ | - |
+| AWS::IoT::Policy | ✅ | ✅ | - |
+| AWS::IoT::RoleAlias | ✅ | ✅ | - |
+| AWS::IoT::Thing | ✅ | ✅ | - |
+| AWS::IoT::TopicRule | ✅ | ✅ | - |
+| AWS::IoTAnalytics::Channel | ✅ | ✅ | - |
+| AWS::IoTAnalytics::Dataset | ✅ | ✅ | - |
+| AWS::IoTAnalytics::Datastore | ✅ | ✅ | - |
+| AWS::IoTAnalytics::Pipeline | ✅ | ✅ | - |
+| AWS::KinesisAnalytics::Application | ✅ | ✅ | - |
+| AWS::KinesisAnalytics::ApplicationOutput | ✅ | ✅ | - |
+| AWS::MSK::Cluster | ✅ | ✅ | - |
+| AWS::Neptune::DBCluster | ✅ | ✅ | - |
+| AWS::Neptune::DBClusterParameterGroup | ✅ | ✅ | - |
+| AWS::Neptune::DBInstance | ✅ | ✅ | - |
+| AWS::Neptune::DBParameterGroup | ✅ | ✅ | - |
+| AWS::Neptune::DBSubnetGroup | ✅ | ✅ | - |
+| AWS::Pipes::Pipe | ✅ | ✅ | - |
+| AWS::QLDB::Ledger | ✅ | ✅ | - |
+| AWS::RDS::DBCluster | ✅ | ✅ | - |
+| AWS::RDS::DBClusterParameterGroup | ✅ | ✅ | - |
+| AWS::RDS::DBInstance | ✅ | ✅ | - |
+| AWS::RDS::DBParameterGroup | ✅ | ✅ | - |
+| AWS::RDS::DBProxy | ✅ | ✅ | - |
+| AWS::RDS::DBProxyTargetGroup | ✅ | ✅ | - |
+| AWS::RDS::DBSubnetGroup | ✅ | ✅ | - |
+| AWS::RDS::GlobalCluster | ✅ | ✅ | - |
+| AWS::Redshift::ClusterParameterGroup | ✅ | ✅ | - |
+| AWS::Redshift::ClusterSecurityGroup | ✅ | ✅ | - |
+| AWS::Redshift::ClusterSubnetGroup | ✅ | ✅ | - |
+| AWS::Route53::HostedZone | ✅ | ✅ | - |
+| AWS::SageMaker::Endpoint | ✅ | ✅ | - |
+| AWS::SageMaker::EndpointConfig | ✅ | ✅ | - |
+| AWS::SageMaker::Model | ✅ | ✅ | - |
+| AWS::SES::ReceiptRule | ✅ | ✅ | - |
+| AWS::SES::ReceiptRuleSet | ✅ | ✅ | - |
+| AWS::SES::Template | ✅ | ✅ | ✅ |
+| AWS::SecretsManager::SecretTargetAttachment | ✅ | ✅ | - |
+| AWS::VerifiedPermissions::IdentitySource | ✅ | ✅ | - |
+| AWS::VerifiedPermissions::Policy | ✅ | ✅ | - |
+| AWS::VerifiedPermissions::PolicyStore | ✅ | ✅ | - |
+| AWS::VerifiedPermissions::PolicyTemplate | ✅ | ✅ | - |
+| AWS::WAFv2::IPSet | ✅ | ✅ | - |
+| AWS::WAFv2::LoggingConfiguration | ✅ | ✅ | - |
+| AWS::WAFv2::WebACL | ✅ | ✅ | - |
+| AWS::WAFv2::WebACLAssociation | ✅ | ✅ | - |
diff --git a/src/content/docs/aws/services/cloudfront.md b/src/content/docs/aws/services/cloudfront.md
new file mode 100644
index 00000000..74adac0f
--- /dev/null
+++ b/src/content/docs/aws/services/cloudfront.md
@@ -0,0 +1,119 @@
+---
+title: "CloudFront"
+linkTitle: "CloudFront"
+description: Get started with CloudFront on LocalStack
+tags: ["Base"]
+persistence: supported
+---
+
+## Introduction
+
+CloudFront is a content delivery network (CDN) service provided by Amazon Web Services (AWS).
+CloudFront distributes web content, videos, applications, and APIs with low latency and high data transfer speeds.
+CloudFront APIs allow you to configure distributions, customize cache behavior, secure content with access controls, and monitor the CDN's performance through real-time metrics.
+
+LocalStack allows you to use the CloudFront APIs in your local environment to create local CloudFront distributions to transparently access your applications and file artifacts.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_cloudfront" >}}), which provides information on the extent of CloudFront's integration with LocalStack.
+
+## Getting started
+
+This guide is intended for users who wish to get more acquainted with CloudFront over LocalStack.
+It assumes you have basic knowledge of the AWS CLI (and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script).
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can create an S3 bucket, put a text file named `hello.txt` to the bucket, and then create a CloudFront distribution which makes the file accessible via a `https://abc123.cloudfront.net/hello.txt` proxy URL (where `abc123` is a placeholder for the real distribution ID).
+
+To get started, create an S3 bucket using the `mb` command:
+
+{{< command >}}
+$ awslocal s3 mb s3://abc123
+{{< / command >}}
+
+You can now go ahead and create a new text file named `hello.txt`, then upload it to the bucket:
+
+{{< command >}}
+$ echo 'Hello World' > /tmp/hello.txt
+$ awslocal s3 cp /tmp/hello.txt s3://abc123/hello.txt --acl public-read
+{{< / command >}}
+
+After uploading the file to S3, you can create a CloudFront distribution using the [`CreateDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html) API call.
+Run the following command to create a distribution with the default settings:
+
+{{< command >}}
+$ domain=$(awslocal cloudfront create-distribution \
+ --origin-domain-name abc123.s3.amazonaws.com | jq -r '.Distribution.DomainName')
+$ curl -k https://$domain/hello.txt
+{{< / command >}}
+
+{{< callout "tip" >}}
+If you wish to use CloudFront from the host system, ensure your local DNS setup is correctly configured.
+Refer to the section on [System DNS configuration]({{< ref "dns-server#system-dns-configuration" >}}) for details.
+{{< /callout >}}
+
+In the example provided above, be aware that the final command (`curl -k https://$domain/hello.txt`) might encounter a temporary failure accompanied by a `Could not resolve host` warning.
+This can occur because different operating systems adopt diverse DNS caching strategies, causing a delay in the availability of the CloudFront distribution's DNS name (e.g., `abc123.cloudfront.net`) within the system.
+Typically, after a few retries, the command should succeed.
+It's worth noting that similar behavior can be observed in the actual AWS environment, where CloudFront DNS names may take up to 10-15 minutes to propagate across the network.
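+
+If you run into this, a simple retry loop is usually enough (a sketch; adjust the timing to your needs):
+
+{{< command >}}
+$ until curl -k --fail --silent https://$domain/hello.txt; do sleep 1; done
+{{< / command >}}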
+
+## Lambda@Edge
+
+{{< callout "note">}}
+We’re introducing an early, incomplete, and experimental feature that emulates AWS CloudFront Lambda@Edge, starting with version 4.3.0.
+
+It enables running Lambda functions at simulated edge locations.
+This allows you to locally test and develop request/response modifications, security enhancements and more.
+
+This feature is still under development, and functionality is limited.
+{{< /callout >}}
+
+You can enable this feature by setting `CLOUDFRONT_LAMBDA_EDGE=1` in your LocalStack configuration.
+
+### Current features
+
+- Support for [`CreateDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateDistribution.html) API to set up CloudFront distributions with Lambda@Edge.
+- Support for modifying request and response headers dynamically.
+- Support for [`IncludeBody`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_LambdaFunctionAssociation.html#cloudfront-Type-LambdaFunctionAssociation-IncludeBody) parameter.
+- Support for Node.js and Python 3.x runtimes.
+
+### Current limitations
+
+- The [`UpdateDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html), [`DeleteDistribution`](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_DeleteDistribution.html), and [`Persistence Restore`]({{< ref "persistence" >}}) features are not yet supported for Lambda@Edge.
+- The `origin-request` and `origin-response` event types currently trigger for each request because caching is not implemented in CloudFront.
+
+## Using custom URLs
+
+LocalStack Pro supports using an alternate domain name, also referred to as a `CNAME` or custom domain name, to access your applications and file artifacts instead of relying on the domain name generated by CloudFront for your distribution.
+
+To set up the custom domain name, you must configure it in your local DNS server.
+Once that is done, you can designate the desired domain name as an alias for the target distribution.
+To achieve this, you'll need to provide the `Aliases` field in the `--distribution-config` option when creating or updating a distribution.
+The format of this structure is similar to the one used in [AWS CloudFront options](https://docs.aws.amazon.com/cli/latest/reference/cloudfront/create-distribution.html#options).
+
+In the given example, two domains are specified as `Aliases` for a distribution.
+Please note that a complete configuration would entail additional values relevant to the distribution, which have been omitted here for brevity.
+
+{{< command >}}
+--distribution-config '{..., "Aliases": {"Quantity": 2, "Items": ["custom.domain.one", "customDomain.two"]}, ...}'
+{{< / command >}}
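+
+For illustration, a more complete invocation might look like the following sketch; the `CallerReference`, origin `Id`, and bucket domain are placeholder values, and the cache behavior is deliberately minimal:
+
+{{< command >}}
+$ awslocal cloudfront create-distribution --distribution-config '{
+    "CallerReference": "my-distribution-001",
+    "Comment": "Distribution with custom domain aliases",
+    "Enabled": true,
+    "Aliases": {"Quantity": 2, "Items": ["custom.domain.one", "customDomain.two"]},
+    "Origins": {"Quantity": 1, "Items": [{"Id": "origin-1", "DomainName": "abc123.s3.amazonaws.com", "S3OriginConfig": {"OriginAccessIdentity": ""}}]},
+    "DefaultCacheBehavior": {
+        "TargetOriginId": "origin-1",
+        "ViewerProtocolPolicy": "allow-all",
+        "MinTTL": 0,
+        "ForwardedValues": {"QueryString": false, "Cookies": {"Forward": "none"}},
+        "TrustedSigners": {"Enabled": false, "Quantity": 0}
+    }
+}'
+{{< / command >}}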
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for CloudFront, which allows you to view and manage your CloudFront distributions.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **CloudFront** under the **Analytics** section.
+
+
+
+
+
+
+
+- **Create Log Group**: Create a new log group by specifying the `Log Group Name`, `KMS Key ID`, and `Tags`.
+- **Put metric**: Create a new metric by specifying the `Namespace` and `Metric Data`.
+- **Put Alarm**: Create a new alarm by specifying the `Alarm Name`, `Alarm Description`, `Actions Enabled`, `Metric Name`, `Namespace`, `Statistic`, `Comparison Operator`, `Threshold`, `Evaluation Periods`, `Period`, `Unit`, `Treat Missing Data`, `Tags`, and `Alarm Actions`.
+- **Check the Resources**: View and manage existing log groups, metrics, and alarms and perform actions such as `Delete`, `View`, and `Edit`.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use CloudWatch in LocalStack for various use cases:
+
+- [Creating CloudWatch metric alarms](https://github.com/localstack/localstack-pro-samples/tree/master/cloudwatch-metrics-aws) to demonstrate a simple example of creating a CloudWatch metric alarm based on the metrics of a failing Lambda function.
+- [Event-driven architecture with Amazon SNS FIFO, DynamoDB, Lambda, and S3](https://github.com/localstack/event-driven-architecture-with-amazon-sns-fifo) to deploy a recruiting agency application with a job listings website and view the CloudWatch logs.
diff --git a/src/content/docs/aws/services/cloudwatchlogs.md b/src/content/docs/aws/services/cloudwatchlogs.md
new file mode 100644
index 00000000..bc3c23c2
--- /dev/null
+++ b/src/content/docs/aws/services/cloudwatchlogs.md
@@ -0,0 +1,149 @@
+---
+title: "CloudWatch Logs"
+linkTitle: "CloudWatch Logs"
+description: Get started with AWS CloudWatch Logs on LocalStack
+tags: ["Free"]
+persistence: supported
+---
+
+[CloudWatch Logs](https://docs.aws.amazon.com/cloudwatch/index.html) allows you to store and retrieve logs.
+While some services automatically create and write logs (e.g. Lambda), logs can also be added manually.
+CloudWatch Logs is available in the Community version.
+However, some specific features are only available in Pro.
+
+## Subscription Filters
+
+Subscription filters can be used to forward logs to certain services, e.g. Kinesis, Lambda, and Kinesis Data Firehose.
+You can read up on the details in the [official AWS docs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SubscriptionFilters.html).
+
+### Subscription Filters with Kinesis Example
+
+In the following example, we set up a subscription filter that forwards logs to Kinesis.
+
+First, we set up the required resources: a Kinesis stream, a log group, and a log stream.
+Then we can configure the subscription filter.
+{{< command >}}
+$ awslocal kinesis create-stream --stream-name "logtest" --shard-count 1
+$ kinesis_arn=$(awslocal kinesis describe-stream --stream-name "logtest" | jq -r .StreamDescription.StreamARN)
+$ awslocal logs create-log-group --log-group-name test
+
+$ awslocal logs create-log-stream \
+ --log-group-name test \
+ --log-stream-name test
+
+$ awslocal logs put-subscription-filter \
+ --log-group-name "test" \
+ --filter-name "kinesis_test" \
+ --filter-pattern "" \
+ --destination-arn $kinesis_arn \
+ --role-arn "arn:aws:iam::000000000000:role/kinesis_role"
+{{< / command >}}
+
+Next, we can add a log event that will be forwarded to Kinesis.
+{{< command >}}
+$ timestamp=$(($(date +'%s * 1000 + %-N / 1000000')))
+$ awslocal logs put-log-events --log-group-name test --log-stream-name test --log-events "[{\"timestamp\": ${timestamp} , \"message\": \"hello from cloudwatch\"}]"
+{{< / command >}}
+
+Now we can retrieve the data.
+In our example, there will only be one record.
+The data record is base64 encoded and compressed in gzip format:
+{{< command >}}
+$ shard_iterator=$(awslocal kinesis get-shard-iterator --stream-name logtest --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON | jq -r .ShardIterator)
+$ record=$(awslocal kinesis get-records --limit 10 --shard-iterator $shard_iterator | jq -r '.Records[0].Data')
+$ echo $record | base64 -d | zcat
+{{< / command >}}
+
+## Filter Pattern (Pro only)
+
+[Filter patterns](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html) can be used to select certain logs only.
+
+LocalStack currently supports simple JSON property filters.
+
+### Metric Filter Example
+
+Metric filters can be used to automatically create CloudWatch metrics.
+
+In the following example, we create a metric filter that matches logs containing the key-value pair `"foo": "bar"`.
+{{< command >}}
+$ awslocal logs create-log-group --log-group-name test-filter
+
+$ awslocal logs create-log-stream \
+ --log-group-name test-filter \
+ --log-stream-name test-filter-stream
+
+$ awslocal logs put-metric-filter \
+ --log-group-name test-filter \
+ --filter-name my-filter \
+ --filter-pattern "{$.foo = \"bar\"}" \
+ --metric-transformations \
+ metricName=MyMetric,metricNamespace=MyNamespace,metricValue=1,defaultValue=0
+{{< / command >}}
+
+Next, we can insert some values:
+{{< command >}}
+$ timestamp=$(($(date +'%s * 1000 + %-N / 1000000')))
+$ awslocal logs put-log-events --log-group-name test-filter \
+ --log-stream-name test-filter-stream \
+ --log-events \
+ timestamp=$timestamp,message='"{\"foo\":\"bar\", \"hello\": \"world\"}"' \
+ timestamp=$timestamp,message="my test event" \
+ timestamp=$timestamp,message='"{\"foo\":\"nomatch\"}"'
+{{< / command >}}
+
+Now we can check that the metric was indeed created:
+{{< command >}}
+$ end=$(date +%s)
+$ awslocal cloudwatch get-metric-statistics --namespace MyNamespace \
+    --metric-name MyMetric --statistics Sum --period 3600 \
+    --start-time 1659621274 --end-time $end
+{{< / command >}}
+
+### Filter Log Events
+
+Similarly, you can use filter-pattern to filter logs with different kinds of patterns as described by [AWS](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html).
+
+#### JSON Filter Pattern
+
+For purely JSON structured log messages, you can use JSON filter patterns to traverse the JSON object.
+Enclose your pattern in curly braces, like this:
+{{< command >}}
+$ awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "{$.foo = \"bar\"}"
+{{< / command >}}
+
+This returns all events whose top level "foo" key has the "bar" value.
+
+#### Regular Expression Filter Pattern
+
+You can use a simplified regex syntax for regular expression matching.
+Enclose your pattern in percentage signs like this:
+{{< command >}}
+$ awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "%[fF]oo%"
+{{< / command >}}
+
+This returns all events containing "Foo" or "foo".
+For a complete set of the supported syntax, check [the official AWS documentation](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/FilterAndPatternSyntax.html#regex-expressions).
+
+#### Unstructured Filter Pattern
+
+If not specified otherwise in the pattern, we look for a match in the whole event message:
+{{< command >}}
+$ awslocal logs filter-log-events --log-group-name test-filter --filter-pattern "foo"
+{{< / command >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for exploring CloudWatch Logs.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **CloudWatch Logs** under the **Management/Governance** section.
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Repository**: Create a new CodeCommit repository by specifying the repository name and description, along with optional tags and KMS key ID.
+- **View Repository**: View the details of a CodeCommit repository, including the repository name, description, ARN, and clone URLs.
+- **Delete Repository**: Delete a CodeCommit repository by selecting the repository from the list and clicking the **Actions** dropdown menu followed by **Delete**.
+
+## Examples
+
+You can find a sample application illustrating the usage of the CodeCommit APIs locally in the [LocalStack Pro Samples](https://github.com/localstack/localstack-pro-samples/tree/master/codecommit-git-repo).
diff --git a/src/content/docs/aws/services/codedeploy.md b/src/content/docs/aws/services/codedeploy.md
new file mode 100644
index 00000000..aa26ce03
--- /dev/null
+++ b/src/content/docs/aws/services/codedeploy.md
@@ -0,0 +1,251 @@
+---
+title: CodeDeploy
+linkTitle: CodeDeploy
+description: >
+ Get started with CodeDeploy on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+CodeDeploy is a service that automates application deployments.
+On AWS, it supports deployments to Amazon EC2 instances, on-premises instances, serverless Lambda functions, or Amazon ECS services.
+Furthermore, depending on the target, it is possible to use either an in-place deployment or a blue/green deployment.
+
+LocalStack supports mocking of CodeDeploy API operations.
+The supported operations are listed on the [API coverage page]({{< ref "coverage_codedeploy" >}}).
+
+## Getting Started
+
+This guide will walk through the process of creating CodeDeploy applications, deployment configuration, deployment groups, and deployments.
+
+Basic knowledge of the AWS CLI and the [`awslocal`](https://github.com/localstack/awscli-local) wrapper is expected.
+
+Start LocalStack using your preferred method.
+
+### Applications
+
+An application is a CodeDeploy construct that uniquely identifies your targeted application.
+Create an application with the [CreateApplication](https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_CreateApplication.html) operation:
+
+{{< command >}}
+$ awslocal deploy create-application --application-name hello --compute-platform Server
+{{< / command >}}
+For instance, the redirect URL might look like `http://example.com?code=test123`.
+
+To obtain a token, you need to submit the received code using `grant_type=authorization_code` to LocalStack's implementation of the Cognito OAuth2 TOKEN Endpoint, which is documented [on the AWS Cognito Token endpoint page](https://docs.aws.amazon.com/cognito/latest/developerguide/token-endpoint.html).
+
+Note that the value of the `redirect_uri` parameter in your token request must match the value provided during the login process.
+Ensuring this match is crucial for the proper functioning of the authentication flow.
+
+```sh
+$ curl \
+ --data-urlencode 'grant_type=authorization_code' \
+ --data-urlencode 'redirect_uri=http://example.com' \
+ --data-urlencode "client_id=${client_id}" \
+ --data-urlencode 'code=test123' \
+ 'http://localhost:4566/_aws/cognito-idp/oauth2/token'
+{"access_token": "eyJ0eXAi…lKaHx44Q", "expires_in": 86400, "token_type": "Bearer", "refresh_token": "e3f08304", "id_token": "eyJ0eXAi…ADTXv5mA"}
+```
+
+### Client credentials grant
+
+The client credentials grant is designed for machine-to-machine (M2M) communication.
+It allows the machine (client) to authenticate itself directly with the authorization server using its own credentials, such as a client ID and client secret, and enables scope-based authorization from a non-interactive system to an API.
+Your app can directly request client credentials from the token endpoint to receive an access token.
+
+To request the token from LocalStack, use the following endpoint: `http://cognito-idp.localhost.localstack.cloud:4566/_aws/cognito-idp/oauth2/token`.
+For additional information on our endpoints, refer to our [Internal Endpoints]({{< ref "/references/internal-endpoints" >}}) documentation.
+
+If there are multiple user pools, LocalStack identifies the appropriate one by examining the `client_id` of the request.
+
+To get started, follow the example below:
+
+```sh
+# Create a user pool client for an existing user pool.
+export client_id=$(awslocal cognito-idp create-user-pool-client --user-pool-id $pool_id --client-name test-client --generate-secret | jq -rc ".UserPoolClient.ClientId")
+
+# Retrieve the client secret.
+export client_secret=$(awslocal cognito-idp describe-user-pool-client --user-pool-id $pool_id --client-id $client_id | jq -r '.UserPoolClient.ClientSecret')
+
+# Create a resource server with a custom scope.
+awslocal cognito-idp create-resource-server \
+ --user-pool-id $pool_id \
+ --identifier "api-client-organizations" \
+ --name "Resource Server Name" \
+ --scopes '[{"ScopeName":"read","ScopeDescription":"Read access to Organizations"}]'
+
+```
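+
+Before wiring this into an application, you can verify the flow with a plain `curl` call (a sketch; the scope assumes the resource server created above, and the client credentials are passed via HTTP Basic auth):
+
+```sh
+curl --user "$client_id:$client_secret" \
+  --data-urlencode 'grant_type=client_credentials' \
+  --data-urlencode 'scope=api-client-organizations/read' \
+  'http://cognito-idp.localhost.localstack.cloud:4566/_aws/cognito-idp/oauth2/token'
+```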
+
+You can retrieve the token from your application using the specified endpoint: `http://cognito-idp.localhost.localstack.cloud:4566/_aws/cognito-idp/oauth2/token`.
+
+```javascript
+require('dotenv').config();
+const axios = require('axios');
+
+async function getAccessTokenWithSecret() {
+ const clientId = process.env.client_id;
+ const clientSecret = process.env.client_secret;
+ const scope = 'api-client-organizations/read';
+ const url = 'http://cognito-idp.localhost.localstack.cloud:4566/_aws/cognito-idp/oauth2/token';
+
+ const authHeader = Buffer.from(`${clientId}:${clientSecret}`).toString('base64');
+
+ const headers = {
+ 'Content-Type': 'application/x-www-form-urlencoded',
+ 'Authorization': `Basic ${authHeader}`
+ };
+
+ const payload = new URLSearchParams({
+ grant_type: 'client_credentials',
+ client_id: clientId,
+ scope: scope
+ });
+
+ try {
+ const response = await axios.post(url, payload, { headers });
+ console.log(response.data);
+ } catch (error) {
+ console.error('Error fetching access token:', error.response ? error.response.data : error.message);
+ }
+}
+
+getAccessTokenWithSecret();
+```
+
+## Serverless and Cognito
+
+Furthermore, you have the option to combine Cognito and LocalStack seamlessly with the [Serverless framework](https://www.serverless.com/).
+
+For instance, consider this snippet from a `serverless.yml` configuration:
+
+```yaml
+service: test
+
+plugins:
+ - serverless-deployment-bucket
+ - serverless-pseudo-parameters
+ - serverless-localstack
+
+custom:
+ localstack:
+ stages: [local]
+
+functions:
+ http_request:
+ handler: http.request
+ events:
+ - http:
+ path: v1/request
+ authorizer:
+ arn: arn:aws:cognito-idp:us-east-1:#{AWS::AccountId}:userpool/ExampleUserPool
+
+resources:
+ Resources:
+ UserPool:
+ Type: AWS::Cognito::UserPool
+ Properties:
+ ...
+```
+
+After configuring the Serverless setup, you can deploy it using `serverless deploy --stage local`.
+The provided example includes a Lambda function called `http_request` that's linked to an API Gateway endpoint.
+
+Once deployed, the `v1/request` API Gateway endpoint will be protected by the Cognito user pool named `ExampleUserPool`.
+As a result, you can register users against the local pool using the same API calls as you would with AWS.
+
+To send requests to the secured API Gateway endpoint, you need to fetch identity credentials from the local Cognito API.
+These credentials can then be included in the `Authorization` HTTP header (where `test-1234567` represents the access key ID generated by Cognito):
+
+```bash
+Authorization: AWS4-HMAC-SHA256 Credential=test-1234567/20190821/us-east-1/cognito-idp/aws4_request ...
+```
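+
+For instance, a minimal local sign-up and sign-in flow might look like the following sketch; `$pool_id` and `$client_id` are assumed to reference the deployed user pool and an app client with the `USER_PASSWORD_AUTH` flow enabled:
+
+```bash
+# Register and confirm a user in the local pool
+awslocal cognito-idp sign-up --client-id $client_id \
+  --username alice --password 'TempPass123!'
+awslocal cognito-idp admin-confirm-sign-up \
+  --user-pool-id $pool_id --username alice
+
+# Authenticate to obtain ID and access tokens
+awslocal cognito-idp initiate-auth --client-id $client_id \
+  --auth-flow USER_PASSWORD_AUTH \
+  --auth-parameters USERNAME=alice,PASSWORD='TempPass123!'
+```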
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing Cognito User Pools, and more.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Cognito** under the **Security Identity Compliance** section.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create User Pool**: Create a new Cognito User Pool, by specifying the pool name, policies, and other settings.
+- **View User Pools**: View a list of all existing Cognito User Pools, including their **Details**, **Groups**, and **Users**.
+- **Edit User Pool**: Edit an existing Cognito User Pool, by adding additional configurations, policies, and more.
+- **Create Group**: Add a new Group to an existing Cognito User Pool, by specifying the group name, description, Role Arn, and Precedence.
+- **Create User**: Add a new User to an existing Cognito User Pool, by specifying the user name, user attributes, and more.
+- **Remove Selected**: Remove the selected User Pool, Group, or User from the list of existing Cognito resources.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Cognito in LocalStack for various use cases:
+
+- [Running Cognito authentication and user pools locally](https://github.com/localstack/localstack-pro-samples/tree/master/cognito-jwt)
+- [Serverless Container-based APIs with ECS & API Gateway](https://github.com/localstack/serverless-api-ecs-apigateway-sample)
+- [Step-up Authentication using Cognito](https://github.com/localstack/step-up-auth-sample)
+
+## Current Limitations
+
+By default, LocalStack's Cognito does not send actual email messages.
+However, if you wish to enable this feature, you will need to provide an email address and configure the corresponding SMTP settings.
+Instructions for configuring the connection parameters of your SMTP server, which allow your local Cognito environment to send email notifications, can be found in the [Configuration]({{< ref "configuration#emails" >}}) guide.
diff --git a/src/content/docs/aws/services/config.md b/src/content/docs/aws/services/config.md
new file mode 100644
index 00000000..9c28d46d
--- /dev/null
+++ b/src/content/docs/aws/services/config.md
@@ -0,0 +1,106 @@
+---
+title: "Config"
+linkTitle: "Config"
+description: Get started with Config on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+AWS Config is a service provided by Amazon Web Services (AWS) that enables you to assess, audit, and manage the configuration state of your AWS resources.
+Config provides a comprehensive view of the resource configuration across your AWS environment, helping you ensure compliance with security policies, track changes, and troubleshoot operational issues.
+Config continuously records configuration changes and allows you to retain a historical record of these changes.
+
+LocalStack allows you to use the Config APIs in your local environment to assess resource configurations and be notified of any non-compliant items, helping you mitigate potential security risks.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_config" >}}), which provides information on the extent of Config's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Config and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to specify the resource types you want Config to record and grant it the necessary permissions to access an S3 bucket and SNS topic with the AWS CLI.
+
+### Create an S3 bucket and SNS topic
+
+The S3 bucket will be used to receive configuration snapshots on request, as well as the configuration history.
+The SNS topic will be used to notify you when a configuration snapshot is available.
+You can create a new S3 bucket and SNS topic using the AWS CLI:
+
+{{< command >}}
+$ awslocal s3 mb s3://config-test
+$ awslocal sns create-topic --name config-test-topic
+{{< /command >}}
+
+### Create a new configuration recorder
+
+You can now create a new configuration recorder to record configuration changes for specified resource types, using the [`PutConfigurationRecorder`](https://docs.aws.amazon.com/config/latest/APIReference/API_PutConfigurationRecorder.html) API.
+Run the following command to create a new configuration recorder:
+
+{{< command >}}
+$ awslocal configservice put-configuration-recorder \
+ --configuration-recorder name=default,roleARN=arn:aws:iam::000000000000:role/config-role
+{{< /command >}}
+
+We have specified the `roleARN` parameter to grant the configuration recorder the necessary permissions to access the S3 bucket and SNS topic.
+In LocalStack, IAM roles are not enforced, so you can specify any role ARN you like.
+The `name` parameter has been set to `default`, and you can optionally provide a `recordingGroup` parameter to select the resource types you want to record, as shown below.
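+
+For example, to record only S3 buckets and SNS topics, you could pass the recording group as JSON (a sketch; the resource-type values are illustrative):
+
+{{< command >}}
+$ awslocal configservice put-configuration-recorder \
+    --configuration-recorder name=default,roleARN=arn:aws:iam::000000000000:role/config-role \
+    --recording-group '{"allSupported": false, "includeGlobalResourceTypes": false, "resourceTypes": ["AWS::S3::Bucket", "AWS::SNS::Topic"]}'
+{{< /command >}}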
+
+### Create a delivery channel
+
+You can now create a delivery channel object to deliver configuration information to an S3 bucket and an SNS topic.
+You have already created the S3 bucket and SNS topic, so you can now create the delivery channel object using the [`PutDeliveryChannel`](https://docs.aws.amazon.com/config/latest/APIReference/API_PutDeliveryChannel.html) API.
+
+We're going to create a delivery channel with the following configuration.
+You can inline the JSON into the `awslocal` command.
+
+```json
+{
+  "name": "default",
+  "s3BucketName": "config-test",
+  "snsTopicARN": "arn:aws:sns:us-east-1:000000000000:config-test-topic",
+  "configSnapshotDeliveryProperties": {
+    "deliveryFrequency": "Twelve_Hours"
+  }
+}
+```
+
+Run the following command to create the delivery channel:
+
+{{< command >}}
+$ awslocal configservice put-delivery-channel \
+    --delivery-channel '{
+        "name": "default",
+        "s3BucketName": "config-test",
+        "snsTopicARN": "arn:aws:sns:us-east-1:000000000000:config-test-topic",
+        "configSnapshotDeliveryProperties": {
+            "deliveryFrequency": "Twelve_Hours"
+        }
+    }'
+{{< /command >}}
+
+### Start the configuration recorder
+
+You can now start recording configurations of the local AWS resources you have selected to record in your running LocalStack container.
+You can use the [`StartConfigurationRecorder`](https://docs.aws.amazon.com/config/latest/APIReference/API_StartConfigurationRecorder.html) API to start the configuration recorder.
+Run the following command to start the configuration recorder:
+
+{{< command >}}
+$ awslocal configservice start-configuration-recorder \
+ --configuration-recorder-name default
+{{< /command >}}
+
+You can list the delivery channels and configuration recorders using the [`DescribeDeliveryChannels`](https://docs.aws.amazon.com/config/latest/APIReference/API_DescribeDeliveryChannels.html) and [`DescribeConfigurationRecorderStatus`](https://docs.aws.amazon.com/config/latest/APIReference/API_DescribeConfigurationRecorderStatus.html) APIs respectively.
+
+{{< command >}}
+$ awslocal configservice describe-delivery-channels
+$ awslocal configservice describe-configuration-recorder-status
+{{< /command >}}
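+
+You can also request an on-demand snapshot delivery via the [`DeliverConfigSnapshot`](https://docs.aws.amazon.com/config/latest/APIReference/API_DeliverConfigSnapshot.html) API (a sketch; since Config recording is mocked in LocalStack, the call may return a snapshot ID without delivering an actual snapshot object to the bucket):
+
+{{< command >}}
+$ awslocal configservice deliver-config-snapshot --delivery-channel-name default
+{{< /command >}}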
+
+## Current Limitations
+
+AWS Config is currently mocked in LocalStack.
+You can create, read, update, and delete AWS Config resources (like delivery channels or configuration recorders), but LocalStack will currently not record any configuration changes to service resources.
+If you need this feature, please consider opening a [feature request on GitHub](https://github.com/localstack/localstack/issues/new).
diff --git a/src/content/docs/aws/services/cost-explorer.md b/src/content/docs/aws/services/cost-explorer.md
new file mode 100644
index 00000000..00f5cf75
--- /dev/null
+++ b/src/content/docs/aws/services/cost-explorer.md
@@ -0,0 +1,177 @@
+---
+title: "Cost Explorer"
+linkTitle: "Cost Explorer"
+description: >
+ Get started with Cost Explorer on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Cost Explorer is a service provided by Amazon Web Services (AWS) that enables you to visualize, analyze, and manage your AWS spending and usage.
+Cost Explorer offers options to filter and group data by dimensions such as service, region, instance type, and more.
+With Cost Explorer, you can forecast costs, track budget progress, and set up alerts to receive notifications when spending exceeds predefined thresholds.
+
+LocalStack allows you to use the Cost Explorer APIs in your local environment to create and manage cost category definitions, cost anomaly monitors, and subscriptions.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_ce" >}}), which provides information on the extent of Cost Explorer's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Cost Explorer and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to mock the Cost Explorer APIs with the AWS CLI.
+
+### Create a Cost Category definition
+
+You can create a Cost Category definition using the [`CreateCostCategoryDefinition`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_CreateCostCategoryDefinition.html) API.
+The following example creates a Cost Category definition using an empty rule condition of type "REGULAR":
+
+{{< command >}}
+$ awslocal ce create-cost-category-definition --name test \
+ --rule-version "CostCategoryExpression.v1" --rules '[{"Value": "test", "Rule": {}, "Type": "REGULAR"}]'
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "CostCategoryArn": "arn:aws:ce::000000000000:costcategory/test"
+}
+```
+
+You can describe the Cost Category definition using the [`DescribeCostCategoryDefinition`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_DescribeCostCategoryDefinition.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal ce describe-cost-category-definition \
+ --cost-category-arn arn:aws:ce::000000000000:costcategory/test
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "CostCategory": {
+ "CostCategoryArn": "arn:aws:ce::000000000000:costcategory/test",
+ "Name": "test",
+ "RuleVersion": "CostCategoryExpression.v1",
+ "Rules": [
+ {
+ "Value": "test",
+ "Rule": {},
+ "Type": "REGULAR"
+ }
+ ]
+ }
+}
+```
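+
+To clean up, you can delete the definition using the [`DeleteCostCategoryDefinition`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_DeleteCostCategoryDefinition.html) API, passing the ARN returned above:
+
+{{< command >}}
+$ awslocal ce delete-cost-category-definition \
+    --cost-category-arn arn:aws:ce::000000000000:costcategory/test
+{{< /command >}}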
+
+### Create a cost anomaly subscription
+
+You can add an alert subscription to a cost anomaly detection monitor to define subscribers using the [`CreateAnomalySubscription`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_CreateAnomalySubscription.html) API.
+The following example creates a cost anomaly subscription:
+
+{{< command >}}
+$ awslocal ce create-anomaly-subscription --anomaly-subscription '{
+ "AccountId": "12345",
+ "SubscriptionName": "sub1",
+ "Frequency": "DAILY",
+ "MonitorArnList": [],
+ "Subscribers": [],
+ "Threshold": 111
+}'
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "SubscriptionArn": "arn:aws:ce::000000000000:anomalysubscription/70644961"
+}
+```
+
+You can retrieve the cost anomaly subscriptions using the [`GetAnomalySubscriptions`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_GetAnomalySubscriptions.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal ce get-anomaly-subscriptions
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "AnomalySubscriptions": [
+ {
+ "SubscriptionArn": "arn:aws:ce::000000000000:anomalysubscription/70644961",
+ "AccountId": "12345",
+ "MonitorArnList": [],
+ "Subscribers": [],
+ "Threshold": 111.0,
+ "Frequency": "DAILY",
+ "SubscriptionName": "sub1"
+ }
+ ]
+}
+```
+
+### Create a cost anomaly monitor
+
+You can create a new cost anomaly detection monitor with the requested type and monitor specification using the [`CreateAnomalyMonitor`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_CreateAnomalyMonitor.html) API.
+The following example creates a cost anomaly monitor:
+
+{{< command >}}
+$ awslocal ce create-anomaly-monitor --anomaly-monitor '{
+ "MonitorName": "mon5463",
+ "MonitorType": "DIMENSIONAL"
+}'
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "MonitorArn": "arn:aws:ce::000000000000:anomalymonitor/22570ff3"
+}
+```
+
+You can retrieve the cost anomaly monitors using the [`GetAnomalyMonitors`](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_GetAnomalyMonitors.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal ce get-anomaly-monitors
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "AnomalyMonitors": [
+ {
+ "MonitorArn": "arn:aws:ce::000000000000:anomalymonitor/22570ff3",
+ "MonitorName": "mon5463",
+ "MonitorType": "DIMENSIONAL"
+ }
+ ]
+}
+```
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing cost category definitions for the Cost Explorer service.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the Resources section, and then clicking on **Cost Explorer** under the **Cloud Financial Management** section.
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Cost Category definition**: Create a new Cost Category definition by clicking on the **Create** button and providing the required details.
+- **View Cost Category definition**: View the details of a Cost Category definition by clicking on it.
+- **Delete Cost Category definition**: Delete a Cost Category definition by selecting it and clicking on the **Actions** button followed by **Remove Selected**.
+
+## Current Limitations
+
+LocalStack's Cost Explorer implementation cannot programmatically query your cost and usage data, or provide aggregated data such as total monthly costs or total daily usage.
+However, you can use the integrations to mock the Cost Explorer APIs and test your workflow locally.
diff --git a/src/content/docs/aws/services/dms.md b/src/content/docs/aws/services/dms.md
new file mode 100644
index 00000000..b2e26fe9
--- /dev/null
+++ b/src/content/docs/aws/services/dms.md
@@ -0,0 +1,213 @@
+---
+title: "Database Migration Service (DMS)"
+linkTitle: "Database Migration Service (DMS)"
+description: Get started with Database Migration Service (DMS) on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+AWS Database Migration Service provides a migration solution for databases, data warehouses, and other types of data stores (e.g. S3, SAP).
+The migration can be homogeneous (source and target have the same type), but is often heterogeneous, as it supports migration from various sources to various targets (self-hosted and AWS services).
+
+LocalStack only supports selected use cases for DMS at the moment.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_dms" >}}), which provides information on the extent of DMS integration with LocalStack.
+
+{{< callout "note">}}
+DMS is in a preview state, supporting only [selected use cases](#supported-use-cases).
+You need to set the environment variable `ENABLE_DMS=1` to activate it.
+{{< /callout >}}
+
+## Getting started
+
+You can run a DMS sample showcasing a MariaDB source and a Kinesis target from our [GitHub repository](https://github.com/localstack-samples/sample-dms-kinesis-rds-mariadb/).
+
+* The sample uses CDK to set up the infrastructure.
+* It sets up two databases: an external MariaDB (running in a Docker container) and an RDS MariaDB.
+* It creates two `cdc` replication tasks, with different table mappings, that run against the RDS database,
+* and two `full-load` replication tasks, with different table mappings, that run against the external (containerized) MariaDB.
+
+To follow the sample, simply clone the repository:
+
+```sh
+git clone https://github.com/localstack-samples/sample-dms-kinesis-rds-mariadb.git
+```
+
+Next, start LocalStack (a Docker Compose file is included that sets the `ENABLE_DMS=1` flag):
+
+```sh
+export LOCALSTACK_AUTH_TOKEN= # this must be an enterprise license token
+docker-compose up
+```
+
+Now you can install the dependencies, deploy the resources, and run the tests:
+
+```sh
+# install dependencies
+make install
+# deploys cdk stack with all required resources (replication instances, tasks, endpoints)
+make deploy
+# starts the tasks
+make run
+```
+
+You will then see some log output, indicating the status of the ongoing replication:
+
+```sh
+************
+STARTING FULL LOAD FLOW
+************
+db endpoint: localhost:3306
+
+ Cleaning tables
+ Creating tables
+ Inserting data
+
+ Added the following authors
+[{'first_name': 'John', 'last_name': 'Doe'}]
+
+ Added the following accounts
+[{'account_balance': Decimal('1500.00'), 'name': 'Alice'}]
+
+ Added the following novels
+[{'author_id': 1, 'title': 'The Great Adventure'},
+ {'author_id': 1, 'title': 'Journey to the Stars'}]
+
+****Full Task 1****
+
+
+ Starting Full load task 1 a%
+Replication Task arn:aws:dms:us-east-1:000000000000:task:FQWFF7YIZ4VGQHBIXCLI9FJTUUS17NSECIM0UR7 status: starting
+Waiting for task status stopped
+task='arn:aws:dms:us-east-1:000000000000:task:FQWFF7YIZ4VGQHBIXCLI9FJTUUS17NSECIM0UR7' status='starting'
+task='arn:aws:dms:us-east-1:000000000000:task:FQWFF7YIZ4VGQHBIXCLI9FJTUUS17NSECIM0UR7' status='stopped'
+
+ Kinesis events
+
+fetching Kinesis event
+Received: 6 events
+[{'control': {},
+ 'metadata': {'operation': 'drop-table',
+ 'partition-key-type': 'task-id',
+ 'partition-key-value': 'FQWFF7YIZ4VGQHBIXCLI9FJTUUS17NSECIM0UR7',
+ 'record-type': 'control',
+ 'schema-name': 'dms_sample',
+ 'table-name': 'accounts',
+ 'timestamp': '2024-05-23T19:17:33.126Z'},
+ 'partition_key': 'FQWFF7YIZ4VGQHBIXCLI9FJTUUS17NSECIM0UR7.dms_sample.accounts'},
+ {'control': {},
+ 'metadata': {'operation': 'drop-table',
+ 'partition-key-type': 'task-id',
+ 'partition-key-value': 'FQWFF7YIZ4VGQHBIXCLI9FJTUUS17NSECIM0UR7',
+ 'record-type': 'control',
+ 'schema-name': 'dms_sample',
+ 'table-name': 'authors',
+ 'timestamp': '2024-05-23T19:17:33.128Z'},
+...
+...
+...
+```
+
+## Supported Use Cases
+
+DMS is in a preview state on LocalStack and only supports some selected use cases:
+
+| Source | Target | Migration Types | Serverless Support |
+| - | - | - | - |
+| MariaDB (external) | Kinesis | full-load, cdc | Yes |
+| MySQL (external) | Kinesis | full-load, cdc | Yes |
+| RDS MariaDB | Kinesis | full-load, cdc | Yes |
+| RDS MySQL | Kinesis | full-load, cdc | Yes |
+| S3 | Kinesis | full-load, cdc | Not supported by AWS |
+| Aurora PostgreSQL | Kinesis | full-load, cdc | No |
+| RDS PostgreSQL | Kinesis | full-load, cdc | No |
+| PostgreSQL (external) | Kinesis | full-load, cdc | No |
+
+## Serverless
+
+[DMS Serverless](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Serverless.html) can be used in LocalStack for the above-mentioned supported use cases that are [officially supported by AWS](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Serverless.Components.html#CHAP_Serverless.SupportedVersions).
+
+To simulate the different states that the replication config goes through when provisioning, you can set the environment variable `DMS_SERVERLESS_STATUS_CHANGE_WAITING_TIME`, which causes each state change to wait the configured number of seconds.
+
+The waiting time is applied for every status change before the replication actually reaches the `running` state.
+See also the [official docs for an explanation of the different states](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Serverless.Components.html).
+
+Be aware that on AWS the replication table statistics are deleted automatically once the replication has finished and the replication configuration has been deprovisioned.
+
+For parity reasons, this is also true in LocalStack.
+To delay the deprovisioning, you can use the environment variable `DMS_SERVERLESS_DEPROVISIONING_DELAY`, which defaults to 60 seconds.
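+
+For example, to slow down the simulated state transitions and postpone deprovisioning, you might start LocalStack with both variables set (a sketch; the values are in seconds):
+
+```sh
+ENABLE_DMS=1 \
+DMS_SERVERLESS_STATUS_CHANGE_WAITING_TIME=5 \
+DMS_SERVERLESS_DEPROVISIONING_DELAY=120 \
+localstack start
+```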
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing:
+
+* [Replication Instances](https://app.localstack.cloud/inst/default/resources/dms/replication-instances)
+* [Endpoints](https://app.localstack.cloud/inst/default/resources/dms/endpoints)
+* [Replication Tasks](https://app.localstack.cloud/inst/default/resources/dms/replication-tasks)
+
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Database Migration Service** under the **Migration and transfer** section.
+
+
+
+
+The Resource Browser supports CRD (Create, Read, Delete) operations on DMS resources.
+
+### Replication Instances
+
+* **Create Replication Instance**: To create a new replication instance, click the **Create Replication Instance** button and enter details such as the Replication Instance Identifier and Replication Instance class.
+* **View Replication Instance**: To view details of a replication instance, click on its ARN.
+* **Delete Replication Instance**: To delete a replication instance, select it, go to **Actions**, and choose **Remove Selected**.
+
+### Endpoints
+
+* **Create Endpoint**: To create a new endpoint, click on the **Create Endpoint** button and fill in necessary details such as the Endpoint Identifier, Endpoint Type, and Engine Name.
+* **View Endpoint**: To see the details of an endpoint, click on its ARN.
+ You can further click **Connections** and test a connection by specifying the Replication Instance ARN.
+* **Delete Endpoint**: To remove an endpoint, select it, navigate to **Actions**, and click **Remove Selected**.
+
+### Replication Tasks
+
+* **Create Replication Task**: To create a new replication task, press the **Create Replication Task** button and specify the Task Identifier, Source Endpoint Identifier, and Target Endpoint Identifier, among other settings.
+* **View Replication Task**: To review a replication task, click on the task identifier.
+* **Delete Replication Task**: To delete a replication task, choose the task, click on **Actions**, and select **Remove Selected**.
+
+## Current Limitations
+
+For RDS MariaDB and RDS MySQL, it is not yet possible to set custom DB parameters.
+To make those databases work with `cdc` migration in DMS, the following default DB parameters are changed on startup when the `ENABLE_DMS=1` flag is set:
+
+```sh
+binlog_checksum=NONE
+binlog_row_image=FULL
+binlog_format=ROW
+server_id=1
+log_bin=mysqld-bin
+```
+
+For S3 as a source, only the first 1000 files of a table in a bucket are considered for migration.
+
+For PostgreSQL as a source, the `ReplicationTaskSettings.BeforeImageSettings` parameter is not supported.
+
+### Enum Values for CDC data events
+
+To support Enum values for CDC data events, you need to enable the database setting `BINLOG_ROW_METADATA=FULL`.
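+
+For an external (self-managed) MariaDB or MySQL source, you could set this at runtime via the `mysql` client (a sketch; host, port, and credentials are placeholders for your source database):
+
+```sh
+mysql -h 127.0.0.1 -P 3306 -u root -p \
+  -e "SET GLOBAL binlog_row_metadata = 'FULL';"
+```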
+
+### Migration Type
+
+A replication task on LocalStack currently supports only `full-load` (migrate existing data) or `cdc` (replicate data changes only).
+On AWS there is also a combination of the two, which is not yet implemented in LocalStack.
+
+### ReplicationTaskSettings
+
+The `ReplicationTaskSettings` for a [replication task](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.html) only considers `BeforeImageSettings`, `FullLoadSettings.CommitRate`, and `FullLoadSettings.TargetTablePrepMode`.
+
+### Other Limitations
+
+* [Data Validation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Validating.html#CHAP_Validating.TaskStatistics) is not supported
+* [Reload](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.ReloadTables.html) of tables is not supported
+* [Task Logs](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Monitoring.html#CHAP_Monitoring.ManagingLogs), specifically CloudWatch, and CloudTrail are not supported (table statistics are supported)
+* [Time Travel](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.TimeTravel.html) is not supported
+* [Target Metadata Settings](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TaskSettings.TargetMetadata.html): `ParallelLoadThreads` is not supported
+* [Transformation](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Tasks.CustomizingTasks.TableMapping.SelectionTransformation.Transformations.html): `"rule-type": "transformation"` is not supported
+* [AWS DMS Schema Conversion Tool](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_SchemaConversion.html) is not supported
+* [AWS DMS Fleet Advisor](https://docs.aws.amazon.com/dms/latest/userguide/CHAP_FleetAdvisor.html) is not supported
diff --git a/src/content/docs/aws/services/docdb.md b/src/content/docs/aws/services/docdb.md
new file mode 100644
index 00000000..7f89ade5
--- /dev/null
+++ b/src/content/docs/aws/services/docdb.md
@@ -0,0 +1,422 @@
+---
+title: "DocumentDB (DocDB)"
+linkTitle: "DocumentDB (DocDB)"
+tags: ["Ultimate"]
+description: Get started with AWS DocumentDB on LocalStack
+---
+
+## Introduction
+
+DocumentDB is a fully managed, non-relational database service that supports MongoDB workloads.
+DocumentDB is compatible with MongoDB, meaning you can use the same MongoDB drivers, applications, and tools to run, manage, and scale workloads on DocumentDB without having to worry about managing the underlying infrastructure.
+
+LocalStack allows you to use the DocumentDB APIs to create and manage DocumentDB clusters and instances.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_docdb" >}}), which provides information on the extent of DocumentDB's integration with LocalStack.
+
+## Getting started
+
+To create a new DocumentDB cluster, we use the `create-db-cluster` command as follows:
+
+{{< command >}}
+$ awslocal docdb create-db-cluster --db-cluster-identifier test-docdb-cluster --engine docdb
+{{< /command >}}
+
+```json
+{
+ "DBCluster": {
+ "DBClusterIdentifier": "test-docdb-cluster",
+ "DBClusterParameterGroup": "default.docdb",
+ "Status": "available",
+ "Endpoint": "localhost.localstack.cloud",
+ "MultiAZ": false,
+ "Engine": "docdb",
+ "Port": 39045,
+ "MasterUsername": "test",
+ "DBClusterMembers": [],
+ "VpcSecurityGroups": [
+ {
+ "VpcSecurityGroupId": "sg-a30edea1f7da6ff90",
+ "Status": "active"
+ }
+ ],
+ "StorageEncrypted": false,
+ "DBClusterArn": "arn:aws:rds:us-east-1:000000000000:cluster:test-docdb-cluster"
+ }
+}
+```
+
+If we break down the previous command, we can identify:
+
+- `docdb`: The command related to Amazon DocumentDB for the `AWS CLI`.
+- `create-db-cluster`: The command to create an Amazon DocumentDB cluster.
+- `--db-cluster-identifier test-docdb-cluster`: Specifies the unique identifier for the DocumentDB cluster.
+  In this case, it is set to `test-docdb-cluster`.
+  You can customize this identifier to a name of your choice.
+- `--engine docdb`: Specifies the database engine.
+  Here, it is set to `docdb`, indicating the use of Amazon DocumentDB.
+
+Notice in the `DBClusterMembers` field of the cluster description that no instances have been created yet.
+As we did not specify a `MasterUsername` or `MasterUserPassword` when creating the cluster, MongoDB will not set any credentials when starting the Docker container.
+To create a new database, we can use the `create-db-instance` command, like in this example:
+
+{{< command >}}
+$ awslocal docdb create-db-instance --db-instance-identifier test-company \
+    --db-instance-class db.r5.large --engine docdb --db-cluster-identifier test-docdb-cluster
+{{< /command >}}
+
+```json
+{
+  "DBInstance": {
+    "DBInstanceIdentifier": "test-company",
+    "DBInstanceClass": "db.r5.large",
+    "Engine": "docdb",
+    "DBInstanceStatus": "creating",
+    "Endpoint": {
+      "Address": "localhost.localstack.cloud",
+      "Port": 50761
+    },
+    "InstanceCreateTime": "2022-10-28T04:27:35.917000+00:00",
+    "PreferredBackupWindow": "03:50-04:20",
+    "BackupRetentionPeriod": 1,
+    "VpcSecurityGroups": [],
+    "AvailabilityZone": "us-east-1a",
+    "PreferredMaintenanceWindow": "wed:06:38-wed:07:08",
+    "EngineVersion": "12.34",
+    "AutoMinorVersionUpgrade": false,
+    "PubliclyAccessible": false,
+    "StatusInfos": [],
+    "DBClusterIdentifier": "test-docdb-cluster",
+    "StorageEncrypted": false,
+    "DbiResourceId": "db-M5ENSHXFPU6XHZ4G4ZEI5QIO2U",
+    "CopyTagsToSnapshot": false,
+    "DBInstanceArn": "arn:aws:rds:us-east-1:000000000000:db:test-company",
+    "EnabledCloudwatchLogsExports": []
+  }
+}
+```
+
+Some noticeable fields:
+
+- `--db-instance-identifier test-company`: Represents the unique identifier of the newly created
+ database.
+- `--db-instance-class db.r5.large`: The type or class of the Amazon DocumentDB instance.
+  It determines the compute and memory capacity allocated to the instance. `db.r5.large` refers to a specific instance type in the R5 family.
+  Although the flag is required for database creation, LocalStack will only mock the `DBInstanceClass` attribute.
+
+  You can find out more about instance classes in the [AWS documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.DBInstanceClass.html).
+
+To obtain detailed information about the cluster, we use the `describe-db-clusters` command:
+{{< command >}}
+$ awslocal docdb describe-db-clusters --db-cluster-identifier test-docdb-cluster
+{{< /command >}}
+
+```json
+{
+ "DBClusters": [
+ {
+ "DBClusterIdentifier": "test-docdb-cluster",
+ "DBClusterParameterGroup": "default.docdb",
+ "Status": "available",
+ "Endpoint": "localhost.localstack.cloud",
+ "MultiAZ": false,
+ "Engine": "docdb",
+ "Port": 39045,
+ "MasterUsername": "test",
+ "DBClusterMembers": [
+ {
+ "DBInstanceIdentifier": "test-company",
+ "IsClusterWriter": true,
+ "DBClusterParameterGroupStatus": "in-sync",
+ "PromotionTier": 1
+ }
+ ],
+ "VpcSecurityGroups": [
+ {
+ "VpcSecurityGroupId": "sg-a30edea1f7da6ff90",
+ "Status": "active"
+ }
+ ],
+ "StorageEncrypted": false,
+ "DBClusterArn": "arn:aws:rds:us-east-1:000000000000:cluster:test-docdb-cluster"
+ }
+ ]
+}
+```
+
+### Connect to DocumentDB using mongosh
+
+Interacting with the databases is done using `mongosh`, which is an official command-line shell and
+[interactive MongoDB shell provided by MongoDB](https://www.mongodb.com/docs/mongodb-shell/).
+It is designed to provide a modern and enhanced user experience for interacting with MongoDB
+databases.
+
+{{< command >}}
+
+$ mongosh mongodb://localhost:39045
+Current Mongosh Log ID: 64a70b795697bcd4865e1b9a
+Connecting to: mongodb://localhost:39045/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1
+Using MongoDB: 6.0.7
+Using Mongosh: 1.10.1
+
+For mongosh info see: https://docs.mongodb.com/mongodb-shell/
+
+------
+
+test>
+
+{{< /command >}}
+
+This command will default to accessing the `test` database that was created with the cluster.
+Notice the port, `39045`, which is the cluster port that appears in the aforementioned description.
+
+To work with a specific database, the command is:
+
+{{< command >}}
+$ mongosh mongodb://localhost:39045/test-company
+Current Mongosh Log ID: 64a71916fae7fdeeb8b43a73
+Connecting to: mongodb://localhost:39045/test-company?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+1.10.1
+Using MongoDB: 6.0.7
+Using Mongosh: 1.10.1
+
+For mongosh info see: https://docs.mongodb.com/mongodb-shell/
+
+------
+test-company>
+
+{{< /command >}}
+
+From here on we can manipulate collections using [the JavaScript methods provided](https://www.mongodb.com/docs/manual/reference/method/) by `mongosh`:
+
+{{< command >}}
+
+test-company> db.createCollection("employees")
+{ ok: 1 }
+test-company> db.createCollection("customers")
+{ ok: 1 }
+test-company> show collections
+customers
+employees
+test-company> exit
+
+{{< /command >}}
+
+For more information on how to use MongoDB with `mongosh`, please refer to the [MongoDB documentation](https://www.mongodb.com/docs/).
+
+### Connect to DocumentDB using Node.js Lambda
+
+{{< callout >}}
+You need to set `DOCDB_PROXY_CONTAINER=1` when starting LocalStack to be able to use the returned `Endpoint`, which will be correctly resolved automatically.
+
+The flag `DOCDB_PROXY_CONTAINER=1` changes the default behavior, and the container will be started as a proxied container.
+This means a port from the [pre-defined port]({{< ref "/references/external-ports" >}}) range will be chosen, and when using Lambda, you can use `localhost.localstack.cloud` to connect to the instance.
+{{< /callout >}}
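+
+For example, using the LocalStack CLI (a sketch; any other start method works as long as the variable is set):
+
+{{< command >}}
+$ DOCDB_PROXY_CONTAINER=1 localstack start
+{{< /command >}}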
+
+In this sample we will use a Node.js Lambda function to connect to a DocumentDB cluster.
+For the MongoDB connection we will use the `mongodb` library.
+Please note that this sample is for demo purposes only; for example, we pass the credentials as environment variables to the Lambda function.
+
+In practice, you would use a secret instead.
+We included a snippet at the very end.
+
+#### Create the DocDB Cluster with a username and password
+
+We assume you have a `MasterUsername` and `MasterUserPassword` set for the DocDB cluster, e.g.:
+{{< command >}}
+$ awslocal docdb create-db-cluster --db-cluster-identifier test-docdb \
+ --engine docdb \
+ --master-user-password S3cretPwd! \
+ --master-username someuser
+{{< /command >}}
+
+#### Prepare the lambda function
+
+First, we create the zip archive required for the Lambda function, including the `mongodb` dependency.
+You will need [`npm`](https://docs.npmjs.com/) in order to install the dependencies.
+In your terminal run:
+
+{{< command >}}
+$ mkdir resources
+$ cd resources
+$ mkdir node_modules
+$ npm install mongodb@6.3.0
+{{< /command >}}
+
+Next, copy the following code into a new file named `index.js` in the `resources` folder:
+
+```javascript
+const AWS = require('aws-sdk');
+const RDS = AWS.RDS;
+const { MongoClient } = require('mongodb');
+
+const docdb_client = new RDS();
+
+const docdb_id = process.env.DOCDB_CLUSTER_ID;
+const pwd = process.env.DOCDB_SECRET;
+
+exports.handler = async (event) => {
+ try {
+ // Get endpoint details using rds/docdb client:
+ const cluster_result = await docdb_client.describeDBClusters({DBClusterIdentifier: docdb_id}).promise();
+ const cluster = cluster_result.DBClusters[0];
+ const host = cluster.Endpoint;
+ const port = cluster.Port;
+ const user = cluster.MasterUsername;
+
+ // Connection URI
+ const dbname = "mydb";
+ // retryWrites is by default true, but not supported by AWS DocumentDB
+ const uri = `mongodb://${user}:${pwd}@${host}:${port}/?retryWrites=false`;
+
+ // Connect to DocumentDB
+ const client = await MongoClient.connect(uri);
+ const db = client.db(dbname);
+
+ // Insert data
+ const collection = db.collection('your_collection');
+ await collection.insertOne({ key: 'value' });
+
+ // Query data
+ const result = await collection.findOne({ key: 'value' });
+ await client.close();
+
+ // Return result
+ return {
+ statusCode: 200,
+ body: JSON.stringify(result),
+ };
+ } catch (error) {
+ return {
+ statusCode: 500,
+ body: JSON.stringify({ error: error.message }),
+ };
+ }
+};
+```
+
+Now, you can zip the entire folder.
+Make sure you are inside the `resources` directory and run:
+{{< command >}}
+$ zip -r function.zip .
+{{< /command >}}
+
+Finally, we can create the `lambda` function using `awslocal`:
+{{< command >}}
+$ awslocal lambda create-function \
+ --function-name MyNodeLambda \
+ --runtime nodejs16.x \
+ --role arn:aws:iam::000000000000:role/lambda-role \
+ --handler index.handler \
+ --zip-file fileb://function.zip \
+ --environment Variables="{DOCDB_CLUSTER_ID=test-docdb,DOCDB_SECRET=S3cretPwd!}"
+{{< /command >}}
+
+You can invoke the lambda by calling:
+{{< command >}}
+$ awslocal lambda invoke --function-name MyNodeLambda outfile
+{{< /command >}}
+
+The `outfile` contains the returned value, e.g.:
+
+```json
+{"statusCode":200,"body":"{\"_id\":\"6560a21ca7771a02ef128c72\",\"key\":\"value\"}"}
+```
+
+#### Use Secret To Connect to DocDB
+
+The best practice for accessing databases is to use secrets.
+Secrets follow a [well-defined pattern](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_database_secret.html).
+
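+For local testing, you could create such a secret manually (a sketch; the key names follow the database-secret convention linked above, and the host and port values are placeholders for your cluster's endpoint):
+
+{{< command >}}
+$ awslocal secretsmanager create-secret --name docdb-credentials \
+    --secret-string '{"username": "someuser", "password": "S3cretPwd!", "host": "localhost.localstack.cloud", "port": 39045, "engine": "mongo"}'
+{{< /command >}}
+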
+For the Lambda function, you can pass the secret ARN as `SECRET_NAME`.
+In the Lambda function, you can then retrieve the secret details like this:
+
+{{< command >}}
+const AWS = require('aws-sdk');
+const { MongoClient } = require('mongodb');
+
+const secretsManager = new AWS.SecretsManager();
+const secretName = process.env.SECRET_NAME;
+
+function customURIEncode(str) {
+ // encode also characters that encodeURIComponent does not encode
+ return encodeURIComponent(str)
+ .replace(/!/g, '%21')
+ .replace(/~/g, '%7E')
+ .replace(/\*/g, '%2A')
+ .replace(/'/g, '%27')
+ .replace(/\(/g, '%28')
+ .replace(/\)/g, '%29');
+}
+
+exports.handler = async (event) => {
+ try {
+ // Retrieve secret
+ const secretValue = await secretsManager.getSecretValue({ SecretId: secretName }).promise();
+ const { username, password, host, port } = JSON.parse(secretValue.SecretString);
+
+ // make sure username and password are correctly encoded for the URI
+ const user = customURIEncode(username);
+ const pwd = customURIEncode(password);
+
+ // retryWrites is by default true, but not supported by AWS DocumentDB
+    const uri = `mongodb://${user}:${pwd}@${host}:${port}/?retryWrites=false`;
+
+ // Connect to DocumentDB
+ const client = await MongoClient.connect(uri);
+
+ // ... interact with the mongo-db ...
+
+ return {
+ statusCode: 200
+ };
+ } catch (error) {
+ console.error('Error: ', error);
+ return {
+ statusCode: 500,
+ body: JSON.stringify({ error: error.message }),
+ };
+ }
+};
+```
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing DocumentDB instances and clusters.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **DocumentDB** under the **Database** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Cluster**: Create a new DocumentDB cluster by specifying the DBCluster Identifier, Availability Zone, and other parameters.
+- **Create Instance**: Create a new DocumentDB instance by specifying the database class, engine, DBInstance Identifier, and other parameters.
+- **View Instance & Cluster**: View an existing DocumentDB instance or cluster by clicking the instance/cluster name.
+- **Edit Instance & Cluster**: Edit an existing DocumentDB instance or cluster by clicking the instance/cluster name and clicking the **Edit Instance** or **Edit Cluster** button.
+- **Remove Instance & Cluster**: Remove an existing DocumentDB instance or cluster by clicking the instance/cluster name, clicking the **Actions** button, and then clicking **Remove Selected**.
+
+## Current Limitations
+
+Under the hood, LocalStack starts a MongoDB server in a separate Docker container to handle DocumentDB storage, and adds a port mapping so that it can be accessed from `localhost`.
+The container is exposed on an arbitrary available port of the host machine, which means there is no pre-defined port range by default.
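+
+If you want to inspect the stored data directly, you can look up the container's mapped port and connect with any MongoDB client.
+A minimal sketch, assuming `mongosh` is installed on the host; `<user>`, `<password>`, and `<host-port>` are placeholders for your actual credentials and the port shown by `docker ps`:
+
+{{< command >}}
+$ docker ps --filter ancestor=mongo --format '{{.Names}}: {{.Ports}}'
+$ mongosh mongodb://<user>:<password>@127.0.0.1:<host-port>
+{{< /command >}}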
+
+Because LocalStack utilizes a MongoDB container to provide DocumentDB storage, LocalStack may not have exact feature parity with Amazon DocumentDB.
+The database engine may support additional features that DocumentDB does not and vice versa.
+
+DocumentDB currently uses the default configuration of the latest [MongoDB Docker image](https://hub.docker.com/_/mongo).
+When `MasterUsername` and `MasterUserPassword` are set during the creation of the DocumentDB cluster or instance, the container is started with the corresponding environment variables `MONGO_INITDB_ROOT_USERNAME` and `MONGO_INITDB_ROOT_PASSWORD`.
diff --git a/src/content/docs/aws/services/dynamodb.md b/src/content/docs/aws/services/dynamodb.md
new file mode 100644
index 00000000..82e8be99
--- /dev/null
+++ b/src/content/docs/aws/services/dynamodb.md
@@ -0,0 +1,224 @@
+---
+title: DynamoDB
+linkTitle: DynamoDB
+description: Get started with DynamoDB on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+DynamoDB is a fully managed NoSQL database service provided by AWS.
+It offers a flexible and highly scalable way to store and retrieve data, making it suitable for a wide range of applications.
+DynamoDB provides a fast and scalable key-value datastore with support for replication, automatic scaling, data encryption at rest, and on-demand backup, among other capabilities.
+
+LocalStack allows you to use the DynamoDB APIs in your local environment to manage key-value and document data models.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_dynamodb" >}}), which provides information on the extent of DynamoDB's integration with LocalStack.
+
+DynamoDB emulation is powered by [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).
+
+## Getting started
+
+This guide is designed for users new to DynamoDB and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a DynamoDB table, along with its replicas, and put an item into the table using the AWS CLI.
+
+### Create a DynamoDB table
+
+You can create a DynamoDB table using the [`CreateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) API.
+Execute the following command to create a table named `global01` with a primary key `id`:
+
+{{< command >}}
+$ awslocal dynamodb create-table \
+ --table-name global01 \
+ --key-schema AttributeName=id,KeyType=HASH \
+ --attribute-definitions AttributeName=id,AttributeType=S \
+ --billing-mode PAY_PER_REQUEST \
+ --region ap-south-1
+{{< /command >}}
+
+You should see the following output:
+
+```bash
+{
+ "TableDescription": {
+ "AttributeDefinitions": [
+ {
+ "AttributeName": "id",
+ "AttributeType": "S"
+ }
+ ],
+ "TableName": "global01",
+ "KeySchema": [
+ {
+ "AttributeName": "id",
+ "KeyType": "HASH"
+ }
+ ],
+ "TableStatus": "ACTIVE",
+ "CreationDateTime": 1693244562.147,
+ ...
+ "TableArn": "arn:aws:dynamodb:ap-south-1:000000000000:table/global01",
+ "TableId": "6bc6dd46-98d8-486a-aed8-6ef66a35df7c",
+ ...
+ }
+ }
+}
+```
+
+### Create replicas
+
+You can create replicas of a DynamoDB table using the [`UpdateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateTable.html) API.
+Execute the following command to create replicas in the `eu-central-1` and `us-west-1` regions:
+
+{{< command >}}
+$ awslocal dynamodb update-table \
+ --table-name global01 \
+ --replica-updates '[{"Create": {"RegionName": "eu-central-1"}}, {"Create": {"RegionName": "us-west-1"}}]' \
+ --region ap-south-1
+{{< /command >}}
+
+You should see the following output:
+
+```bash
+{
+ "TableDescription": {
+ "AttributeDefinitions": [
+ {
+ "AttributeName": "id",
+ "AttributeType": "S"
+ }
+ ],
+ ...
+ "Replicas": [
+ {
+ "RegionName": "eu-central-1",
+ "ReplicaStatus": "ACTIVE"
+ },
+ {
+ "RegionName": "us-west-1",
+ "ReplicaStatus": "ACTIVE"
+ }
+ ]
+ }
+}
+```
+
+You can now operate on the table in the replicated regions as well.
+You can use the [`ListTables`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListTables.html) API to list the tables in the replicated regions.
+Run the following command to list the tables in the `eu-central-1` region:
+
+{{< command >}}
+$ awslocal dynamodb list-tables \
+ --region eu-central-1
+{{< /command >}}
+
+You should see the following output:
+
+```bash
+{
+ "TableNames": [
+ "global01"
+ ]
+}
+```
+
+### Insert an item
+
+You can insert an item into a DynamoDB table using the [`PutItem`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) API.
+Execute the following command to insert an item into the `global01` table:
+
+{{< command >}}
+$ awslocal dynamodb put-item \
+ --table-name global01 \
+ --item '{"id":{"S":"foo"}}' \
+ --region eu-central-1
+{{< /command >}}
+
+You can now query the number of items in the table using the [`DescribeTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) API.
+Run the following command to query the number of items in the `global01` table from a different region:
+
+{{< command >}}
+$ awslocal dynamodb describe-table \
+ --table-name global01 \
+ --query 'Table.ItemCount' \
+ --region ap-south-1
+{{< /command >}}
+
+You should see the following output:
+
+```bash
+1
+```
+
+{{< callout >}}
+You can run DynamoDB in memory, which can greatly improve the performance of your database operations.
+However, this also means that the data cannot be persisted to disk and will be lost, even if persistence is enabled in LocalStack.
+To enable this feature, you need to set the environment variable `DYNAMODB_IN_MEMORY=1` while starting LocalStack.
+{{< /callout >}}
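+
+For example, one way to set this when using the LocalStack CLI, which forwards known configuration variables to the container:
+
+{{< command >}}
+$ DYNAMODB_IN_MEMORY=1 localstack start
+{{< /command >}}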
+
+### Time To Live
+
+LocalStack supports [Time to Live (TTL)](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TTL.html) in DynamoDB.
+To enable this feature, you need to set the environment variable `DYNAMODB_REMOVE_EXPIRED_ITEMS` to 1.
+This enables a worker running every 60 minutes that scans all the tables and deletes the expired items.
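+
+For example, you could enable TTL on the `global01` table from above and insert an item whose expiry attribute holds an epoch timestamp in the past, so that the worker picks it up as expired; the attribute name `ttl` is just a choice for this example:
+
+{{< command >}}
+$ awslocal dynamodb update-time-to-live \
+    --table-name global01 \
+    --time-to-live-specification "Enabled=true, AttributeName=ttl" \
+    --region ap-south-1
+$ awslocal dynamodb put-item \
+    --table-name global01 \
+    --item '{"id": {"S": "expiring"}, "ttl": {"N": "1600000000"}}' \
+    --region ap-south-1
+{{< /command >}}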
+
+In addition, to programmatically trigger the worker at your convenience, we provide the following endpoint:
+- `DELETE /_aws/dynamodb/expired`
+
+The response returns the number of deleted items:
+
+```console
+curl -X DELETE localhost:4566/_aws/dynamodb/expired
+{"ExpiredItems": 3}
+```
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing DynamoDB tables and items.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **DynamoDB** under the **Database** section.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Table**: Create a new DynamoDB table by clicking on the **Create Table** button.
+You can specify the table name, table class, key schema and other attributes of the table.
+- **Edit Table**: Edit an existing DynamoDB table by clicking on the **Edit Table** button.
+You can modify the table name, key schema and other attributes of the table.
+- **View items**: View the items in a DynamoDB table by clicking on the **Items** button.
+You can also add, edit and delete items in the table.
+You can switch between scan and query mode to view the items in the table.
+- **Run PartiQL**: Run a PartiQL query against a DynamoDB table by clicking on the **PartiQL** button.
+You can add your query in the editor and click on the **Execute** button to execute the query.
+- **Delete Table**: Delete an existing DynamoDB table by selecting the DynamoDB table and clicking **Actions** and then **Remove Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use DynamoDB in LocalStack for various use cases:
+
+- [Serverless Container-based APIs with Amazon ECS & API Gateway](https://github.com/localstack/serverless-api-ecs-apigateway-sample)
+- [Full-Stack application with AWS Lambda, DynamoDB & S3 for shipment validation](https://github.com/localstack/shipment-list-demo)
+- [Step-up Authentication using Amazon Cognito](https://github.com/localstack/step-up-auth-sample)
+- [Serverless microservices with Amazon API Gateway, DynamoDB, SQS, and Lambda](https://github.com/localstack/microservices-apigateway-lambda-dynamodb-sqs-sample)
+- [Event-driven architecture with Amazon SNS FIFO, DynamoDB, Lambda, and S3](https://github.com/localstack/event-driven-architecture-with-amazon-sns-fifo)
+- [Note-Taking application using AWS SDK for JavaScript](https://github.com/localstack/aws-sdk-js-notes-app)
+- [AppSync GraphQL APIs for DynamoDB and RDS Aurora PostgreSQL](https://github.com/localstack/appsync-graphql-api-sample)
+- [Loan Broker application with AWS Step Functions, DynamoDB, Lambda, SQS, and SNS](https://github.com/localstack/loan-broker-stepfunctions-lambda-app)
+- [Messaging Processing application with SQS, DynamoDB, and Fargate](https://github.com/localstack/sqs-fargate-ddb-cdk-go)
+
+## Current Limitations
+
+### Global tables
+
+LocalStack provides support for global tables (Version 2019), which are tables that exist within the same account and are replicated across various regions.
+
+However, legacy global tables (Version 2017) are not supported by LocalStack.
+Operations such as `CreateGlobalTable`, `UpdateGlobalTable`, and `DescribeGlobalTable` will not replicate globally.
+
+### Replication
+
+- Removing the original table region from the replication set while retaining the replicas is currently not feasible.
+Deleting the original table will result in the removal of all replicas as well.
+- DynamoDB Streams are exclusively supported for original tables and not for replicated ones.
+More information can be found in [our public GitHub issue tracker](https://github.com/localstack/localstack/issues/7405).
+- Batch operations such as `BatchWriteItem`, `BatchGetItem`, etc. are currently not supported for replicated tables.
diff --git a/src/content/docs/aws/services/dynamodbstreams.md b/src/content/docs/aws/services/dynamodbstreams.md
new file mode 100644
index 00000000..89bd856c
--- /dev/null
+++ b/src/content/docs/aws/services/dynamodbstreams.md
@@ -0,0 +1,230 @@
+---
+title: DynamoDB Streams
+linkTitle: DynamoDB Streams
+description: Get started with DynamoDB Streams on LocalStack
+---
+
+## Introduction
+
+DynamoDB Streams captures data modification events in a DynamoDB table.
+The stream records are written to a DynamoDB stream, which is an ordered flow of information about changes to items in a table.
+DynamoDB Streams records data in near-real time, enabling you to develop workflows that process these streams and respond based on their contents.
+
+LocalStack supports DynamoDB Streams, allowing you to create and manage streams in a local environment.
+The supported APIs are available on our [DynamoDB Streams coverage page]({{< ref "coverage_dynamodbstreams" >}}), which provides information on the extent of DynamoDB Streams integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to DynamoDB Streams and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate the following process using LocalStack:
+
+- A user adds an entry to a DynamoDB table.
+- A new stream record is generated in DynamoDB Streams when an entry is added.
+- This stream record triggers a Lambda function.
+- If the record indicates a new entry in the DynamoDB table, the Lambda function extracts the data.
+
+### Create a DynamoDB table
+
+You can create a DynamoDB table named `BarkTable` using the [`CreateTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_CreateTable.html) API.
+Run the following command to create the table:
+
+{{< command >}}
+$ awslocal dynamodb create-table \
+ --table-name BarkTable \
+ --attribute-definitions AttributeName=Username,AttributeType=S AttributeName=Timestamp,AttributeType=S \
+ --key-schema AttributeName=Username,KeyType=HASH AttributeName=Timestamp,KeyType=RANGE \
+ --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
+ --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES
+{{< /command >}}
+
+`BarkTable` has a stream enabled, which you can consume by associating a Lambda function with it.
+You can see the stream in the `LatestStreamArn` field of the response:
+
+```bash
+...
+"LatestStreamArn": "arn:aws:dynamodb:000000000000:us-east-1:table/BarkTable/stream/timestamp
+...
+```
+
+### Create a Lambda function
+
+You can now create a Lambda function (`publishNewBark`) to process stream records from `BarkTable`.
+Create a new file named `index.js` with the following code:
+
+```javascript
+'use strict';
+var AWS = require("aws-sdk");
+
+exports.handler = (event, context, callback) => {
+
+ event.Records.forEach((record) => {
+ console.log('Stream record: ', JSON.stringify(record, null, 2));
+
+ if (record.eventName == 'INSERT') {
+ var who = JSON.stringify(record.dynamodb.NewImage.Username.S);
+ var when = JSON.stringify(record.dynamodb.NewImage.Timestamp.S);
+ var what = JSON.stringify(record.dynamodb.NewImage.Message.S);
+ var params = {
+ Subject: 'A new bark from ' + who,
+ Message: 'Woofer user ' + who + ' barked the following at ' + when + ':\n\n ' + what,
+ };
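+      // In a complete application, 'params' could be published to an SNS topic here.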
+ }
+ });
+ callback(null, `Successfully processed ${event.Records.length} records.`);
+};
+```
+
+You can now create a Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) API.
+Run the following command to create the Lambda function:
+
+{{< command >}}
+$ zip index.zip index.js
+$ awslocal lambda create-function \
+ --function-name publishNewBark \
+ --zip-file fileb://index.zip \
+ --handler index.handler \
+ --timeout 50 \
+ --runtime nodejs16.x \
+ --role arn:aws:iam::000000000000:role/lambda-role
+{{< /command >}}
+
+### Invoke the Lambda function
+
+To test the Lambda function, you can invoke it using the [`Invoke`](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html) API.
+Create a new file named `payload.json` with the following content:
+
+```json
+{
+ "Records": [
+ {
+ "eventID": "7de3041dd709b024af6f29e4fa13d34c",
+ "eventName": "INSERT",
+ "eventVersion": "1.1",
+ "eventSource": "aws:dynamodb",
+ "awsRegion": "us-east-1",
+ "dynamodb": {
+ "ApproximateCreationDateTime": 1479499740,
+ "Keys": {
+ "Timestamp": {
+ "S": "2016-11-18:12:09:36"
+ },
+ "Username": {
+ "S": "John Doe"
+ }
+ },
+ "NewImage": {
+ "Timestamp": {
+ "S": "2016-11-18:12:09:36"
+ },
+ "Message": {
+ "S": "This is a bark from the Woofer social network"
+ },
+ "Username": {
+ "S": "John Doe"
+ }
+ },
+ "SequenceNumber": "13021600000000001596893679",
+ "SizeBytes": 112,
+ "StreamViewType": "NEW_IMAGE"
+ },
+ "eventSourceARN": "arn:aws:dynamodb:000000000000:us-east-1 ID:table/BarkTable/stream/2016-11-16T20:42:48.104"
+ }
+ ]
+}
+```
+
+Run the following command to invoke the Lambda function:
+
+{{< command >}}
+$ awslocal lambda invoke \
+ --function-name publishNewBark \
+ --payload file://payload.json \
+ --cli-binary-format raw-in-base64-out output.txt
+{{< /command >}}
+
+In the `output.txt` file, you should see the following output:
+
+```text
+"Successfully processed 1 records."
+```
+
+### Add event source mapping
+
+To add the DynamoDB stream as an event source for the Lambda function, you need the stream ARN.
+You can get the stream ARN using the [`DescribeTable`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeTable.html) API.
+Run the following command to get the stream ARN:
+
+{{< command >}}
+awslocal dynamodb describe-table --table-name BarkTable --query 'Table.LatestStreamArn' --output text
+{{< /command >}}
+
+You can now create an event source mapping using the [`CreateEventSourceMapping`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) API.
+Run the following command to create the event source mapping:
+
+{{< command >}}
+awslocal lambda create-event-source-mapping \
+ --function-name publishNewBark \
+  --event-source-arn arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101 \
+ --batch-size 1 \
+ --starting-position TRIM_HORIZON
+{{< /command >}}
+
+Make sure to replace the `event-source` value with the stream ARN you obtained from the previous command.
+You should see the following output:
+
+```bash
+{
+ "UUID": "7ae3426a-eda6-4c10-a596-100c59bd6787",
+ ...
+ "EventSourceArn": "arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101",
+ "FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:publishNewBark",
+ ...
+ "FunctionResponseTypes": []
+}
+```
+
+You can now test the event source mapping by adding an item to the `BarkTable` table using the [`PutItem`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html) API.
+Run the following command to add an item to the table:
+
+{{< command >}}
+$ awslocal dynamodb put-item \
+ --table-name BarkTable \
+ --item Username={S="Jane Doe"},Timestamp={S="2016-11-18:14:32:17"},Message={S="Testing...1...2...3"}
+{{< /command >}}
+
+You can see the Lambda function being triggered in the LocalStack logs.
+
+### Inspect the stream
+
+You can list the streams using the [`ListStreams`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_ListStreams.html) API.
+Run the following command to list the streams:
+
+{{< command >}}
+awslocal dynamodbstreams list-streams
+{{< /command >}}
+
+The following output shows the list of streams:
+
+```bash
+{
+ "Streams": [
+ {
+ "StreamArn": "arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101",
+ "TableName": "BarkTable",
+ "StreamLabel": "2024-07-12T06:18:37.101"
+ }
+ ]
+}
+```
+
+You can also describe the stream using the [`DescribeStream`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_DescribeStream.html) API.
+Run the following command to describe the stream:
+
+{{< command >}}
+$ awslocal dynamodbstreams describe-stream --stream-arn arn:aws:dynamodb:us-east-1:000000000000:table/BarkTable/stream/2024-07-12T06:18:37.101
+{{< /command >}}
+
+Replace the `stream-arn` value with the stream ARN you obtained from the previous command.
diff --git a/src/content/docs/aws/services/ec2.md b/src/content/docs/aws/services/ec2.md
new file mode 100644
index 00000000..fc77a441
--- /dev/null
+++ b/src/content/docs/aws/services/ec2.md
@@ -0,0 +1,673 @@
+---
+title: "Elastic Compute Cloud (EC2)"
+linkTitle: "Elastic Compute Cloud (EC2)"
+tags: ["Free"]
+description: Get started with Amazon Elastic Compute Cloud (EC2) on LocalStack
+persistence: supported with limitations
+---
+
+## Introduction
+
+Elastic Compute Cloud (EC2) is a core service within Amazon Web Services (AWS) that provides scalable and flexible virtual computing resources.
+EC2 enables users to launch and manage virtual machines, referred to as instances.
+
+LocalStack allows you to use the EC2 APIs in your local environment to create and manage EC2 instances and related resources such as VPCs, EBS volumes, etc.
+The list of supported APIs can be found on the [API coverage page]({{< ref "coverage_ec2" >}}).
+
+## Getting started
+
+This guide is designed for users new to EC2 and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+We will demonstrate how to create an EC2 instance that runs a simple Python web server.
+LocalStack Pro running on a Linux host is required as network access to containers is not possible on macOS.
+
+Start your LocalStack container using your preferred method.
+
+### Create or import a key pair
+
+Key pairs are SSH public key/private key combinations that are used to log in to created instances.
+
+To create a key pair, you can use the [`CreateKeyPair`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateKeyPair.html) API.
+Run the following command to create the key pair and pipe the output to a file named `key.pem`:
+
+{{< command >}}
+$ awslocal ec2 create-key-pair \
+ --key-name my-key \
+ --query 'KeyMaterial' \
+ --output text | tee key.pem
+{{< /command >}}
+
+For security reasons, you need to restrict the permissions on the key file, otherwise SSH will refuse to use it.
+This can be done using the following commands:
+
+{{< tabpane text=true >}}
+
+{{< tab header="**Linux**" >}}
+
+{{< command >}}
+$ chmod 400 key.pem
+{{< /command >}}
+
+{{< /tab >}}
+
+{{< tab header="**Windows (Powershell)**" >}}
+
+{{< command >}}
+$acl = Get-Acl -Path "key.pem"
+$fileSystemAccessRule = New-Object System.Security.AccessControl.FileSystemAccessRule("$env:username", "Read", "Allow")
+$acl.SetAccessRule($fileSystemAccessRule)
+$acl.SetAccessRuleProtection($true, $false)
+Set-Acl -Path "key.pem" -AclObject $acl
+{{< /command >}}
+
+{{< /tab >}}
+
+{{< tab header="**Windows (Command Prompt)**" >}}
+
+{{< command >}}
+icacls.exe key.pem /reset
+icacls.exe key.pem /grant:r "%username%:(r)"
+icacls.exe key.pem /inheritance:r
+{{< /command >}}
+
+{{< /tab >}}
+
+{{< /tabpane >}}
+
+If you already have an SSH public key that you wish to use, such as the one located in your home directory at `~/.ssh/id_rsa.pub`, you can import it instead.
+
+{{< command >}}
+$ awslocal ec2 import-key-pair --key-name my-key --public-key-material file://~/.ssh/id_rsa.pub
+{{< /command >}}
+
+If you only have the SSH private key, a public key can be generated using the following command, and then imported:
+
+{{< command >}}
+$ ssh-keygen -y -f id_rsa > id_rsa.pub
+{{< /command >}}
+
+### Add rules to your security group
+
+Currently, LocalStack only supports the `default` security group.
+You can add rules to the security group using the [`AuthorizeSecurityGroupIngress`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_AuthorizeSecurityGroupIngress.html) API.
+Run the following command to add a rule to allow inbound traffic on port 8000:
+
+{{< command >}}
+$ awslocal ec2 authorize-security-group-ingress \
+ --group-id default \
+ --protocol tcp \
+ --port 8000 \
+ --cidr 0.0.0.0/0
+{{< /command >}}
+
+The above command will enable rules in the security group to allow incoming traffic from your local machine on port 8000 of an emulated EC2 instance.
+
+### Run an EC2 instance
+
+You can fetch the Security Group ID using the [`DescribeSecurityGroups`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSecurityGroups.html) API.
+Run the following command to fetch the Security Group ID:
+
+{{< command >}}
+$ awslocal ec2 describe-security-groups
+{{< /command >}}
+
+You should see the following output:
+
+```bash
+{
+ "SecurityGroups": [
+ {
+ "Description": "default VPC security group",
+ "GroupName": "default",
+ ...
+ "OwnerId": "000000000000",
+ "GroupId": "sg-0372ee3c519883079",
+ ...
+ }
+ ]
+}
+```
+
+To start your Python Web Server in your locally emulated EC2 instance, you can use the following user script by saving it to a file named `user_script.sh`:
+
+```bash
+#!/bin/bash -xeu
+
+apt update
+apt install python3 -y
+python3 -m http.server 8000
+```
+
+You can now run an EC2 instance using the [`RunInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) API.
+Run the following command to run an EC2 instance by adding the appropriate Security Group ID that we fetched in the previous step:
+
+{{< command >}}
+$ awslocal ec2 run-instances \
+ --image-id ami-df5de72bdb3b \
+ --count 1 \
+ --instance-type t3.nano \
+ --key-name my-key \
+    --security-group-ids '<security-group-id>' \
+ --user-data file://./user_script.sh
+{{< /command >}}
+
+### Test the Python web server
+
+You can now open the LocalStack logs to find the IP address of the locally emulated EC2 instance.
+Run the following command to open the LocalStack logs:
+
+{{< command >}}
+$ localstack logs
+{{< /command >}}
+
+You should see the following output:
+
+```bash
+2023-08-16T17:18:29.702 INFO --- [ asgi_gw_0] l.s.ec2.vmmanager.docker : Instance i-b07acefd77a3c415f will be accessible via SSH at: 127.0.0.1:12862, 172.17.0.4:22
+2023-08-16T17:18:29.702 INFO --- [ asgi_gw_0] l.s.ec2.vmmanager.docker : Instance i-b07acefd77a3c415f port mappings (container -> host): {'8000/tcp': 29043, '22/tcp': 12862}
+```
+
+You can now use the IP address to test the Python Web Server.
+Run the following command to test the Python Web Server:
+
+{{< command >}}
+$ curl 172.17.0.4:8000
+# Or, you can run
+$ curl 127.0.0.1:29043
+{{< /command >}}
+
+You should see the following output:
+
+```bash
+Directory listing for /
+...
+```
+
+{{< callout "note" >}}
+Similar to the setup in production AWS, the user data content is stored at `/var/lib/cloud/instances/<instance-id>/` within the instance.
+Any execution of this data is recorded in the `/var/log/cloud-init-output.log` file.
+{{< /callout >}}
+
+### Connecting via SSH
+
+You can also set up an SSH connection to the locally emulated EC2 instance using the instance IP address.
+
+This section assumes that you have created or imported an SSH key pair named `my-key`.
+When running the EC2 instance, make sure to pass the `--key-name` parameter to the command:
+
+{{< command >}}
+$ awslocal ec2 run-instances --key-name my-key ...
+{{< /command >}}
+
+Once the instance is up and running, we can use the `ssh` command to set up an SSH connection.
+Assuming the instance is available under `127.0.0.1:12862` (as per the LocalStack log output), use this command:
+
+{{< command >}}
+$ ssh -p 12862 -i key.pem root@127.0.0.1
+{{< /command >}}
+
+{{< callout "tip" >}}
+If the `ssh` command throws an error like "Identity file not accessible" or "bad permissions", make sure that the key file has a restrictive `0400` permission as illustrated above.
+{{< /callout >}}
+
+## VM Managers
+
+LocalStack EC2 supports multiple methods to simulate the EC2 service.
+All tiers support the mock/CRUD capability.
+For advanced setups, LocalStack Pro comes with emulation capabilities for certain resource types so that they behave more like real AWS.
+
+The underlying method for this can be controlled using the [`EC2_VM_MANAGER`]({{< ref "configuration#ec2" >}}) configuration option.
+You may choose between plain mocked resources, containerized instances, or fully virtualized machines.
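+
+For example, to explicitly select the Docker-based manager when starting LocalStack via the CLI, which forwards known configuration variables to the container:
+
+{{< command >}}
+$ EC2_VM_MANAGER=docker localstack start
+{{< /command >}}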
+
+## Mock VM Manager
+
+With the Mock VM manager, all resources are stored as in-memory representations.
+This only offers the CRUD capability.
+
+This is the default VM manager in LocalStack Community edition.
+To use this VM manager in LocalStack Pro, set [`EC2_VM_MANAGER`]({{< ref "configuration#ec2" >}}) to `mock`.
+
+This serves as the fallback manager if an operation is not implemented in other VM managers.
+
+## Docker VM Manager
+
+LocalStack Pro supports the Docker VM manager which uses the [Docker Engine](https://docs.docker.com/engine/) to emulate EC2 instances.
+This VM manager requires the Docker socket from the host machine to be mounted inside the LocalStack container at `/var/run/docker.sock`.
+
+This is the default VM manager in LocalStack Pro.
+You may set [`EC2_VM_MANAGER`]({{< ref "configuration#ec2" >}}) to `docker` to explicitly use this VM manager.
+
+All launched EC2 instances have the Docker socket mounted inside them at `/var/run/docker.sock` to make Docker-in-Docker use cases possible.
+
+All limitations associated with containers are also applicable to EC2 instances managed by the Docker manager.
+These restrictions include things like root access and networking.
+
+Please note that this VM manager does not fully support persistence.
+While the records of resources will be persisted, the instances or AMIs themselves (i.e. Docker containers and Docker images) will not be persisted.
+
+### AMIs
+
+Docker base images which are tagged with the scheme `localstack-ec2/<ami-name>:<ami-id>` are recognized as Amazon Machine Images (AMIs).
+These can be used to launch EC2 instances which are in fact Docker containers.
+
+You can mark any Docker base image as AMI using the below command:
+
+{{< command >}}
+$ docker tag ubuntu:focal localstack-ec2/ubuntu-focal-ami:ami-000001
+{{< /command >}}
+
+The above example will make LocalStack treat the `ubuntu:focal` Docker image as an AMI with name `ubuntu-focal-ami` and ID `ami-000001`.
+
+At startup, LocalStack downloads the following AMIs that can be used to launch Dockerized instances.
+- Ubuntu 22.04 `ami-df5de72bdb3b`
+- Amazon Linux 2023 `ami-024f768332f0`
+
+{{< callout "note" >}}
+The auto download of Docker images for default AMIs can be disabled using the `EC2_DOWNLOAD_DEFAULT_IMAGES=0` configuration variable.
+{{< /callout >}}
+
+All LocalStack-managed Docker AMIs bear the resource tag `ec2_vm_manager:docker`.
+These can be listed using:
+
+{{< command >}}
+$ awslocal ec2 describe-images --filters Name=tag:ec2_vm_manager,Values=docker
+{{< /command >}}
+
+{{< callout "note" >}}
+If an AMI does not have the `ec2_vm_manager:docker` tag, it means that it is mocked.
+Attempting to launch Dockerized instances using these AMIs will result in an `InvalidAMIID.NotFound` error.
+See [Mock VM manager](#mock-vm-manager).
+{{< /callout >}}
+
+AWS does not provide an API to download AMIs, which prevents the use of real AWS AMIs on LocalStack.
+However, in certain cases it may be possible to tweak your workflow to make it work with LocalStack.
+
+For example, you can use [Packer](https://packer.io/) to customise the Amazon Linux AMI on AWS.
+Packer can be made to use the [Docker builder](https://developer.hashicorp.com/packer/integrations/hashicorp/docker/latest/components/builder/docker) instead of the Amazon builder and add the customisations on top of the Amazon Linux [Docker base image](https://hub.docker.com/_/amazonlinux/).
+The final image then can be used by LocalStack EC2 as illustrated above.
+
+### Instances
+
+When `RunInstances` is invoked, LocalStack creates an underlying Docker container to simulate an instance.
+Docker containers that back EC2 instances have the naming scheme `localstack-ec2.<instance-id>`.
+
+LocalStack EC2 supports execution of user data scripts when the instance starts.
+A shell script can be passed to the `UserData` argument of `RunInstances`.
+Alternatively, the user data may also be added using the `ModifyInstanceAttribute` operation.
+
+The user data is placed at `/var/lib/cloud/instances/<instance-id>/` in the container.
+The execution log is generated at `/var/log/cloud-init-output.log` in the container.
+
+### Networking
+
+{{< callout "note" >}}
+Network access from host to EC2 instance containers is not possible on macOS.
+This is because Docker Desktop on macOS does not expose the bridge network to the host system.
+See [Docker Desktop Known Limitations](https://docs.docker.com/desktop/networking/#known-limitations).
+{{< /callout >}}
+
+Network addresses for Dockerized instances are allocated by the Docker daemon and can be obtained from the `PublicIpAddress` attribute.
+These addresses are also printed in the logs while the instance is being initialized.
+
+```bash
+2022-03-21T14:46:49.540 INFO Instance i-1d6327abf04e31be6 will be accessible via SSH at: 127.0.0.1:55705
+```
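+
+Alternatively, you can query the address via the API, for example:
+
+{{< command >}}
+$ awslocal ec2 describe-instances \
+    --query 'Reservations[].Instances[].[InstanceId, PublicIpAddress]' \
+    --output table
+{{< /command >}}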
+
+When instances are launched, LocalStack attempts to start the SSH server `/usr/sbin/sshd` in the Docker base image.
+If not found, it installs and starts the [Dropbear](https://github.com/mkj/dropbear) SSH server.
+
+To be able to access the instance at additional ports from the host system, you can modify the default security group and include the required ingress ports.
+
+{{< callout "note" >}}
+Security group ingress rules are applied only during the creation of the Dockerized instance.
+Modifying a security group will not open any ports for a running instance.
+{{< /callout >}}
+
+The system supports up to 32 ingress ports.
+This constraint is in place to prevent exhausting free ports on the host.
+
+{{< command >}}
+$ awslocal ec2 authorize-security-group-ingress \
+ --group-id default \
+ --protocol tcp \
+ --port 8080
+{{< /command >}}
+{{< command >}}
+$ awslocal ec2 describe-security-groups --group-names default
+{{< /command >}}
+
+The port mapping details are provided in the logs when the instance starts up.
+
+```bash
+2022-12-20T19:43:44.544 INFO Instance i-1d6327abf04e31be6 port mappings (container -> host): {'8080/tcp': 51747, '22/tcp': 55705}
+```
+
+### Elastic Block Store
+
+A common use case is to attach an EBS block device to an EC2 instance, which can then be used to create a custom filesystem for additional storage.
+This section illustrates how this functionality can be achieved with EC2 Docker instances in LocalStack.
+
+{{< callout "note" >}}
+This feature is disabled by default.
+Please set the [`EC2_MOUNT_BLOCK_DEVICES`]({{< ref "configuration#ec2" >}}) configuration option to enable it.
+{{< /callout >}}
+
+First, we create a user data script `init.sh` which creates an ext3 file system on the block device `/ebs-dev/sda1` and mounts it under `/ebs-mounted`:
+{{< command >}}
+$ cat > init.sh <<EOF
+#!/bin/bash
+mkfs -t ext3 /ebs-dev/sda1
+mkdir -p /ebs-mounted
+mount -t ext3 /ebs-dev/sda1 /ebs-mounted
+touch /ebs-mounted/my-test-file
+EOF
+{{< /command >}}
+
+We can then start an EC2 instance, specifying a block device mapping under the device name `/ebs-dev/sda1`, and pointing to our `init.sh` user data script:
+{{< command >}}
+$ awslocal ec2 run-instances --image-id ami-ff0fea8310f3 --count 1 --instance-type t3.nano \
+ --block-device-mapping '{"DeviceName":"/ebs-dev/sda1","Ebs":{"VolumeSize":10}}' \
+ --user-data file://init.sh
+{{< /command >}}
+
+Please note that, whereas real AWS uses GiB for volume sizes, LocalStack uses MiB as the unit for `VolumeSize` in the command above (to avoid creating huge files locally).
+Also, by default block device images are limited to 1 GiB in size, but this can be customized by setting the [`EC2_EBS_MAX_VOLUME_SIZE`]({{< ref "configuration#ec2" >}}) config variable (defaults to `1000`).
+
+Once the instance is successfully started and initialized, we can first determine the container ID via `docker ps`, and then list the contents of the mounted filesystem `/ebs-mounted`, which should contain our test file named `my-test-file`:
+{{< command >}}
+$ docker ps
+CONTAINER ID IMAGE PORTS NAMES
+5c60cf72d84a ...:ami-ff0fea8310f3 19419->22/tcp localstack-ec2...
+$ docker exec 5c60cf72d84a ls /ebs-mounted
+my-test-file
+{{< /command >}}
+
+### Instance Metadata Service
+
+The Docker VM manager supports the [Instance Metadata Service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) which provides information about the running instance.
+
+Both IMDSv1 and IMDSv2 can be used.
+LocalStack does not strictly enforce either version.
+If the `X-aws-ec2-metadata-token` header is present, LocalStack will use IMDSv2, otherwise it will fall back to IMDSv1.
+
+To create an IMDSv2 token, run the following inside the EC2 container:
+
+{{< command >}}
+$ curl -X PUT "http://169.254.169.254/latest/api/token" -H "x-aws-ec2-metadata-token-ttl-seconds: 300"
+{{< /command >}}
+
+The token can be used in subsequent requests like so:
+
+{{< command >}}
+$ curl -H "x-aws-ec2-metadata-token: " -v http://169.254.169.254/latest/meta-data/
+{{< /command >}}
+
+{{< callout "note" >}}
+IMDS IPv6 endpoint is currently not supported.
+{{< /callout >}}
+
+#### Metadata Categories
+
+Currently, a limited set of [metadata categories](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-categories) is implemented.
+They are:
+
+- `ami-id`
+- `ami-launch-index`
+- `instance-id`
+- `instance-type`
+- `local-hostname`
+- `local-ipv4`
+- `public-hostname`
+- `public-ipv4`
+
+If you would like support for more metadata categories, please make a feature request on [GitHub](https://github.com/localstack/localstack/issues/new/choose).
+
+### Configuration
+
+You can use the [`EC2_DOCKER_FLAGS`]({{< ref "configuration#ec2" >}}) LocalStack configuration variable to pass supplementary flags to Docker during the initiation of containerized instances.
+This allows for fine-tuned behaviours, for example, running containers in privileged mode using `--privileged` or specifying an alternate CPU platform with `--platform`.
+Keep in mind that this will apply to all instances that are launched in the LocalStack session.
+
+### Operations
+
+The following table explains the emulated action for various API operations.
+Any operation not listed below will use the mock VM manager.
+
+| Operation | Notes |
+|:----------------------|:---------------------------------------------------------------------------------------------|
+| `CreateImage` | Uses Docker commit to capture a snapshot of a running instance into a new AMI |
+| `DescribeImages` | Retrieves a list of Docker images that can be used as AMIs |
+| `DescribeInstances` | Describes both mocked and Docker-backed instances. Docker-backed instances are marked with the resource tag `ec2_vm_manager:docker` |
+| `RunInstances` | Creates and runs Docker containers that back instances |
+| `StopInstances` | Pauses the Docker containers that back instances |
+| `StartInstances` | Resumes the Docker containers that back instances |
+| `TerminateInstances` | Stops the Docker containers that back instances |
+
+## Libvirt VM Manager
+
+{{< callout "note" >}}
+The Libvirt VM manager is under active development.
+It is currently offered as a preview and will be part of the Ultimate plan upon release.
+If a functionality you desire is missing, please create a feature request on the [GitHub issue tracker](https://github.com/localstack/localstack/issues/new/choose).
+{{< /callout >}}
+
+The Libvirt VM manager uses the [Libvirt](https://libvirt.org/index.html) API to create fully virtualized EC2 resources.
+This lets you create EC2 setups which closely resemble AWS EC2.
+Currently LocalStack Pro supports the KVM-accelerated QEMU hypervisor on Linux hosts.
+
+Installation steps for QEMU/KVM will vary based on the Linux distribution on the host machine.
+On Debian/Ubuntu-based distributions, you can run:
+
+{{< command >}}
+$ sudo apt install -y qemu-kvm libvirt-daemon-system
+{{< /command >}}
+
+To check CPU support for virtualization, run:
+{{< command >}}
+$ kvm-ok
+INFO: /dev/kvm exists
+KVM acceleration can be used
+{{< /command >}}
+
+{{< callout "tip" >}}
+You may also need to enable virtualization support at hardware level.
+This is often labelled as 'Virtualization Technology', 'VT-d' or 'VT-x' in UEFI/BIOS setups.
+{{< /callout >}}
+
+If the Docker host and the Libvirt host are the same, the Libvirt socket on the host must be mounted inside the LocalStack container.
+This can be done by including the volume mounts when the LocalStack container is started.
+If you are using the [Docker Compose template]({{< ref "getting-started/installation#docker-compose" >}}), include the following line in the `services.localstack.volumes` list:
+
+```text
+"/var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock"
+```
+
+If you are using [Docker CLI]({{< ref "getting-started/installation#docker" >}}), include the following parameter in `docker run`:
+
+```text
+-v /var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock
+```
+
+If you are using a remote Libvirt hypervisor, you can set the [`EC2_HYPERVISOR_URI`]({{< ref "configuration#ec2" >}}) config option with a connection URI.
+
+{{< callout "tip" >}}
+If you encounter an error like `failed to connect to the hypervisor: Permission denied`, you may need to perform additional setup on the hypervisor host.
+Please refer to [Libvirt Wiki](https://wiki.libvirt.org/Failed_to_connect_to_the_hypervisor.html#permission-denied) for more details.
+{{< /callout >}}
+
+The Libvirt VM manager currently does not have full support for persistence.
+Underlying virtual machines and volumes are not persisted, only their mock representations are.
+
+### AMIs
+
+All qcow2 images with cloud-init support can be used as AMIs.
+You can find the download links for images of popular OSs below.
+
+{{< tabpane text=true >}}
+
+{{% tab "Ubuntu" %}}
+Canonical provides official Ubuntu images at [cloud-images.ubuntu.com](https://cloud-images.ubuntu.com/).
+
+Please use the images in qcow2 format ending in `.img`.
+{{% /tab %}}
+
+{{< tab "Debian" >}}
+
+Debian provides cloud images for direct download at [cdimage.debian.org/cdimage/cloud](https://cdimage.debian.org/cdimage/cloud).
+
+Please use the genericcloud image in qcow2 format.
+
+{{< /tab >}}
+
+{{< tab "Fedora" >}}
+
+The Fedora project maintains the official cloud images at [fedoraproject.org/cloud/download](https://fedoraproject.org/cloud/download).
+
+Please use the qcow2 images.
+
+{{< /tab >}}
+
+{{% tab "Microsoft Windows" %}}
+An evaluation version of Windows Server 2012 R2 is provided by [Cloudbase Solutions](https://cloudbase.it/windows-cloud-images/).
+{{% /tab %}}
+
+{{< /tabpane >}}
+
+LocalStack does not come preloaded with any AMIs.
+
+Compatible qcow2 images must be placed at the default Libvirt storage pool at `/var/lib/libvirt/images` on the host machine.
+Images must be named with the prefix `ami-` followed by at least 8 hexadecimal characters without an extension, e.g. `ami-1234abcd`.
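+
+For example, to register a downloaded Ubuntu cloud image under a new AMI ID; the file name and the chosen ID are illustrative:
+
+{{< command >}}
+$ sudo cp jammy-server-cloudimg-amd64.img /var/lib/libvirt/images/ami-20f1e6b2
+{{< /command >}}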
+You may need to run the following command to make sure the image is registered with Libvirt:
+
+{{< command >}}
+$ virsh pool-refresh default
+
+Pool default refreshed
+
+{{< /command >}}
+{{< command >}}
+$ virsh vol-list --pool default
+
+ Name Path
+--------------------------------------------------------------------------------------------------------
+ ami-1234abcd /var/lib/libvirt/images/ami-1234abcd
+
+{{< /command >}}
+
+Only the images that follow the above naming scheme will be recognised by LocalStack as AMIs suitable for launching virtualized instances.
+These AMIs will also have the resource tag `ec2_vm_manager:libvirt`.
+
+{{< command >}}
+$ awslocal ec2 describe-images --filters Name=tag:ec2_vm_manager,Values=libvirt
+{{< /command >}}
+
+### Instances
+
+Virtualized instances can be launched with the `RunInstances` operation by specifying a compatible AMI.
+LocalStack will create and start a Libvirt domain to represent the instance.
+
+When instances are launched, LocalStack uses the [NoCloud](https://cloudinit.readthedocs.io/en/latest/reference/datasources/nocloud.html) datasource to customize the virtual machine.
+The login user is created with the username `localstack` and password `localstack`.
+If a key pair is provided, it will be added as an authorised SSH key for this user.
+
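+For example, once you have obtained the instance's address (e.g. via `virsh domifaddr <domain>`), you can log in with the default credentials; the IP below is illustrative for the default Libvirt network:
+
+{{< command >}}
+$ ssh localstack@192.168.122.100
+{{< /command >}}
+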
+LocalStack shuts down all virtual machines when it terminates.
+The Libvirt domains and volumes are left defined and can be used for debugging, etc.
+
+{{< callout "tip" >}}
+Use [Virtual Machine Manager](https://virt-manager.org/) or [virsh](https://www.libvirt.org/manpages/virsh.html) to manage the virtual machines outside of LocalStack.
+{{< /callout >}}
+
+The Libvirt VM manager supports basic shell scripts for user data.
+This can be passed to the `UserData` parameter of the `RunInstances` operation.
+
+To connect to the graphical display of the instance, first obtain the VNC address using:
+
+{{< command >}}
+$ virsh vncdisplay <domain>
+127.0.0.1:0
+{{< /command >}}
+
+You can then use a compatible VNC client (e.g. [TigerVNC](https://tigervnc.org/)) to connect and interact with the virtual machine.
+
+
+
+
+
+### Networking
+
+All instances are assigned interfaces on the default Libvirt network.
+This makes it possible to have host/instance as well as instance/instance network communication.
+
+It is possible to allow network access to the LocalStack container from within the virtualized instance.
+This is done by configuring the Docker daemon to use the KVM network.
+Use the following configuration at `/etc/docker/daemon.json` on the host machine:
+
+```json
+{
+ "bridge": "virbr0",
+ "iptables": false
+}
+```
+
+Then restart the Docker daemon:
+
+{{< command >}}
+$ sudo systemctl restart docker
+{{< /command >}}
+
+You can now start the LocalStack container, obtain its IP address and use it from the virtualized instance.
+
+{{< command >}}
+$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' localstack_main
+{{< /command >}}
+
+### Elastic Block Stores
+
+LocalStack clones the AMI into an EBS volume when the instance is initialised.
+LocalStack does not resize the instance root volume; instead, it inherits the properties of the AMI.
+
+Currently it is not possible to attach additional EBS volumes to instances.
+
+### Instance Metadata Service
+
+The Libvirt VM manager does not support the Instance Metadata Service endpoints.
+
+### Operations
+
+The following table explains the emulated action for various API operations.
+Any operation not listed below will use the mock VM manager.
+
+| Operation | Notes |
+|:----------------------|:---------------------------------------------------------------------------------------------|
+| `DescribeImages` | Returns all mock and Libvirt AMIs |
+| `RunInstances` | Defines and starts a Libvirt domain |
+| `StartInstances` | Starts an already defined Libvirt domain |
+| `StopInstances` | Stops a running Libvirt domain |
+| `RebootInstances` | Restarts a Libvirt domain |
+| `TerminateInstances` | Stops and undefines a Libvirt domain |
+| `CreateVolume` | Creates a sparse Libvirt volume |
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing EC2 instances.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **EC2** under the **Compute** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+- **Create Instance**: Create a new EC2 instance by clicking the **Launch Instance** button and specifying the AMI ID, instance type, and other parameters.
+- **View Instance**: View the details of an EC2 instance by clicking on the Instance ID.
+- **Terminate Instance**: Terminate an EC2 instance by selecting the Instance ID, and clicking on the **ACTIONS** button followed by clicking on **Terminate Selected**.
+- **Start Instance**: Start a stopped EC2 instance by selecting the Instance ID, and clicking on the **ACTIONS** button followed by clicking on **Start Selected**.
+- **Stop Instance**: Stop a running EC2 instance by selecting the Instance ID, and clicking on the **ACTIONS** button followed by clicking on **Stop Selected**.
diff --git a/src/content/docs/aws/services/ecr.md b/src/content/docs/aws/services/ecr.md
new file mode 100644
index 00000000..f9b7da45
--- /dev/null
+++ b/src/content/docs/aws/services/ecr.md
@@ -0,0 +1,164 @@
+---
+title: "Elastic Container Registry (ECR)"
+linkTitle: "Elastic Container Registry (ECR)"
+description: Get started with Elastic Container Registry (ECR) on LocalStack
+tags: ["Base"]
+persistence: supported
+---
+
+## Introduction
+
+Elastic Container Registry (ECR) is a fully managed container registry service provided by Amazon Web Services.
+ECR enables you to store, manage, and deploy Docker container images for your containerized applications.
+ECR integrates with other AWS services, such as Lambda, ECS, and EKS.
+
+LocalStack allows you to use the ECR APIs in your local environment to build & push Docker images to a local ECR registry.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_ecr" >}}), which provides information on the extent of ECR's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Elastic Container Registry and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to build and push a Docker image to a local ECR repository.
+
+### Create a Docker image
+
+To get started, create a Docker image for a simple web application that can be used in an ECS task definition.
+Create a new file named `Dockerfile` (with no file extension) in your project directory.
+This file will contain the instructions for building the Docker image.
+Add the following content to the file:
+
+```Dockerfile
+FROM public.ecr.aws/docker/library/ubuntu:18.04
+
+# Install dependencies
+RUN apt-get update && \
+ apt-get -y install apache2
+
+# Install apache and write hello world message
+RUN echo 'Hello World!' > /var/www/html/index.html
+
+# Configure apache
+RUN echo '. /etc/apache2/envvars' > /root/run_apache.sh && \
+ echo 'mkdir -p /var/run/apache2' >> /root/run_apache.sh && \
+ echo 'mkdir -p /var/lock/apache2' >> /root/run_apache.sh && \
+ echo '/usr/sbin/apache2 -D FOREGROUND' >> /root/run_apache.sh && \
+ chmod 755 /root/run_apache.sh
+
+EXPOSE 80
+
+CMD /root/run_apache.sh
+```
+
+You can now build the Docker image from the `Dockerfile` using the `docker` CLI:
+
+{{< command >}}
+$ docker build -t localstack-ecr-image .
+{{< / command >}}
+
+You can run the following command to verify that the image was built successfully:
+
+{{< command >}}
+$ docker images
+{{< / command >}}
+
+You will see output similar to the following:
+
+```bash
+REPOSITORY TAG IMAGE ID CREATED SIZE
+..
+localstack-ecr-image latest 38883941b8fa 1 minute ago 185MB
+```
+
+### Create an ECR repository
+
+To push the Docker image to ECR, you first need to create a repository.
+You can create an ECR repository using the [`CreateRepository`](https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html) API.
+Run the following command to create a repository named `localstack-ecr-repository`:
+
+{{< command >}}
+$ awslocal ecr create-repository \
+ --repository-name localstack-ecr-repository \
+ --image-scanning-configuration scanOnPush=true
+{{< / command >}}
+
+You will see an output similar to the following:
+
+```sh
+{
+ "repository": {
+ "repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/localstack-ecr-repository",
+ "registryId": "000000000000",
+ "repositoryName": "localstack-ecr-repository",
+ "repositoryUri": "000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository",
+ "createdAt": "2023-07-24T16:58:36+05:30",
+ "imageTagMutability": "MUTABLE",
+ "imageScanningConfiguration": {
+ "scanOnPush": true
+ },
+ "encryptionConfiguration": {
+ "encryptionType": "AES256"
+ }
+ }
+}
+```
+
+You will need the `repositoryUri` value to push the Docker image to the repository.
+
+### Push the Docker image to the repository
+
+To push the Docker image to the repository, you first need to tag the image with the `repositoryUri`.
+Run the following command to tag the image:
+
+{{< command >}}
+$ docker tag localstack-ecr-image 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository
+{{< / command >}}
+
+You can now push the image to the repository using the `docker` CLI:
+
+{{< command >}}
+$ docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/localstack-ecr-repository
+{{< / command >}}
+
+The image will take a few seconds to push to the repository.
+You can run the following command to verify that the image was pushed successfully:
+
+{{< command >}}
+$ awslocal ecr list-images --repository-name localstack-ecr-repository
+{{< / command >}}
+
+You will see an output similar to the following:
+
+```bash
+{
+ "imageIds": [
+ {
+ "imageDigest": "sha256:1cbc853c42983362817b5eecac80b1389c0a5cf9cfd1e711d9d0a1f5a7a36d43",
+ "imageTag": "latest"
+ }
+ ]
+}
+```
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing ECR repositories and images.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **ECR** under the **Compute** section.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create repository**: Create a new ECR repository by clicking the **Create** button, and specify the **Registry Id**, **Repository Name**, **Tags**, and other options.
+- **View repository**: View the details of an ECR repository by clicking on the repository name.
+You can also view the push commands to push an image to the repository by clicking the **View Push Commands** button.
+- **Delete repository**: Delete an ECR repository by selecting the ECR repository, clicking the **Actions** button, and then clicking **Remove Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use ECR in LocalStack for various use cases:
+
+- [Amazon RDS initialization using CDK, Lambda, ECR, and Secrets Manager](https://github.com/localstack/amazon-rds-init-cdk)
+- [Lambda Container Images with ECR](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-container-image)
+- [Pushing Docker images to ECR and running them locally on ECS](https://github.com/localstack/localstack-pro-samples/tree/master/ecs-ecr-container-app)
diff --git a/src/content/docs/aws/services/ecs.md b/src/content/docs/aws/services/ecs.md
new file mode 100644
index 00000000..b6f80bcc
--- /dev/null
+++ b/src/content/docs/aws/services/ecs.md
@@ -0,0 +1,372 @@
+---
+title: "Elastic Container Service (ECS)"
+linkTitle: "Elastic Container Service (ECS)"
+tags: ["Base"]
+description: Get started with Elastic Container Service (ECS) on LocalStack
+persistence: supported
+---
+
+## Introduction
+
+Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service provided by Amazon Web Services (AWS).
+It allows you to run, stop, and manage Docker containers on a cluster.
+ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure.
+
+LocalStack allows you to use the ECS APIs in your local environment to create & manage ECS clusters, tasks, and services.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_ecs" >}}), which provides information on the extent of ECS's integration with LocalStack.
+
+## Getting Started
+
+This guide is designed for users new to ECS and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an ECS service using the AWS CLI.
+
+### Create a cluster
+
+{{< callout >}}
+By default, the **ECS Fargate** launch type is assumed, i.e., the local Docker engine is used for deployment of applications, and there is no need to create and manage EC2 virtual machines to run the containers.
+{{< /callout >}}
+
+ECS tasks and services run on a cluster.
+Execute the following command to create an ECS cluster named `mycluster`:
+
+{{< command >}}
+$ awslocal ecs create-cluster --cluster-name mycluster
+
+{
+ "cluster": {
+ "clusterArn": "arn:aws:ecs:us-east-1:000000000000:cluster/mycluster",
+ "clusterName": "mycluster",
+ "status": "ACTIVE",
+ "registeredContainerInstancesCount": 0,
+ "runningTasksCount": 0,
+ "pendingTasksCount": 0,
+ "activeServicesCount": 0,
+ "settings": [
+ {
+ "name": "containerInsights",
+ "value": "disabled"
+ }
+ ]
+ }
+}
+
+{{< / command >}}
+
+### Create a task definition
+
+Containers within tasks are defined by a task definition that is managed outside of the context of a cluster.
+To create a task definition that runs an `ubuntu` container forever (by running an infinite loop printing "Running" on startup), create the following file as `task_definition.json`:
+
+```json
+{
+ "containerDefinitions": [
+ {
+ "name": "server",
+ "image": "ubuntu",
+ "cpu": 10,
+ "memory": 10,
+ "command": [
+ "sh",
+ "-c",
+ "while true; do echo running; sleep 1; done"
+ ],
+ "essential": true,
+ "logConfiguration": {
+ "logDriver": "awslogs",
+ "options": {
+ "awslogs-create-group": "true",
+ "awslogs-group": "myloggroup",
+ "awslogs-stream-prefix": "myprefix",
+ "awslogs-region": "us-east-1"
+ }
+ }
+ }
+ ],
+ "family": "myfamily"
+}
+```
+
+and then run the following command:
+
+{{< command >}}
+$ awslocal ecs register-task-definition --cli-input-json file://task_definition.json
+
+{
+ "taskDefinition": {
+ "taskDefinitionArn": "arn:aws:ecs:us-east-1:000000000000:task-definition/myfamily:1",
+ "containerDefinitions": [
+ {
+ "name": "server",
+ "image": "ubuntu",
+ "cpu": 10,
+ "memory": 10,
+ "portMappings": [],
+ "essential": true,
+ "command": [
+ "sh",
+ "-c",
+ "while true; do echo running; sleep 1; done"
+ ],
+ "environment": [],
+ "mountPoints": [],
+ "volumesFrom": [],
+ "logConfiguration": {
+ "logDriver": "awslogs",
+ "options": {
+ "awslogs-create-group": "true",
+ "awslogs-group": "myloggroup",
+ "awslogs-stream-prefix": "myprefix",
+ "awslogs-region": "us-east-1"
+ }
+ }
+ }
+ ],
+ "family": "myfamily",
+ "networkMode": "bridge",
+ "revision": 1,
+ "volumes": [],
+ "status": "ACTIVE",
+ "placementConstraints": [],
+ "compatibilities": [
+ "EXTERNAL",
+ "EC2"
+ ],
+ "registeredAt": 1713364207.068659
+ }
+}
+
+{{< / command >}}
+
+Task definitions are immutable and are identified by their `family` field; calling `register-task-definition` again with the same `family` value creates a new _revision_ of the task definition.
+
+This task definition creates a CloudWatch Logs log group and log stream for the container so you can view the service logs.
+
+### Launch a service
+
+Finally, we launch an ECS service using the task definition above.
+This creates a number of containers in replica mode, meaning they are distributed over the nodes of the cluster or, in the case of Fargate, over availability zones within the region of the cluster.
+To create a service, execute the following command:
+
+{{< command >}}
+$ awslocal ecs create-service --service-name myservice --cluster mycluster --task-definition myfamily --desired-count 1
+
+{
+ "service": {
+ "serviceArn": "arn:aws:ecs:us-east-1:000000000000:service/mycluster/myservice",
+ "serviceName": "myservice",
+ "clusterArn": "arn:aws:ecs:us-east-1:000000000000:cluster/mycluster",
+ "loadBalancers": [],
+ "serviceRegistries": [],
+ "status": "ACTIVE",
+ "desiredCount": 1,
+ "runningCount": 1,
+ "pendingCount": 0,
+ "launchType": "EC2",
+ "taskDefinition": "arn:aws:ecs:us-east-1:000000000000:task-definition/myfamily:1",
+ "deploymentConfiguration": {
+ "deploymentCircuitBreaker": {
+ "enable": false,
+ "rollback": false
+ },
+ "maximumPercent": 200,
+ "minimumHealthyPercent": 100
+ },
+ "deployments": [
+ {
+ "id": "ecs-svc/49976591540684372",
+ "status": "PRIMARY",
+ "taskDefinition": "arn:aws:ecs:us-east-1:000000000000:task-definition/myfamily:1",
+ "desiredCount": 1,
+ "pendingCount": 0,
+ "runningCount": 1,
+ "failedTasks": 0,
+ "createdAt": 1709242525.05109,
+ "updatedAt": 1709242525.051093,
+ "launchType": "EC2",
+ "rolloutState": "IN_PROGRESS",
+ "rolloutStateReason": "ECS deployment ecs-svc/49976591540684372 in progress."
+ }
+ ],
+ "events": [],
+ "createdAt": 1709242525.051096,
+ "placementStrategy": [],
+ "schedulingStrategy": "REPLICA",
+ "createdBy": "arn:aws:iam::000000000000:user/test"
+ }
+}
+
+{{< / command >}}
+
+You should see that a new Docker container has been created, using the `ubuntu:latest` image and running the infinite loop command:
+
+```bash
+$ docker ps
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+5dfeb9376391 ubuntu "sh -c 'while true; …" 3 minutes ago Up 3 minutes ls-ecs-mycluster-75f0515e-0364-4ee5-9828-19026140c91a-0-a1afaa9d
+9967fe5300cc localstack/localstack-pro "docker-entrypoint.sh" 5 minutes ago Up 5 minutes (healthy) 0.0.0.0:443->443/tcp, 0.0.0.0:4510-4560->4510-4560/tcp, 53/tcp, 5678/tcp, 0.0.0.0:4566->4566/tcp localstack-main
+```
+
+### Collect container logs
+
+To access the generated logs from the container, run the following command:
+
+{{< command >}}
+$ awslocal logs filter-log-events --log-group-name myloggroup | head -n 20
+
+{
+ "events": [
+ {
+ "logStreamName": "myprefix/ls-ecs-mycluster-75f0515e-0364-4ee5-9828-19026140c91a-0-a1afaa9d/75f0515e-0364-4ee5-9828-19026140c91a",
+ "timestamp": 1713364216375,
+ "message": "running",
+ "ingestionTime": 1713364216704,
+ "eventId": "0"
+ },
+ {
+ "logStreamName": "myprefix/ls-ecs-mycluster-75f0515e-0364-4ee5-9828-19026140c91a-0-a1afaa9d/75f0515e-0364-4ee5-9828-19026140c91a",
+ "timestamp": 1713364216440,
+ "message": "running",
+ "ingestionTime": 1713364216704,
+ "eventId": "1"
+ },
+ {
+ "logStreamName": "myprefix/ls-ecs-mycluster-75f0515e-0364-4ee5-9828-19026140c91a-0-a1afaa9d/75f0515e-0364-4ee5-9828-19026140c91a",
+ "timestamp": 1713364216505,
+ "message": "running",
+...
+{{< / command >}}
+
+See our [CloudWatch Logs user guide]({{< ref "user-guide/aws/logs" >}}) for more details.
+
+## LocalStack ECS behavior
+
+You can use the configuration option `MAIN_DOCKER_NETWORK` to specify the network the ECS containers are started in.
+Otherwise, your ECS containers will be created in the same Docker network that LocalStack is in.
+If your ECS containers depend on LocalStack services, your ECS task network should be the same as the LocalStack container network.
+
+If you are running LocalStack via a `docker run` command, do not forget to enable communication from the container to the Docker Engine API.
+You can provide this access by adding the option `-v /var/run/docker.sock:/var/run/docker.sock`.
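+
+For illustration, a minimal `docker run` invocation combining both settings might look like this (the network name `my-network` is an arbitrary example):
+
+```bash
+# Run LocalStack on a user-defined network so ECS task containers join it too
+docker network create my-network
+docker run --rm -d --name localstack-main \
+  --network my-network \
+  -e MAIN_DOCKER_NETWORK=my-network \
+  -e LOCALSTACK_AUTH_TOKEN=${LOCALSTACK_AUTH_TOKEN:?} \
+  -v /var/run/docker.sock:/var/run/docker.sock \
+  -p 4566:4566 \
+  localstack/localstack-pro
+```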
+
+For more information regarding the configuration of LocalStack, please check the [LocalStack configuration]({{< ref "configuration" >}}) section.
+
+## Remote debugging
+
+To enable a remote debugging port for your ECS tasks, set the environment variable `ECS_DOCKER_FLAGS="-p 0:<debug_port>"` to expose your debugger on a random port on your host.
+You can then use this port to remote-attach your debugger.
+Alternatively, if you are working with a single container, you can set `ECS_DOCKER_FLAGS="-p <host_port>:<debug_port>"` to expose the debugger on a fixed port of your host system.
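+
+As a sketch, assuming your task container runs a Node.js application with the inspector listening on container port `9229` (a hypothetical choice), you could start LocalStack like this and look up the randomly assigned host port via `docker ps`:
+
+```bash
+# Expose container port 9229 of each ECS task on a random free host port
+ECS_DOCKER_FLAGS="-p 0:9229" localstack start
+```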
+
+## Mounting local directories for ECS tasks
+
+In some cases, it can be useful to mount code from the host filesystem into the ECS container.
+For example, to enable a quick debugging loop where you can test changes without having to build and redeploy the task's Docker image each time - similar to the [Lambda Hot Reloading]({{< ref "hot-reloading" >}}) feature in LocalStack.
+
+In order to leverage code mounting, we can use the ECS bind mounts feature, which is covered in the [AWS Bind mounts documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/bind-mounts.html).
+
+### Boto3 example
+
+The Python sample code below registers a task definition, mounting a host path `/host/path` into the container under `/container/path`:
+
+```python
+import boto3
+
+ecs_client = boto3.client("ecs", endpoint_url="http://localhost:4566")
+...
+ecs_client.register_task_definition(
+ family="...",
+ containerDefinitions=[
+ {
+ "name": "...",
+ "image": "alpine",
+ "command": ["..."],
+ "mountPoints": [
+ {"containerPath": "/container/path", "sourceVolume": "test-volume"}
+ ],
+ }
+ ],
+ volumes=[{"host": {"sourcePath": "/host/path"}, "name": "test-volume"}],
+)
+```
+
+### CDK example
+
+The same functionality can be achieved with the AWS CDK following this (Python) example:
+
+```python
+from aws_cdk import aws_ecs as ecs
+
+task_definition = ecs.TaskDefinition(
+ ...
+ volumes=[
+ ecs.Volume(name="test-volume", host=ecs.Host(source_path="/host/path"))
+ ]
+)
+
+container = task_definition.add_container(...)
+
+container.add_mount_points(
+ ecs.MountPoint(
+ container_path="/container/path",
+ source_volume="test-volume",
+ ),
+)
+```
+
+## Private registry authentication
+
+To download images from a private registry using LocalStack, you must provide your credentials.
+You can pass your Docker credentials to the container by setting the `DOCKER_CONFIG` environment variable and mounting the `~/.docker/config.json` file as a volume at `/config.json`.
+Your file paths might differ, so check Docker's documentation on [Environment Variables](https://docs.docker.com/engine/reference/commandline/cli/#environment-variables) and [Configuration Files](https://docs.docker.com/engine/reference/commandline/cli/#configuration-files) for details.
+
+Here is a Docker Compose example:
+
+```yaml
+services:
+ localstack:
+ container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
+ image: localstack/localstack-pro
+ ports:
+ - "127.0.0.1:4566:4566"
+ - "127.0.0.1:4510-4559:4510-4559"
+ - "127.0.0.1:443:443"
+ environment:
+ - LOCALSTACK_AUTH_TOKEN=${LOCALSTACK_AUTH_TOKEN:?}
+ - DOCKER_CONFIG=/config.json
+ volumes:
+ - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
+ - "/var/run/docker.sock:/var/run/docker.sock"
+ - ~/.docker/config.json:/config.json:ro
+```
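+
+If you start LocalStack via the CLI instead, a roughly equivalent setup might look like this (the file paths are assumptions; adjust them to your system):
+
+```bash
+# Mount the Docker credentials read-only and point DOCKER_CONFIG at them
+DOCKER_FLAGS="-v $HOME/.docker/config.json:/config.json:ro -e DOCKER_CONFIG=/config.json" \
+  localstack start
+```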
+
+Alternatively, you can download the image from the private registry before using it or employ an [Initialization Hook]({{< ref "/references/init-hooks" >}}) to install the Docker client and use these credentials to download the image.
+
+## Firelens for ECS Tasks
+
+{{< callout >}}
+Firelens emulation is currently available as part of the **LocalStack Enterprise** plan.
+If you'd like to try it out, please [contact us](https://www.localstack.cloud/demo) to request access.
+{{< /callout >}}
+
+LocalStack's ECS emulation supports custom log routing via FireLens.
+FireLens allows the ECS service to manage the configuration of the logging driver of application containers, and to create the proper configuration for the `fluentbit`/`fluentd` logging layer.
+
+However, the current implementation of FireLens does not support [custom configurations via S3 buckets](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/firelens-taskdef.html#firelens-taskdef-customconfig).
+Additionally, you cannot use ECS on Kubernetes with FireLens.
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing ECS clusters & task definitions.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **ECS** under the **Compute** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Cluster**: Create a new ECS cluster by clicking on the **Create Cluster** button in the **Clusters** tab and providing the cluster name among other details.
+- **Register Task Definition**: Register a new task definition by clicking on the **Register Task Definition** button in the **Task Definitions** tab and providing the task definition details.
+- **View Cluster Details**: Click on a cluster in the **Clusters** tab to view the cluster details, including the cluster ARN, status, and other information.
+- **View Task Definition Details**: Click on a task definition in the **Task Definitions** tab to view the task definition details, including the task definition ARN, family, and other information.
+- **Edit Cluster**: Click on the **Edit Cluster** button while you are viewing a cluster to edit the cluster details.
+- **Edit Task Definition**: Click on the **Edit Task Definition** button while you are viewing a task definition to edit the task definition details.
+- **Delete Cluster**: Select the cluster name in the **Clusters** tab and click on the **Actions** button followed by **Remove Selected** button.
+- **Delete Task Definition**: Select the task definition name in the **Task Definitions** tab and click on the **Actions** button followed by **Remove Selected** button.
diff --git a/src/content/docs/aws/services/efs.md b/src/content/docs/aws/services/efs.md
new file mode 100644
index 00000000..69571e11
--- /dev/null
+++ b/src/content/docs/aws/services/efs.md
@@ -0,0 +1,115 @@
+---
+title: "Elastic File System (EFS)"
+linkTitle: "Elastic File System (EFS)"
+description: Get started with Elastic File System (EFS) on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Elastic File System (EFS) is a fully managed file storage service provided by Amazon Web Services (AWS).
+EFS offers scalable and shared file storage that can be accessed by multiple EC2 instances and on-premises servers simultaneously.
+EFS uses the Network File System (NFS) protocol, allowing it to serve as a shared data source for various applications and workloads.
+
+LocalStack allows you to use the EFS APIs in your local environment to create local file systems, lifecycle configurations, and file system policies.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_efs" >}}), which provides information on the extent of EFS's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Elastic File System and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a file system, apply an IAM resource-based policy, and create a lifecycle configuration using the AWS CLI.
+
+### Create a filesystem
+
+To create a new, empty file system you can use the [`CreateFileSystem`](https://docs.aws.amazon.com/goto/WebAPI/elasticfilesystem-2015-02-01/CreateFileSystem) API.
+Run the following command to create a new file system:
+
+{{< command >}}
+$ awslocal efs create-file-system \
+ --performance-mode generalPurpose \
+ --throughput-mode bursting \
+ --encrypted \
+ --tags Key=Name,Value=my-file-system
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "CreationToken": "53465731-0032-4cef-92f5-8aefe7c7b91e",
+ "FileSystemId": "fs-34feac549e66b814",
+ "FileSystemArn": "arn:aws:elasticfilesystem:us-east-1:000000000000:file-system/fs-34feac549e66b814",
+ "CreationTime": 1692808338.424,
+ "LifeCycleState": "available",
+ "PerformanceMode": "generalPurpose",
+ "Encrypted": true,
+ "ThroughputMode": "bursting",
+ "Tags": [
+ {
+ "Key": "Name",
+ "Value": "my-file-system"
+ }
+ ]
+}
+```
+
+You can also describe the locally available file systems using the [`DescribeFileSystems`](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystems.html) API.
+Run the following command to describe the local file systems available:
+
+{{< command >}}
+$ awslocal efs describe-file-systems
+{{< /command >}}
+
+You can alternatively pass the `--file-system-id` parameter to the `describe-file-systems` command to retrieve information about a specific file system in the AWS CLI.
+
+### Put file system policy
+
+You can apply an EFS `FileSystemPolicy` to an EFS file system using the [`PutFileSystemPolicy`](https://docs.aws.amazon.com/efs/latest/ug/API_PutFileSystemPolicy.html) API.
+Run the following command to apply a policy to the file system created in the previous step:
+
+{{< command >}}
+$ awslocal efs put-file-system-policy \
+ --file-system-id <file-system-id> \
+ --policy "{\"Version\":\"2012-10-17\",\"Id\":\"ExamplePolicy01\",\"Statement\":[{\"Sid\":\"ExampleStatement01\",\"Effect\":\"Allow\",\"Principal\":{\"AWS\":\"*\"},\"Action\":[\"elasticfilesystem:ClientMount\",\"elasticfilesystem:ClientWrite\"],\"Resource\":\"arn:aws:elasticfilesystem:us-east-1:000000000000:file-system/fs-34feac549e66b814\"}]}"
+{{< /command >}}
+
+You can list the file system policies using the [`DescribeFileSystemPolicy`](https://docs.aws.amazon.com/efs/latest/ug/API_DescribeFileSystemPolicy.html) API.
+Run the following command to list the file system policies:
+
+{{< command >}}
+$ awslocal efs describe-file-system-policy \
+ --file-system-id <file-system-id>
+{{< /command >}}
+
+Replace `<file-system-id>` with the ID of the file system you want to list the policies for.
+The output will return the `FileSystemPolicy` for the specified EFS file system.
+
+### Create a lifecycle configuration
+
+You can create a lifecycle configuration for an EFS file system using the [`PutLifecycleConfiguration`](https://docs.aws.amazon.com/efs/latest/ug/API_PutLifecycleConfiguration.html) API.
+Run the following command to create a lifecycle configuration for the file system created in the previous step:
+
+{{< command >}}
+$ awslocal efs put-lifecycle-configuration \
+ --file-system-id <file-system-id> \
+ --lifecycle-policies "{\"TransitionToIA\":\"AFTER_30_DAYS\"}"
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "LifecyclePolicies": [
+ {
+ "TransitionToIA": "AFTER_30_DAYS"
+ }
+ ]
+}
+```
+
+## Current Limitations
+
+LocalStack's EFS implementation is limited and lacks support for functionalities like creating mount targets, configuring access points, and generating tags.
+LocalStack uses Moto to emulate the EFS APIs, and efforts are underway to incorporate support for these features in upcoming updates.
diff --git a/src/content/docs/aws/services/eks.md b/src/content/docs/aws/services/eks.md
new file mode 100644
index 00000000..53518df5
--- /dev/null
+++ b/src/content/docs/aws/services/eks.md
@@ -0,0 +1,584 @@
+---
+title: "Elastic Kubernetes Service (EKS)"
+linkTitle: "Elastic Kubernetes Service (EKS)"
+description: Get started with Elastic Kubernetes Service (EKS) on LocalStack
+tags: ["Ultimate"]
+persistence: supported with limitations
+---
+
+## Introduction
+
+Elastic Kubernetes Service (EKS) is a managed Kubernetes service that makes it easy to run Kubernetes on AWS without installing, operating, and maintaining your own Kubernetes control plane or worker nodes.
+Kubernetes is an open-source system for automating containerized applications' deployment, scaling, and management.
+
+LocalStack allows you to use the EKS APIs in your local environment to spin up embedded Kubernetes clusters in your local Docker engine or use an existing Kubernetes installation you can access from your local machine (defined in `$HOME/.kube/config`).
+The supported APIs are available on our [API coverage page]({{< ref "coverage_eks" >}}), which provides information on the extent of EKS's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Elastic Kubernetes Service and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+To interact with the Kubernetes cluster, you should also install [`kubectl`](https://kubernetes.io/docs/tasks/tools/).
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can auto-install an embedded Kubernetes cluster, configure ingress, and deploy a sample service with ECR.
+
+### Create an embedded Kubernetes cluster
+
+The default approach for creating Kubernetes clusters using the local EKS API is by setting up an embedded [k3d](https://k3d.io/) kube cluster within Docker.
+LocalStack seamlessly manages the download and installation process, making it hassle-free for users.
+In most cases, the installation is automatic, eliminating the need for any manual customizations.
+
+You can create a new cluster using the [`CreateCluster`](https://docs.aws.amazon.com/eks/latest/APIReference/API_CreateCluster.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal eks create-cluster \
+ --name cluster1 \
+ --role-arn "arn:aws:iam::000000000000:role/eks-role" \
+ --resources-vpc-config "{}"
+{{< /command >}}
+
+You can see an output similar to the following:
+
+```bash
+{
+ "cluster": {
+ "name": "cluster1",
+ "arn": "arn:aws:eks:us-east-1:000000000000:cluster/cluster1",
+ "createdAt": "2022-04-13T16:38:24.850000+02:00",
+ "roleArn": "arn:aws:iam::000000000000:role/eks-role",
+ "resourcesVpcConfig": {},
+ "identity": {
+ "oidc": {
+ "issuer": "https://localhost.localstack.cloud/eks-oidc"
+ }
+ },
+ "status": "CREATING",
+ "clientRequestToken": "cbdf2bb6-fd3b-42b1-afe0-3c70980b5959"
+ }
+}
+```
+
+{{< callout >}}
+When setting up a local EKS cluster, if you encounter a `"status": "FAILED"` in the command output and see `Unable to start EKS cluster` in LocalStack logs, remove or rename the `~/.kube/config` file on your machine and retry.
+CLI versions before `3.7` mount this file automatically, leading EKS to assume you intend to use the specified existing cluster, a feature that has specific requirements.
+{{< /callout >}}
+
+You can use the `docker` CLI to check that some containers have been created:
+
+{{< command >}}
+$ docker ps
+
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+...
+b335f7f089e4 rancher/k3d-proxy:5.0.1-rc.1 "/bin/sh -c nginx-pr…" 1 minute ago Up 1 minute 0.0.0.0:8081->80/tcp, 0.0.0.0:44959->6443/tcp k3d-cluster1-serverlb
+f05770ec8523 rancher/k3s:v1.21.5-k3s2 "/bin/k3s server --t…" 1 minute ago Up 1 minute
+...
+
+{{< / command >}}
+
+After successfully creating and initializing the cluster, we can easily find the server endpoint, using the [`DescribeCluster`](https://docs.aws.amazon.com/eks/latest/APIReference/API_DescribeCluster.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal eks describe-cluster --name cluster1
+
+{
+ "cluster": {
+ "name": "cluster1",
+ "arn": "arn:aws:eks:us-east-1:000000000000:cluster/cluster1",
+ "createdAt": "2022-04-13T17:12:39.738000+02:00",
+ "endpoint": "https://localhost.localstack.cloud:4511",
+ "roleArn": "arn:aws:iam::000000000000:role/eks-role",
+ "resourcesVpcConfig": {},
+ "identity": {
+ "oidc": {
+ "issuer": "https://localhost.localstack.cloud/eks-oidc"
+ }
+ },
+ "status": "ACTIVE",
+ "certificateAuthority": {
+ "data": "..."
+ },
+ "clientRequestToken": "d188f578-b353-416b-b309-5d8c76ecc4e2"
+ }
+}
+
+{{< / command >}}
+
+### Utilizing ECR Images within EKS
+
+You can now use ECR (Elastic Container Registry) images within your EKS environment.
+
+#### Initial configuration
+
+To modify the return value of resource URIs for most services, including ECR, you can utilize the `LOCALSTACK_HOST` variable in the [configuration]({{< ref "configuration" >}}).
+By default, ECR returns a `repositoryUri` starting with `localhost.localstack.cloud`, such as: `localhost.localstack.cloud:<port>/<repository-name>`.
+
+{{< callout >}}
+In this section, we assume that `localhost.localstack.cloud` resolves in your environment, and LocalStack is connected to a non-default bridge network.
+For more information, refer to the article about [DNS rebind protection]({{< ref "dns-server#dns-rebind-protection" >}}).
+
+If the domain `localhost.localstack.cloud` does not resolve on your host, you can still proceed by setting `LOCALSTACK_HOST=localhost` (not recommended).
+
+LocalStack will take care of the DNS resolution of `localhost.localstack.cloud` within ECR itself, allowing you to use the `localhost:<port>/<repository-name>` URI for tagging and pushing the image on your host.
+{{< /callout >}}
+
+Once you have configured this correctly, you can seamlessly use your ECR image within EKS as expected.
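+
+As a quick sketch, the fallback mentioned in the callout above can be applied at startup like this:
+
+```bash
+# Make ECR return repository URIs based on "localhost" (not recommended)
+LOCALSTACK_HOST=localhost localstack start
+```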
+
+#### Deploying a sample application from an ECR image
+
+To showcase this behavior, let's walk through a concise step-by-step guide that ends with successfully pulling an image from the local ECR.
+For the purpose of this guide, we will retag the `nginx` image, push it to a local ECR repository under a different name, and then use it in a pod configuration.
+
+You can create a new ECR repository using the [`CreateRepository`](https://docs.aws.amazon.com/AmazonECR/latest/APIReference/API_CreateRepository.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal ecr create-repository --repository-name "fancier-nginx"
+
+{
+ "repository": {
+ "repositoryArn": "arn:aws:ecr:us-east-1:000000000000:repository/fancier-nginx",
+ "registryId": "c75fd0e2",
+ "repositoryName": "fancier-nginx",
+ "repositoryUri": "000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx",
+ "createdAt": "2022-04-13T14:22:47+02:00",
+ "imageTagMutability": "MUTABLE",
+ "imageScanningConfiguration": {
+ "scanOnPush": false
+ },
+ "encryptionConfiguration": {
+ "encryptionType": "AES256"
+ }
+ }
+}
+
+{{< / command >}}
+
+You can now pull the `nginx` image from Docker Hub using the `docker` CLI:
+
+{{< command >}}
+$ docker pull nginx
+{{< / command >}}
+
+You can further tag the image to be pushed to ECR:
+
+{{< command >}}
+$ docker tag nginx 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx
+{{< / command >}}
+
+Finally, you can push the image to local ECR:
+
+{{< command >}}
+$ docker push 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx
+{{< / command >}}
+
+Now, let us set up the EKS cluster using the image pushed to local ECR.
+
+Next, we can configure `kubectl` to use the EKS cluster via the [`update-kubeconfig`](https://docs.aws.amazon.com/cli/latest/reference/eks/update-kubeconfig.html) CLI command.
+Run the following command:
+
+{{< command >}}
+$ awslocal eks update-kubeconfig --name cluster1 && \
+ kubectl config use-context arn:aws:eks:us-east-1:000000000000:cluster/cluster1
+
+...
+Added new context arn:aws:eks:us-east-1:000000000000:cluster/cluster1 to /home/localstack/.kube/config
+Switched to context "arn:aws:eks:us-east-1:000000000000:cluster/cluster1".
+...
+
+{{< / command >}}
+
+You can now go ahead and apply a pod configuration for the `fancier-nginx` image.
+
+{{< command >}}
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Pod
+metadata:
+  name: fancier-nginx
+  labels:
+    app: fancier-nginx
+spec:
+  containers:
+  - name: fancier-nginx
+    image: 000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx:latest
+    ports:
+    - containerPort: 80
+EOF
+{{< / command >}}
+
+You can now describe the pod to see if the image was pulled successfully:
+
+{{< command >}}
+$ kubectl describe pod fancier-nginx
+{{< / command >}}
+
+In the events, we can see that the pull from ECR was successful:
+
+```bash
+ Normal Pulled 10s kubelet Successfully pulled image "000000000000.dkr.ecr.us-east-1.localhost.localstack.cloud:4566/fancier-nginx:latest" in 2.412775896s
+```
+
+{{< callout "tip" >}}
+Public Docker images from `registry.k8s.io` can be pulled without additional configuration from EKS nodes, but if you pull images from any other locations that resolve to S3, you can configure `DNS_NAME_PATTERNS_TO_RESOLVE_UPSTREAM=\.s3.*\.amazonaws\.com` in your [configuration]({{< ref "configuration" >}}).
+{{< /callout >}}
+
+### Configuring an Ingress for your services
+
+To make an EKS service externally accessible, it is necessary to create an Ingress configuration, which exposes the service on a specific path to the load balancer.
+
+For our sample deployment, we can create an `nginx` Kubernetes service by applying the following configuration:
+
+{{< command >}}
+$ cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: Service
+metadata:
+  name: nginx
+spec:
+  selector:
+    app: fancier-nginx
+  ports:
+  - port: 80
+    targetPort: 80
+    protocol: TCP
+EOF
+{{< / command >}}
+
+Use the following ingress configuration to expose the `nginx` service on path `/test123`:
+
+{{< command >}}
+$ cat <<EOF | kubectl apply -f -
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: nginx
+  annotations:
+    ingress.kubernetes.io/ssl-redirect: "false"
+spec:
+  rules:
+  - http:
+      paths:
+      - path: /test123
+        pathType: Prefix
+        backend:
+          service:
+            name: nginx
+            port:
+              number: 80
+EOF
+{{< / command >}}
+
+You will be able to send a request to `nginx` via the load balancer port `8081` from the host:
+
+{{< command >}}
+$ curl http://localhost:8081/test123
+
+...
+<center>nginx/1.21.6</center>
+...
+
+{{< / command >}}
+
+{{< callout "tip" >}}
+You can customize the Load Balancer port by configuring `EKS_LOADBALANCER_PORT` in your environment.
+{{< /callout >}}
+
+### Enabling HTTPS with local SSL/TLS certificate for the Ingress
+
+To enable HTTPS for your endpoints, you can configure Kubernetes to use SSL/TLS with the [certificate for local domain names](https://github.com/localstack/localstack-artifacts/blob/master/local-certs/server.key) `*.localhost.localstack.cloud`.
+
+The local EKS cluster comes pre-configured with a secret named `ls-secret-tls`, which can be conveniently utilized to define the `tls` section in the ingress configuration:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+ name: test-ingress
+ annotations:
+ ingress.kubernetes.io/ssl-redirect: "false"
+ traefik.ingress.kubernetes.io/router.entrypoints: web,websecure
+ traefik.ingress.kubernetes.io/router.tls: "true"
+spec:
+ tls:
+ - secretName: ls-secret-tls
+ hosts:
+ - myservice.localhost.localstack.cloud
+ ...
+```
+
+Once you have deployed your service using the mentioned ingress configuration, it will be accessible via the HTTPS endpoint `https://myservice.localhost.localstack.cloud`.
+
+Remember that the ingress controller does not support HTTP/HTTPS multiplexing within the same Ingress.
+Consequently, if you want your service to be accessible via HTTP and HTTPS, you must create two separate Ingress definitions — one Ingress for HTTP and another for HTTPS.
+
+{{< callout >}}
+The `ls-secret-tls` secret is created in the `default` namespace.
+If your ingress and services are residing in a custom namespace, it is essential to copy the secret to that custom namespace to make use of it.
+{{< /callout >}}
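+
+One way to copy the secret is to export it and re-apply it with the namespace rewritten (a sketch; `my-namespace` is an example name):
+
+```bash
+# Export the pre-created TLS secret from "default" and re-create it elsewhere
+kubectl get secret ls-secret-tls --namespace default -o yaml \
+  | sed 's/namespace: default/namespace: my-namespace/' \
+  | kubectl apply -f -
+```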
+
+## Use an existing Kubernetes installation
+
+You can also access the EKS API using your existing local Kubernetes installation.
+This can be achieved by setting the configuration variable `EKS_K8S_PROVIDER=local` and mounting the `$HOME/.kube/config` file into the LocalStack container.
+When using a `docker-compose.yml` file, you need to add a bind mount like this:
+
+```yaml
+volumes:
+ - "${HOME}/.kube/config:/root/.kube/config"
+```
+
+When using the LocalStack CLI, please configure the `DOCKER_FLAGS` to mount the kubeconfig into the container:
+
+{{< command >}}
+$ DOCKER_FLAGS="-v ${HOME}/.kube/config:/root/.kube/config" localstack start
+{{< /command >}}
+
+{{< callout >}}
+Using an existing Kubernetes installation is currently only possible when the authentication with the cluster uses [X509 client certificates](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#x509-client-certificates).
+{{< /callout >}}
+
+In recent versions of Docker, you can enable Kubernetes as an embedded service running inside Docker.
+The picture below illustrates the Kubernetes settings in Docker for macOS (similar configurations apply for Linux/Windows).
+By default, the Kubernetes API is assumed to run on the local TCP port `6443`.
+
+
+
+You can create an EKS Cluster configuration using the following command:
+
+{{< command >}}
+$ awslocal eks create-cluster --name cluster1 --role-arn arn:aws:iam::000000000000:role/eks-role --resources-vpc-config '{}'
+
+{
+ "cluster": {
+ "name": "cluster1",
+ "arn": "arn:aws:eks:eu-central-1:000000000000:cluster/cluster1",
+ "createdAt": "Sat, 05 Oct 2019 12:29:26 GMT",
+ "endpoint": "https://172.17.0.1:6443",
+ "status": "ACTIVE",
+ ...
+ }
+}
+
+{{< / command >}}
+
+And check that it was created with:
+
+{{< command >}}
+$ awslocal eks list-clusters
+
+{
+ "clusters": [
+ "cluster1"
+ ]
+}
+
+{{< / command >}}
+
+To interact with your Kubernetes cluster, configure your Kubernetes client (such as `kubectl` or other SDKs) to point to the `endpoint` provided in the `create-cluster` output mentioned earlier.
+However, depending on whether you're calling the Kubernetes API from your local machine or from within a Lambda function, you might need to use different endpoint URLs.
+
+For local machine interactions, use `https://localhost:6443` as the endpoint URL.
+If you are accessing the Kubernetes API from within a Lambda function, you should use `https://172.17.0.1:6443` as the endpoint URL, assuming that `172.17.0.1` is the IP address of the Docker network bridge.
+
+By using the appropriate endpoint URL based on your context, you can effectively communicate with your Kubernetes cluster and manage your resources as needed.
+
+## Customizing the Kubernetes Load Balancer Ports
+
+By default, the Kubernetes load balancer (LB) is exposed on port `8081`.
+If you need to customize the port or expose the load balancer on multiple ports, you can utilize the special tag name `_lb_ports_` during the cluster creation process.
+
+For instance, if you want to expose the load balancer on ports `8085` and `8086`, you can use the following tag definition when creating the cluster:
+
+{{< command >}}
+$ awslocal eks create-cluster \
+ --name cluster1 \
+ --role-arn arn:aws:iam::000000000000:role/eks-role \
+ --resources-vpc-config '{}' --tags '{"_lb_ports_":"8085,8086"}'
+{{< /command >}}
+
+## Routing Traffic to Services on Different Endpoints
+
+When working with EKS, a common scenario is to access multiple Kubernetes services behind different endpoints.
+
+For instance, you might have multiple microservices, each following a common path versioning scheme, such as API request paths starting with `/v1/...`.
+In such cases, path-based routing may not be ideal if you need the services to be accessible in a uniform manner.
+
+To address this requirement, we recommend utilizing host-based routing rules, as demonstrated in the example below:
+
+{{< command >}}
+$ cat <<EOF | kubectl apply -f -
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: test-ingress
+  annotations:
+    ingress.kubernetes.io/ssl-redirect: "false"
+spec:
+  rules:
+  - host: eks-service-1.localhost.localstack.cloud
+    http:
+      paths:
+      - path: /v1
+        pathType: Prefix
+        backend:
+          service:
+            name: service-1
+            port:
+              number: 80
+  - host: eks-service-2.localhost.localstack.cloud
+    http:
+      paths:
+      - path: /v1
+        pathType: Prefix
+        backend:
+          service:
+            name: service-2
+            port:
+              number: 80
+EOF
+{{< / command >}}
+
+The example defines routing rules for two local endpoints.
+The first rule directs traffic to a service named `service-1`, accessible under the path `/v1`.
+Similarly, the second rule points to a service named `service-2`, also accessible under the same path `/v1`.
+
+This approach enables us to access the two distinct services using the same path and port number, but with different host names.
+This host-based routing mechanism ensures that each service is uniquely identified based on its designated host name, allowing for a uniform and organized way of accessing multiple services within the EKS cluster.
+
+{{< command >}}
+$ curl http://eks-service-1.localhost.localstack.cloud:8081/v1
+
+... [output of service 1]
+
+$ curl http://eks-service-2.localhost.localstack.cloud:8081/v1
+
+... [output of service 2]
+
+{{< /command >}}
+
+It is important to note that the host names `eks-service-1.localhost.localstack.cloud` and `eks-service-2.localhost.localstack.cloud` both resolve to `127.0.0.1` (localhost).
+Consequently, you can utilize them to communicate with your service endpoints and distinguish between different services within the Kubernetes load balancer.
+
+However, you might encounter issues in scenarios where you intend to run your load balancer on standard ports such as `80`/`443`, since some of these ports may already be occupied on your local machine.
+For instance, by default, LocalStack allocates port 443 to expose APIs via the HTTPS endpoint (`https://localhost.localstack.cloud`).
+Hence, it's crucial to ensure that you expose your LB on a custom, non-standard port to prevent conflicts.
+
+Additionally, note that LocalStack EKS employs [Traefik](https://doc.traefik.io/traefik/providers/kubernetes-ingress) as the Kubernetes ingress controller internally.
+
+## Mounting directories from host to pod
+
+If you have specific directories that you want to mount from your local development machine into one of your pods, you can achieve this in two simple steps.
+
+When creating your cluster, include the special tag `_volume_mount_`, which allows you to define the desired volume mounting configuration from your local development machine to the cluster nodes.
+
+{{< command >}}
+$ awslocal eks create-cluster \
+ --name cluster1 \
+ --role-arn arn:aws:iam::000000000000:role/eks-role \
+ --resources-vpc-config '{}' \
+ --tags '{"_volume_mount_":"/path/on/host:/path/on/node"}'
+
+{
+ "cluster": {
+ "name": "cluster1",
+ "arn": "arn:aws:eks:eu-central-1:000000000000:cluster/cluster1",
+ "createdAt": "Sat, 05 Oct 2019 12:29:26 GMT",
+ "endpoint": "https://172.17.0.1:6443",
+ "status": "ACTIVE",
+ "tags": {
+ "_volume_mount_" : "/path/on/host:/path/on/node"
+ }
+ ...
+ }
+}
+
+{{< / command >}}
+
+{{< callout >}}
+Note that the tag was previously referred to as `__k3d_volume_mount__`, but it has now been renamed to `_volume_mount_`.
+As a result, the tag name `__k3d_volume_mount__` is considered deprecated and will be removed in an upcoming release.
+{{< /callout >}}
+
+After creating your cluster with the `_volume_mount_` tag, you can create your pod with volume mounts as usual.
+The configuration for the volume mounts can look like this:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: test
+spec:
+ volumes:
+ - name: example-volume
+ hostPath:
+ path: /path/on/node
+ containers:
+ - image: alpine:3.12
+ command: ["/bin/sh","-c"]
+ args:
+ - echo "Starting the update command";
+ apk update;
+ echo "Adding the openssh command";
+ apk add openssh;
+ echo "openssh completed";
+ sleep 240m;
+ imagePullPolicy: IfNotPresent
+ name: alpine
+ volumeMounts:
+ - mountPath: "/path/on/pod"
+ name: example-volume
+ restartPolicy: Always
+```
+
+## Supported Versions
+
+LocalStack uses [k3s](https://github.com/k3s-io/k3s) under the hood for creating EKS clusters.
+Below is the list of supported Kubernetes versions and their corresponding k3s versions.
+The default version is `1.31`.
+
+| Kubernetes Version | k3s Version | EKS Platform Version |
+|-------------------|-------------------|---------------------|
+| 1.32 | v1.32.1-k3s1 | eks.3 |
+| 1.31 | v1.31.5-k3s1 | eks.19 |
+| 1.30 | v1.30.9-k3s1 | eks.27 |
+| 1.29 | v1.29.13-k3s1 | eks.30 |
+| 1.28 | v1.28.15-k3s1 | eks.36 |
+| 1.27 | v1.27.16-k3s1 | eks.40 |
+| 1.26 | v1.26.15-k3s1 | eks.42 |
+| 1.25 | v1.25.16-k3s4 | eks.42 |
+
+You can specify the desired Kubernetes version for new EKS clusters by setting the `EKS_K3S_IMAGE_TAG` configuration variable when starting LocalStack.
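+
+For example, to pin new clusters to Kubernetes `1.30` using the matching k3s image tag from the table above (a sketch):
+
+```bash
+EKS_K3S_IMAGE_TAG=v1.30.9-k3s1 localstack start
+```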
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing EKS clusters.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **EKS** under the **Compute** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Cluster**: Create a new EKS cluster by clicking on the **Create Cluster** button and providing the cluster name among other details.
+- **View Cluster Details**: View the details of an existing EKS cluster by clicking on the cluster name.
+- **Edit Cluster**: Edit the configuration of an existing EKS cluster by clicking on the **Edit** button while viewing the cluster details.
+- **Delete Cluster**: Select the cluster name and click on the **Actions** button followed by **Remove Selected** button.
diff --git a/src/content/docs/aws/services/elasticache.md b/src/content/docs/aws/services/elasticache.md
new file mode 100644
index 00000000..0904ea10
--- /dev/null
+++ b/src/content/docs/aws/services/elasticache.md
@@ -0,0 +1,135 @@
+---
+title: "ElastiCache"
+linkTitle: "ElastiCache"
+tags: ["Base"]
+description: Get started with AWS ElastiCache on LocalStack
+persistence: supported
+---
+
+## Introduction
+
+Amazon ElastiCache is a managed in-memory caching service provided by Amazon Web Services (AWS).
+It facilitates the deployment and operation of in-memory caches within the AWS cloud environment.
+ElastiCache is designed to improve application performance and scalability by alleviating the workload on backend databases.
+It supports popular open-source caching engines like Redis and Memcached (LocalStack currently supports Redis),
+providing a means to efficiently store and retrieve frequently accessed data with minimal latency.
+
+LocalStack supports ElastiCache via the Pro offering, allowing you to use the ElastiCache APIs in your local environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "references/coverage/coverage_elasticache" >}}),
+which provides information on the extent of ElastiCache integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to ElastiCache and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.
+
+### Single cache cluster
+
+After starting LocalStack Pro, you can create a cluster with the following command.
+
+{{< command >}}
+$ awslocal elasticache create-cache-cluster \
+ --cache-cluster-id my-redis-cluster \
+ --cache-node-type cache.t2.micro \
+ --engine redis \
+ --num-cache-nodes 1
+{{< /command >}}
+
+Wait for it to be available, then you can use the cluster endpoint for Redis operations.
+
+{{< command >}}
+$ awslocal elasticache describe-cache-clusters --show-cache-node-info --query "CacheClusters[0].CacheNodes[0].Endpoint"
+{
+ "Address": "localhost.localstack.cloud",
+ "Port": 4510
+}
+{{< /command >}}
+
+The cache cluster uses a random port from the [external service port range]({{< ref "external-ports" >}}).
+Use this port number to connect to the Redis instance like so:
+
+{{< command >}}
+$ redis-cli -p 4510 ping
+PONG
+$ redis-cli -p 4510 set foo bar
+OK
+$ redis-cli -p 4510 get foo
+"bar"
+{{< / command >}}
+
+### Replication groups in non-cluster mode
+
+Create a replication group with one primary node and two replicas:
+
+{{< command >}}
+$ awslocal elasticache create-replication-group \
+ --replication-group-id my-redis-replication-group \
+ --replication-group-description 'my replication group' \
+ --engine redis \
+ --cache-node-type cache.t2.micro \
+ --num-cache-clusters 3
+{{< /command >}}
+
+Wait for it to be available.
+You should see one node group when running the following command:
+
+{{< command >}}
+$ awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group
+{{< /command >}}
+
+To retrieve the primary endpoint:
+
+{{< command >}}
+$ awslocal elasticache describe-replication-groups --replication-group-id my-redis-replication-group \
+ --query "ReplicationGroups[0].NodeGroups[0].PrimaryEndpoint"
+{{< /command >}}
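+
+Assuming the primary endpoint is reported on port `4511` (an example value; ports are assigned from the external service port range), you can connect to the primary node directly:
+
+```bash
+redis-cli -p 4511 ping
+```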
+
+### Replication groups in cluster mode
+
+Cluster mode is enabled by using `--num-node-groups` and `--replicas-per-node-group`:
+
+{{< command >}}
+$ awslocal elasticache create-replication-group \
+ --engine redis \
+ --replication-group-id my-clustered-redis-replication-group \
+ --replication-group-description 'my clustered replication group' \
+ --cache-node-type cache.t2.micro \
+ --num-node-groups 2 \
+ --replicas-per-node-group 2
+{{< /command >}}
+
+Note that the node groups do not have a primary endpoint.
+Instead, they have a `ConfigurationEndpoint`, which you can connect to using `redis-cli -c`, where `-c` enables cluster mode.
+
+{{< command >}}
+$ awslocal elasticache describe-replication-groups --replication-group-id my-clustered-redis-replication-group \
+ --query "ReplicationGroups[0].ConfigurationEndpoint"
+{{< /command >}}
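+
+Assuming the `ConfigurationEndpoint` reports port `4512` (an example value), you can connect in cluster mode and let the client follow slot redirections:
+
+```bash
+redis-cli -c -p 4512 set foo bar
+redis-cli -c -p 4512 get foo
+```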
+
+## Container mode
+
+In order to start Redis clusters of a specific version, you need to use the container mode for Redis-based services.
+This instructs LocalStack to start Redis instances in a separate container using the specified image tag.
+Another reason you might want to use the container mode is to check the logs of every Redis instance separately.
+
+To do this, you can set the `REDIS_CONTAINER_MODE` configuration variable to `1`.
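+
+For example, with the LocalStack CLI:
+
+```bash
+# Start every Redis instance in a separate container; inspect each one's logs
+# with `docker logs <container>` afterwards
+REDIS_CONTAINER_MODE=1 localstack start
+```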
+
+## Resource browser
+
+The LocalStack Web Application provides a Resource Browser for managing ElastiCache resources.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **ElastiCache**.
+
+In the ElastiCache resource browser you can:
+
+* List and remove existing cache clusters
+  {{< img src="elasticache-resource-browser-list.png" alt="List ElastiCache clusters in the resource browser" >}}
+* View details of cache clusters
+  {{< img src="elasticache-resource-browser-show.png" alt="View details of an ElastiCache cluster in the resource browser" >}}
+* Create new cache clusters
+  {{< img src="elasticache-resource-browser-create.png" alt="Create an ElastiCache cluster in the resource browser" >}}
+
+## Current Limitations
+
+LocalStack currently supports Redis in single-node and cluster mode, but not Memcached.
+Moreover, LocalStack emulation support for ElastiCache is mostly centered around starting/stopping Redis servers.
+
+Resources necessary to operate a cluster, like parameter groups, security groups, subnet groups, etc., are mocked, but have no effect on the functioning of the Redis servers.
+
+LocalStack currently doesn't support ElastiCache snapshots, failovers, users/passwords, service updates, replication scaling, SSL, migrations, service integrations (like CloudWatch/Kinesis log delivery and SNS notifications), or tests.
diff --git a/src/content/docs/aws/services/elasticbeanstalk.md b/src/content/docs/aws/services/elasticbeanstalk.md
new file mode 100644
index 00000000..95950ae4
--- /dev/null
+++ b/src/content/docs/aws/services/elasticbeanstalk.md
@@ -0,0 +1,122 @@
+---
+title: "Elastic Beanstalk"
+linkTitle: "Elastic Beanstalk"
+description: >
+ Get started with Elastic Beanstalk (EB) on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Elastic Beanstalk (EB) is a managed platform-as-a-service (PaaS) provided by Amazon Web Services (AWS) that simplifies the process of deploying, managing, and scaling web applications and services.
+Elastic Beanstalk orchestrates various AWS services, including EC2, S3, SNS, and Elastic Load Balancers.
+Elastic Beanstalk also supports various application environments, such as Java, .NET, Node.js, PHP, Python, Ruby, Go, and Docker.
+
+LocalStack allows you to use the Elastic Beanstalk APIs in your local environment to create and manage applications, environments and versions.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_elasticbeanstalk" >}}), which provides information on the extent of Elastic Beanstalk's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Elastic Beanstalk and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an Elastic Beanstalk application and environment with the AWS CLI.
+
+### Create an application
+
+To create an Elastic Beanstalk application, you can use the [`CreateApplication`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_CreateApplication.html) API.
+Run the following command to create an application named `my-app`:
+
+{{< command >}}
+$ awslocal elasticbeanstalk create-application \
+ --application-name my-app
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Application": {
+ "ApplicationArn": "arn:aws:elasticbeanstalk:us-east-1:000000000000:application/my-app",
+ "ApplicationName": "my-app",
+ "DateCreated": "2023-08-24T05:55:57.603443Z"
+ }
+}
+```
+
+You can also use the [`DescribeApplications`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_DescribeApplications.html) API to retrieve information about your application.
+Run the following command to retrieve information about the `my-app` application we created earlier:
+
+{{< command >}}
+$ awslocal elasticbeanstalk describe-applications \
+ --application-names my-app
+{{< /command >}}
+
+### Create an environment
+
+To create an Elastic Beanstalk environment, you can use the [`CreateEnvironment`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_CreateEnvironment.html) API.
+Run the following command to create an environment named `my-environment`:
+
+{{< command >}}
+$ awslocal elasticbeanstalk create-environment \
+ --application-name my-app \
+ --environment-name my-environment
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "EnvironmentName": "my-environment",
+ "EnvironmentId": "4fcae3fb",
+ "ApplicationName": "my-app",
+ "DateCreated": "2023-08-24T05:57:59.889966Z",
+ "EnvironmentArn": "arn:aws:elasticbeanstalk:us-east-1:000000000000:applicationversion/my-app/version"
+}
+```
+
+You can also use the [`DescribeEnvironments`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_DescribeEnvironments.html) API to retrieve information about your environment.
+Run the following command to retrieve information about the `my-environment` environment we created earlier:
+
+{{< command >}}
+$ awslocal elasticbeanstalk describe-environments \
+ --environment-names my-environment
+{{< /command >}}
+
+### Create an application version
+
+To create an Elastic Beanstalk application version, you can use the [`CreateApplicationVersion`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_CreateApplicationVersion.html) API.
+Run the following command to create an application version named `v1`:
+
+{{< command >}}
+$ awslocal elasticbeanstalk create-application-version \
+ --application-name my-app \
+ --version-label v1
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "ApplicationVersion": {
+ "ApplicationVersionArn": "arn:aws:elasticbeanstalk:us-east-1:000000000000:applicationversion/my-app/v1",
+ "ApplicationName": "my-app",
+ "VersionLabel": "v1",
+ "DateCreated": "2023-08-24T05:59:58.166021Z"
+ }
+}
+```
+
+You can also use the [`DescribeApplicationVersions`](https://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_DescribeApplicationVersions.html) API to retrieve information about your application version.
+Run the following command to retrieve information about the `v1` application version we created earlier:
+
+{{< command >}}
+$ awslocal elasticbeanstalk describe-application-versions \
+ --application-name my-app
+{{< /command >}}
+
+## Current Limitations
+
+LocalStack's Elastic Beanstalk implementation is limited and lacks support for installing applications and running them in a local Elastic Beanstalk environment.
+LocalStack also does not support the [`eb`](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3.html) CLI tool.
+However, you can use other integrations, such as AWS CLI & Terraform, to mock the Elastic Beanstalk APIs and test your workflow locally.
diff --git a/src/content/docs/aws/services/elastictranscoder.md b/src/content/docs/aws/services/elastictranscoder.md
new file mode 100644
index 00000000..4094f454
--- /dev/null
+++ b/src/content/docs/aws/services/elastictranscoder.md
@@ -0,0 +1,156 @@
+---
+title: "Elastic Transcoder"
+linkTitle: "Elastic Transcoder"
+description: Get started with Elastic Transcoder on LocalStack
+tags: ["Base"]
+---
+
+## Introduction
+
+Elastic Transcoder is a managed service that facilitates the transcoding of multimedia files into various formats to ensure compatibility across devices.
+Elastic Transcoder manages the underlying resources, ensuring high availability and fault tolerance.
+It also supports a wide range of input and output formats, enabling users to efficiently process and deliver video content at scale.
+
+LocalStack allows you to mock the Elastic Transcoder APIs in your local environment.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_elastictranscoder" >}}), which provides information on the extent of Elastic Transcoder's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Elastic Transcoder and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an Elastic Transcoder pipeline, read the pipeline, and list all pipelines using the AWS CLI.
+
+### Create S3 buckets
+
+You can create S3 buckets using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command.
+Execute the following command to create two buckets named `elasticbucket` and `outputbucket`:
+
+{{< command >}}
+$ awslocal s3 mb s3://elasticbucket
+$ awslocal s3 mb s3://outputbucket
+{{< /command >}}
+
+### Create an Elastic Transcoder pipeline
+
+You can create an Elastic Transcoder pipeline using the [`CreatePipeline`](https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/create-pipeline.html) API.
+Execute the following command to create a pipeline named `Default`:
+
+{{< command >}}
+$ awslocal elastictranscoder create-pipeline \
+ --name Default \
+ --input-bucket elasticbucket \
+ --output-bucket outputbucket \
+ --role arn:aws:iam::000000000000:role/Elastic_Transcoder_Default_Role
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Pipeline": {
+ "Id": "0998507242379-vltecz",
+ "Arn": "arn:aws:elastictranscoder:us-east-1:000000000000:pipeline/0998507242379-vltecz",
+ "Name": "Default",
+ "Status": "Active",
+ "InputBucket": "elasticbucket",
+ "OutputBucket": "outputbucket",
+ "Role": "arn:aws:iam::000000000000:role/Elastic_Transcoder_Default_Role",
+ "Notifications": {
+ "Progressing": "",
+ "Completed": "",
+ "Warning": "",
+ "Error": ""
+ },
+ "ContentConfig": {
+ "Bucket": "outputbucket",
+ "Permissions": []
+ },
+ "ThumbnailConfig": {
+ "Bucket": "outputbucket",
+ "Permissions": []
+ }
+ },
+ "Warnings": []
+}
+```
+
+### List the pipelines
+
+You can list all pipelines using the [`ListPipelines`](https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/list-pipelines.html) API.
+Execute the following command to list all pipelines:
+
+{{< command >}}
+$ awslocal elastictranscoder list-pipelines
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Pipelines": [
+ {
+ "Id": "0998507242379-vltecz",
+ "Arn": "arn:aws:elastictranscoder:us-east-1:000000000000:pipeline/0998507242379-vltecz",
+ "Name": "Default",
+ "Status": "Active",
+ "InputBucket": "elasticbucket",
+ "OutputBucket": "outputbucket",
+ "Role": "arn:aws:iam::000000000000:role/Elastic_Transcoder_Default_Role",
+ "Notifications": {
+ "Progressing": "",
+ "Completed": "",
+ "Warning": "",
+ "Error": ""
+ },
+ "ContentConfig": {
+ "Bucket": "outputbucket",
+ "Permissions": []
+ },
+ "ThumbnailConfig": {
+ "Bucket": "outputbucket",
+ "Permissions": []
+ }
+ }
+ ]
+}
+```
+
+### Read the pipeline
+
+You can read a pipeline using the [`ReadPipeline`](https://docs.aws.amazon.com/elastictranscoder/latest/developerguide/read-pipeline.html) API.
+Execute the following command to read the pipeline with the ID `0998507242379-vltecz`:
+
+{{< command >}}
+$ awslocal elastictranscoder read-pipeline --id 0998507242379-vltecz
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Pipeline": {
+ "Id": "0998507242379-vltecz",
+ "Arn": "arn:aws:elastictranscoder:us-east-1:000000000000:pipeline/0998507242379-vltecz",
+ "Name": "Default",
+ "Status": "Active",
+ "InputBucket": "elasticbucket",
+ "OutputBucket": "outputbucket",
+ "Role": "arn:aws:iam::000000000000:role/Elastic_Transcoder_Default_Role",
+ "Notifications": {
+ "Progressing": "",
+ "Completed": "",
+ "Warning": "",
+ "Error": ""
+ },
+ "ContentConfig": {
+ "Bucket": "outputbucket",
+ "Permissions": []
+ },
+ "ThumbnailConfig": {
+ "Bucket": "outputbucket",
+ "Permissions": []
+ }
+ }
+}
+```
diff --git a/src/content/docs/aws/services/elb.md b/src/content/docs/aws/services/elb.md
new file mode 100644
index 00000000..ba65a550
--- /dev/null
+++ b/src/content/docs/aws/services/elb.md
@@ -0,0 +1,183 @@
+---
+title: "Elastic Load Balancing (ELB)"
+linkTitle: "Elastic Load Balancing (ELB)"
+description: Get started with Elastic Load Balancing (ELB) on LocalStack
+tags: ["Base"]
+---
+
+## Introduction
+
+Elastic Load Balancing (ELB) is a service that allows users to distribute incoming traffic across multiple targets, such as EC2 instances, containers, IP addresses, and Lambda functions, and automatically scales its request handling capacity in response to incoming traffic.
+It also monitors the health of its registered targets and ensures that it routes traffic only to healthy targets.
+You can check [the official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) to understand the basic terms and concepts used in ELB.
+
+LocalStack allows you to use the Elastic Load Balancing APIs in your local environment to create, edit, and view load balancers, target groups, listeners, and rules.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_elbv2" >}}), which provides information on the extent of ELB's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Elastic Load Balancing and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an Application Load Balancer, along with its target group, listener, and rule, and forward requests to an IP target.
+
+### Start a target server
+
+Launch an HTTP server which will serve as the target for our load balancer.
+
+{{< command >}}
+$ docker run --rm -itd -p 5678:80 ealen/echo-server
+{{< /command >}}
+
+### Create a load balancer
+
+To specify the subnet and VPC in which the load balancer will be created, you can use the [`DescribeSubnets`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_DescribeSubnets.html) API to retrieve the subnet ID and VPC ID.
+In this example, we will use the subnet and VPC in the `us-east-1f` availability zone.
+
+{{< command >}}
+$ subnet_info=$(awslocal ec2 describe-subnets --filters Name=availability-zone,Values=us-east-1f \
+ | jq -r '.Subnets[] | select(.AvailabilityZone == "us-east-1f") | {SubnetId: .SubnetId, VpcId: .VpcId}')
+
+$ subnet_id=$(echo $subnet_info | jq -r '.SubnetId')
+
+$ vpc_id=$(echo $subnet_info | jq -r '.VpcId')
+{{< /command >}}
+
+To create a load balancer, you can use the [`CreateLoadBalancer`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateLoadBalancer.html) API.
+The following command creates an Application Load Balancer named `example-lb`:
+
+{{< command >}}
+$ loadBalancer=$(awslocal elbv2 create-load-balancer --name example-lb \
+ --subnets $subnet_id | jq -r '.LoadBalancers[]|.LoadBalancerArn')
+{{< /command >}}
+
+### Create a target group
+
+To create a target group, you can use the [`CreateTargetGroup`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateTargetGroup.html) API.
+The following command creates a target group named `example-target-group`:
+
+{{< command >}}
+$ targetGroup=$(awslocal elbv2 create-target-group --name example-target-group \
+ --protocol HTTP --target-type ip --port 80 --vpc-id $vpc_id \
+ | jq -r '.TargetGroups[].TargetGroupArn')
+{{< /command >}}
+
+### Register a target
+
+To register a target, you can use the [`RegisterTargets`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_RegisterTargets.html) API.
+The following command registers the target with the target group created in the previous step:
+
+{{< command >}}
+$ awslocal elbv2 register-targets --targets Id=127.0.0.1,Port=5678,AvailabilityZone=all \
+ --target-group-arn $targetGroup
+{{< /command >}}
+
+{{< callout >}}
+Note that in some cases the `targets` parameter `Id` can be the `Gateway` address of the Docker container.
+You can find the gateway address by running `docker inspect <container-id>`.
+{{< /callout >}}
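+
+For example, the following sketch extracts the gateway address directly, assuming `<container-id>` is the ID printed by the `docker run` command above:
+
+{{< command >}}
+$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}' <container-id>
+{{< /command >}}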
+
+### Create a listener and a rule
+
+We create a listener for the load balancer using the [`CreateListener`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateListener.html) API.
+The following command creates a listener for the load balancer created in the previous step:
+
+{{< command >}}
+$ listenerArn=$(awslocal elbv2 create-listener \
+ --protocol HTTP \
+ --port 80 \
+ --default-actions '{"Type":"forward","TargetGroupArn":"'$targetGroup'","ForwardConfig":{"TargetGroups":[{"TargetGroupArn":"'$targetGroup'","Weight":11}]}}' \
+ --load-balancer-arn $loadBalancer | jq -r '.Listeners[]|.ListenerArn')
+{{< /command >}}
+
+To create a rule for the listener, you can use the [`CreateRule`](https://docs.aws.amazon.com/elasticloadbalancing/latest/APIReference/API_CreateRule.html) API.
+The following command creates a rule for the listener created above:
+
+{{< command >}}
+$ listenerRule=$(awslocal elbv2 create-rule \
+ --conditions Field=path-pattern,Values=/ \
+ --priority 1 \
+ --actions '{"Type":"forward","TargetGroupArn":"'$targetGroup'","ForwardConfig":{"TargetGroups":[{"TargetGroupArn":"'$targetGroup'","Weight":11}]}}' \
+ --listener-arn $listenerArn \
+ | jq -r '.Rules[].RuleArn')
+{{< /command >}}
+
+### Send a request to the load balancer
+
+Finally, you can issue an HTTP request against the `DNSName` returned by the `CreateLoadBalancer` operation, on the `Port` configured in the `CreateListener` call:
+
+{{< command >}}
+$ curl example-lb.elb.localhost.localstack.cloud:4566
+{{< /command >}}
+
+The following output will be retrieved:
+
+```bash
+{
+ "host": {
+ "hostname": "example-lb.elb.localhost.localstack.cloud",
+ "ip": "::ffff:172.17.0.1",
+ "ips": []
+ },
+ "http": {
+ "method": "GET",
+ "baseUrl": "",
+ "originalUrl": "/",
+ "protocol": "http"
+ },
+ "request": {
+ "params": {
+ "0": "/"
+ },
+ "query": {},
+ "cookies": {},
+ "body": {},
+ "headers": {
+ "accept-encoding": "identity",
+ "host": "example-lb.elb.localhost.localstack.cloud:4566",
+ "user-agent": "curl/7.88.1",
+ "accept": "*/*"
+ }
+ },
+ "environment": {
+ "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
+ "HOSTNAME": "bee08b83d633",
+ "TERM": "xterm",
+ "NODE_VERSION": "18.17.1",
+ "YARN_VERSION": "1.22.19",
+ "HOME": "/root"
+ }
+}
+```
+
+#### Alternative URL structure
+
+If a request cannot be made to a subdomain of `localhost.localstack.cloud`, an alternative URL structure is available; note, however, that it is not returned by AWS management API methods.
+To make a request against an ELB named `<lb-name>`, use the URL:
+
+```bash
+http(s)://localhost.localstack.cloud:4566/_aws/elb/<lb-name>/<path>
+```
+
+Here's an example of how you would access a load balancer named `example-lb` with the subdomain-based URL format:
+
+```bash
+http(s)://example-lb.elb.localhost.localstack.cloud:4566/test/path
+```
+
+With the alternative URL structure:
+
+```bash
+http(s)://localhost.localstack.cloud:4566/_aws/elb/example-lb/test/path
+```
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use ELB in LocalStack for various use cases:
+
+- [Setting up Elastic Load Balancing (ELB) Application Load Balancers using LocalStack, deployed via the Serverless framework]({{< ref "/tutorials/elb-load-balancing" >}})
+
+## Current Limitations
+
+- The Application Load Balancer currently supports only the `forward`, `redirect`, and `fixed-response` action types.
+- When using Route53 CNAMEs to direct requests to ALBs, you may need to explicitly set the `Host` header to match the resource record when making calls, as shown in the sketch below.
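+
+A minimal sketch of such a call, assuming a hypothetical Route53 CNAME record `my-alb.example.com` that points at the load balancer:
+
+{{< command >}}
+$ curl -H "Host: my-alb.example.com" http://localhost:4566/test/path
+{{< /command >}}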
diff --git a/src/content/docs/aws/services/elementalmediaconvert.md b/src/content/docs/aws/services/elementalmediaconvert.md
new file mode 100644
index 00000000..662330d7
--- /dev/null
+++ b/src/content/docs/aws/services/elementalmediaconvert.md
@@ -0,0 +1,200 @@
+---
+title: "Elemental MediaConvert"
+linkTitle: "Elemental MediaConvert"
+description: Get started with Elemental MediaConvert on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Elemental MediaConvert is a file-based video transcoding service with broadcast-grade features.
+It enables you to easily create high-quality video streams for broadcast and multiscreen delivery.
+
+LocalStack allows you to mock the MediaConvert APIs in your local environment.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_mediaconvert" >}}), which provides information on the extent of MediaConvert's integration with LocalStack.
+
+{{< callout "note" >}}
+Elemental MediaConvert is in a preview state.
+{{< /callout >}}
+
+## Getting started
+
+This guide is designed for users new to Elemental MediaConvert and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a MediaConvert job, list jobs, create a queue, and list all queues using the AWS CLI.
+
+### Create a job
+
+Create a new file named `job.json` on your local directory:
+
+```json
+{
+ "Role": "arn:aws:iam::000000000000:role/MediaConvert_Default_Role",
+ "Settings": {
+ "Inputs": [
+ {
+ "VideoSelector": {},
+ "AudioSelectors": {
+ "Audio Selector 1": {
+ "DefaultSelection": "DEFAULT"
+ }
+ },
+ "TimecodeSource": "ZEROBASED",
+ "FileInput": "s3://testbucket/input.mp4"
+ }
+ ],
+ "OutputGroups": [
+ {
+ "Name": "File Group",
+ "OutputGroupSettings": {
+ "Type": "FILE_GROUP_SETTINGS",
+ "FileGroupSettings": {
+ "Destination": "s3://testbucket/output.mp4"
+ }
+ },
+ "Outputs": [
+ {
+ "VideoDescription": {
+ "CodecSettings": {
+ "Codec": "H_264",
+ "H264Settings": {
+ "RateControlMode": "QVBR",
+ "SceneChangeDetect": "TRANSITION_DETECTION",
+ "MaxBitrate": 5000000
+ }
+ }
+ },
+ "AudioDescriptions": [
+ {
+ "CodecSettings": {
+ "Codec": "AAC",
+ "AacSettings": {
+ "Bitrate": 96000,
+ "CodingMode": "CODING_MODE_2_0",
+ "SampleRate": 48000
+ }
+ },
+ "AudioSourceName": "Audio Selector 1"
+ }
+ ],
+ "ContainerSettings": {
+ "Container": "MP4",
+ "Mp4Settings": {}
+ }
+ }
+ ],
+ "CustomName": "output"
+ }
+ ],
+ "TimecodeConfig": {
+ "Source": "ZEROBASED"
+ },
+ "FollowSource": 1
+ }
+}
+```
+
+You can create a MediaConvert job using the [`CreateJob`](https://docs.aws.amazon.com/goto/WebAPI/mediaconvert-2017-08-29/CreateJob) API.
+Execute the following command to create a job using a `job.json` file:
+
+{{< command >}}
+$ awslocal mediaconvert create-job --cli-input-json file://job.json
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "Job": {
+ "AccelerationSettings": {
+ "Mode": "DISABLED"
+ },
+ "AccelerationStatus": "NOT_APPLICABLE",
+ "Arn": "arn:aws:mediaconvert:us-east-1:000000000000:jobs/1727963943858-7bdace",
+ ...
+ "Role": "arn:aws:iam::123456789012:role/MediaConvert_Default_Role",
+ "Settings": {
+ "FollowSource": 1,
+ "Inputs": [
+ {
+ "AudioSelectors": {
+ "Audio Selector 1": {
+ "DefaultSelection": "DEFAULT"
+ }
+ },
+ ...
+ }
+ ],
+ "OutputGroups": [
+ {
+ "CustomName": "output",
+ "Name": "File Group",
+ ...
+ }
+ ],
+ "TimecodeConfig": {
+ "Source": "ZEROBASED"
+ }
+ },
+ "Status": "SUBMITTED",
+ ...
+ }
+}
+```
+
+### List the jobs
+
+You can list all MediaConvert jobs using the [`ListJobs`](https://docs.aws.amazon.com/mediaconvert/latest/apireference/jobs.html#jobsget) API.
+Execute the following command to list all jobs:
+
+{{< command >}}
+$ awslocal mediaconvert list-jobs
+{{< /command >}}
+
+### Create a queue
+
+You can create a MediaConvert queue using the [`CreateQueue`](https://docs.aws.amazon.com/mediaconvert/latest/apireference/queues.html#queuespost) API.
+Execute the following command to create a queue named `MyQueue`:
+
+{{< command >}}
+$ awslocal mediaconvert create-queue \
+    --name MyQueue \
+    --description "High priority queue for video encoding"
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "Queue": {
+ "Arn": "arn:aws:mediaconvert:us-east-1:000000000000:queues/MyQueue",
+ "CreatedAt": "2024-10-03T19:30:04.015501+05:30",
+ "Description": "High priority queue for video encoding",
+ "LastUpdated": "2024-10-03T19:30:04.015501+05:30",
+ "Name": "MyQueue",
+ "PricingPlan": "ON_DEMAND",
+ "ProgressingJobsCount": 0,
+ "Status": "ACTIVE",
+ "SubmittedJobsCount": 0,
+ "Type": "CUSTOM"
+ }
+}
+```
+
+### List the queues
+
+You can list all MediaConvert queues using the [`ListQueues`](https://docs.aws.amazon.com/mediaconvert/latest/apireference/queues.html#queuesget) API.
+Execute the following command to list all queues:
+
+{{< command >}}
+$ awslocal mediaconvert list-queues
+{{< /command >}}
+
+## Current Limitations
+
+Currently, the service mocks the submission of encoding jobs to either the default queue or a custom-created queue.
+While actual transcoding is not performed, job completion is emulated.
+
+Job status progresses after a brief wait, and EventBridge events are emitted when the job state changes, allowing users to determine if a job has finished.
+This delay can be disabled by setting `MEDIACONVERT_DISABLE_JOB_DURATION=1`, which causes processing jobs to complete almost instantly.
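+
+For example, a sketch of starting LocalStack with the delay disabled, assuming the `localstack` CLI, which passes configuration variables through to the container:
+
+{{< command >}}
+$ MEDIACONVERT_DISABLE_JOB_DURATION=1 localstack start
+{{< /command >}}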
diff --git a/src/content/docs/aws/services/emr.md b/src/content/docs/aws/services/emr.md
new file mode 100644
index 00000000..1553dd4e
--- /dev/null
+++ b/src/content/docs/aws/services/emr.md
@@ -0,0 +1,55 @@
+---
+title: "Elastic MapReduce (EMR)"
+linkTitle: "Elastic MapReduce (EMR)"
+tags: ["Ultimate"]
+description: >
+ Get started with Elastic MapReduce (EMR) on LocalStack
+---
+
+## Introduction
+
+Amazon Elastic MapReduce (EMR) is a fully managed big data processing service that allows developers to effortlessly create, deploy, and manage big data applications.
+EMR supports various big data processing frameworks, including Hadoop MapReduce, Apache Spark, Apache Hive, and Apache Pig.
+Developers can leverage these frameworks and their rich ecosystem of tools and libraries to perform complex data transformations, machine learning tasks, and real-time data processing.
+
+LocalStack supports EMR and allows developers to run data analytics workloads locally.
+EMR utilizes various tools in the [Hadoop](https://hadoop.apache.org/) and [Spark](https://spark.apache.org) ecosystem, and your EMR instance is automatically configured to connect seamlessly to LocalStack's S3 API.
+LocalStack also supports EMR Serverless to create applications and job runs, to run your Spark/PySpark jobs locally.
+
+The supported APIs are available on our [API coverage page]({{< ref "coverage_emr" >}}), which provides information on the extent of EMR's integration with LocalStack.
+
+{{< callout >}}
+To utilize the EMR API, certain additional dependencies need to be downloaded from the network (including Hadoop, Hive, Spark, etc).
+These dependencies are fetched automatically during service startup, hence it is important to ensure a reliable internet connection when retrieving the dependencies for the first time.
+Alternatively, you can use one of our `*-bigdata` Docker image tags which already ship with the required libraries baked in and may provide better stability (see [here]({{< ref "/user-guide/ci/#ci-images" >}}) for more details).
+{{< /callout >}}
+
+## Getting started
+
+This guide is designed for users new to EMR and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will create a virtual EMR cluster using the AWS CLI.
+To create an EMR cluster, run the following command:
+
+{{< command >}}
+$ awslocal emr create-cluster \
+ --release-label emr-5.9.0 \
+ --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large InstanceGroupType=CORE,InstanceCount=1,InstanceType=m4.large
+{{< / command >}}
+
+You will see a response similar to the following:
+
+```sh
+{
+ "ClusterId": "j-A2KF3EKLAOWRI"
+}
+```
+
+You can also specify startup steps using the `--steps=...` command line argument of the `create-cluster` command, as sketched below.
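+
+A minimal sketch, assuming a trivial custom JAR step that runs `echo` via `command-runner.jar` (the step name and arguments are purely illustrative):
+
+{{< command >}}
+$ awslocal emr create-cluster \
+    --release-label emr-5.9.0 \
+    --instance-groups InstanceGroupType=MASTER,InstanceCount=1,InstanceType=m4.large \
+    --steps Type=CUSTOM_JAR,Name=hello-step,ActionOnFailure=CONTINUE,Jar=command-runner.jar,Args=echo,hello
+{{< / command >}}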
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use EMR in LocalStack for various use cases:
+
+- [Running data analytics jobs using EMR](https://github.com/localstack/localstack-pro-samples/tree/master/sample-archive/emr-hadoop-spark-jobs)
+- [Running EMR Serverless Jobs with Java](https://github.com/localstack/localstack-pro-samples/tree/master/emr-serverless-sample)
diff --git a/src/content/docs/aws/services/es.md b/src/content/docs/aws/services/es.md
new file mode 100644
index 00000000..1d0d903b
--- /dev/null
+++ b/src/content/docs/aws/services/es.md
@@ -0,0 +1,315 @@
+---
+title: "Elasticsearch Service"
+linkTitle: "Elasticsearch Service"
+description: >
+ Get started with Amazon Elasticsearch Service (ES) on LocalStack
+tags: ["Free"]
+---
+
+The Elasticsearch Service in LocalStack lets you create one or more single-node Elasticsearch/OpenSearch clusters that behave like the [Amazon Elasticsearch Service](https://aws.amazon.com/opensearch-service/the-elk-stack/what-is-elasticsearch/).
+This service is, like its AWS counterpart, heavily linked with the [OpenSearch Service](../opensearch).
+Any cluster created with the Elasticsearch Service will show up in the OpenSearch Service and vice versa.
+
+## Creating an Elasticsearch cluster
+
+You can use [awslocal]({{< ref "aws-cli.md#localstack-aws-cli-awslocal" >}}) to create a new Elasticsearch domain via the `es create-elasticsearch-domain` command.
+
+{{< callout >}}
+Unless you use the default Elasticsearch version, the first time you create a cluster with a specific version the Elasticsearch binary is downloaded, which may take a while.
+{{< /callout >}}
+
+{{< command >}}
+$ awslocal es create-elasticsearch-domain --domain-name my-domain
+{
+ "DomainStatus": {
+ "DomainId": "000000000000/my-domain",
+ "DomainName": "my-domain",
+ "ARN": "arn:aws:es:us-east-1:000000000000:domain/my-domain",
+ "Created": true,
+ "Deleted": false,
+ "Endpoint": "my-domain.us-east-1.es.localhost.localstack.cloud:4566",
+ "Processing": true,
+ "ElasticsearchVersion": "7.10.0",
+ "ElasticsearchClusterConfig": {
+ "InstanceType": "m3.medium.elasticsearch",
+ "InstanceCount": 1,
+ "DedicatedMasterEnabled": true,
+ "ZoneAwarenessEnabled": false,
+ "DedicatedMasterType": "m3.medium.elasticsearch",
+ "DedicatedMasterCount": 1
+ },
+ "EBSOptions": {
+ "EBSEnabled": true,
+ "VolumeType": "gp2",
+ "VolumeSize": 10,
+ "Iops": 0
+ },
+ "CognitoOptions": {
+ "Enabled": false
+ }
+ }
+}
+{{< / command >}}
+
+In the LocalStack log you will see something like the following, where you can see the cluster starting up in the background.
+
+```plaintext
+2021-11-08T16:29:28:INFO:localstack.services.es.cluster: starting elasticsearch: /opt/code/localstack/localstack/localstack/infra/elasticsearch/bin/elasticsearch -E http.port=57705 -E http.publish_port=57705 -E transport.port=0 -E network.host=127.0.0.1 -E http.compression=false -E path.data="/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/data" -E path.repo="/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/backup" -E xpack.ml.enabled=false with env {'ES_JAVA_OPTS': '-Xms200m -Xmx600m', 'ES_TMPDIR': '/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/tmp'}
+2021-11-08T16:29:28:INFO:localstack.services.es.cluster: registering an endpoint proxy for http://my-domain.us-east-1.es.localhost.localstack.cloud:4566 => http://127.0.0.1:57705
+2021-11-08T16:29:30:INFO:localstack.services.es.cluster: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
+2021-11-08T16:29:32:INFO:localstack.services.es.cluster: [2021-11-08T16:29:32,502][INFO ][o.e.n.Node ] [noctua] version[7.10.0], pid[22403], build[default/tar/51e9d6f22758d0374a0f3f5c6e8f3a7997850f96/2020-11-09T21:30:33.964949Z], OS[Linux/5.4.0-89-generic/amd64], JVM[Ubuntu/OpenJDK 64-Bit Server VM/11.0.11/11.0.11+9-Ubuntu-0ubuntu2.20.04]
+2021-11-08T16:29:32:INFO:localstack.services.es.cluster: [2021-11-08T16:29:32,510][INFO ][o.e.n.Node ] [noctua] JVM home [/usr/lib/jvm/java-11-openjdk-amd64], using bundled JDK [false]
+2021-11-08T16:29:32:INFO:localstack.services.es.cluster: [2021-11-08T16:29:32,511][INFO ][o.e.n.Node ] [noctua] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.locale.providers=SPI,COMPAT, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.io.tmpdir=/var/lib/localstack/lib//elasticsearch/arn:aws:es:us-east-1:000000000000:domain/my-domain/tmp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Xms200m, -Xmx600m, -XX:MaxDirectMemorySize=314572800, -Des.path.home=/opt/code/localstack/localstack/localstack/infra/elasticsearch, -Des.path.conf=/opt/code/localstack/localstack/localstack/infra/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=true]
+2021-11-08T16:29:36:INFO:localstack.services.es.cluster: [2021-11-08T16:29:36,258][INFO ][o.e.p.PluginsService ] [noctua] loaded module [aggs-matrix-stats]
+2021-11-08T16:29:36:INFO:localstack.services.es.cluster: [2021-11-08T16:29:36,259][INFO ][o.e.p.PluginsService ] [noctua] loaded module [analysis-common]
+2021-11-08T16:29:36:INFO:localstack.services.es.cluster: [2021-11-08T16:29:36,260][INFO ][o.e.p.PluginsService ] [noctua] loaded module [constant-keyword]
+...
+```
+
+After some time, you should see that the `Processing` state of the domain is set to `false`:
+
+{{< command >}}
+$ awslocal es describe-elasticsearch-domain --domain-name my-domain | jq ".DomainStatus.Processing"
+false
+{{< / command >}}
+
+## Interact with the cluster
+
+You can now interact with the cluster at the cluster API endpoint for the domain,
+in this case `http://my-domain.us-east-1.es.localhost.localstack.cloud:4566`.
+
+For example:
+
+{{< command >}}
+$ curl http://my-domain.us-east-1.es.localhost.localstack.cloud:4566
+{
+ "name" : "localstack",
+ "cluster_name" : "elasticsearch",
+ "cluster_uuid" : "IC7E9daNSiepRBB9Ksul7w",
+ "version" : {
+ "number" : "7.10.0",
+ "build_flavor" : "default",
+ "build_type" : "tar",
+ "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
+ "build_date" : "2020-11-09T21:30:33.964949Z",
+ "build_snapshot" : false,
+ "lucene_version" : "8.7.0",
+ "minimum_wire_compatibility_version" : "6.8.0",
+ "minimum_index_compatibility_version" : "6.0.0-beta1"
+ },
+ "tagline" : "You Know, for Search"
+}
+{{< / command >}}
+
+Or the health endpoint:
+
+{{< command >}}
+$ curl -s http://my-domain.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health | jq .
+{
+ "cluster_name": "elasticsearch",
+ "status": "green",
+ "timed_out": false,
+ "number_of_nodes": 1,
+ "number_of_data_nodes": 1,
+ "active_primary_shards": 0,
+ "active_shards": 0,
+ "relocating_shards": 0,
+ "initializing_shards": 0,
+ "unassigned_shards": 0,
+ "delayed_unassigned_shards": 0,
+ "number_of_pending_tasks": 0,
+ "number_of_in_flight_fetch": 0,
+ "task_max_waiting_in_queue_millis": 0,
+ "active_shards_percent_as_number": 100
+}
+{{< / command >}}
+
+## Advanced topics
+
+### Endpoints
+
+There are three configurable strategies that govern how domain endpoints are created, and can be configured via the `OPENSEARCH_ENDPOINT_STRATEGY` (previously `ES_ENDPOINT_STRATEGY`) environment variable.
+
+| Value | Format | Description |
+| - | - | - |
+| `domain` | `<domain-name>.<region>.es.localhost.localstack.cloud:4566` | This is the default strategy that uses the `localhost.localstack.cloud` domain to route to your localhost |
+| `path` | `localhost:4566/es/<region>/<domain-name>` | An alternative that can be useful if you cannot resolve LocalStack's localhost domain |
+| `port` | `localhost:<port>` | Exposes the cluster(s) directly with ports from the [external service port range]({{< ref "external-ports" >}})|
+| `off` | | *Deprecated*. This value now reverts to the `port` setting, using a port from the given range instead of `4571` |
+
+Regardless of the service from which the clusters were created, the domain of the cluster always corresponds to the engine type (OpenSearch or Elasticsearch) of the cluster.
+OpenSearch clusters therefore have `opensearch` in their domain (e.g. `my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566`) and Elasticsearch clusters have `es` in their domain (e.g. `my-domain.us-east-1.es.localhost.localstack.cloud:4566`).
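+
+For example, a sketch of switching to the path-based strategy, assuming the `localstack` CLI, which passes configuration variables through to the container:
+
+{{< command >}}
+$ OPENSEARCH_ENDPOINT_STRATEGY=path localstack start
+{{< /command >}}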
+
+#### Custom Endpoints
+
+LocalStack allows you to set arbitrary custom endpoints for your clusters in the domain endpoint options.
+This can be used to overwrite the behavior of the endpoint strategies described above.
+You can also choose custom domains, however it is important to add the edge port (`80`/`443` or by default `4566`).
+
+{{< command >}}
+$ awslocal es create-elasticsearch-domain --domain-name my-domain \
+ --elasticsearch-version 7.10 \
+ --domain-endpoint-options '{ "CustomEndpoint": "http://localhost:4566/my-custom-endpoint", "CustomEndpointEnabled": true }'
+{{< / command >}}
+
+Once the domain processing is complete, you can access the cluster:
+
+{{< command >}}
+$ curl http://localhost:4566/my-custom-endpoint/_cluster/health
+{{< / command >}}
+
+### Re-using a single cluster instance
+
+In some cases, you may not want to create a new cluster instance for each domain,
+for example when you are only interested in testing API interactions instead of actual Elasticsearch functionality.
+In this case, you can set `OPENSEARCH_MULTI_CLUSTER=0` (previously `ES_MULTI_CLUSTER`).
+This will multiplex all domains to the same cluster, or return the same port every time when using the `port` endpoint strategy.
+This can however lead to unexpected behavior when persisting data into Elasticsearch, or creating clusters with different versions, so we do not recommend it.
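+
+A minimal sketch of enabling this mode, again via the `localstack` CLI:
+
+{{< command >}}
+$ OPENSEARCH_MULTI_CLUSTER=0 localstack start
+{{< /command >}}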
+
+### Storage Layout
+
+Elasticsearch will be organized in your state directory as follows:
+
+```plaintext
+localstack@machine % tree -L 4 volume/state
+.
+├── elasticsearch
+│ └── arn:aws:es:us-east-1:000000000000:domain
+│ ├── my-cluster-1
+│ │ ├── backup
+│ │ ├── data
+│ │ └── tmp
+│ ├── my-cluster-2
+│ │ ├── backup
+│ │ ├── data
+│ │ └── tmp
+```
+
+### Advanced Security Options
+
+Since LocalStack 1.4.0, the OpenSearch and Elasticsearch services support "Advanced Security Options".
+This feature is currently only supported for OpenSearch domains (which can also be created by the Elasticsearch service).
+More info can be found on [the OpenSearch Service docs page](../opensearch#advanced-security-options).
+
+## Custom Elasticsearch backends
+
+LocalStack downloads Elasticsearch asynchronously the first time you run `es create-elasticsearch-domain`, so you will get the response from LocalStack first, and only after the download/install completes will your Elasticsearch cluster be running locally.
+You may not want this, and instead use an Elasticsearch cluster that is already running.
+This can also be useful when you want to run a cluster with a custom configuration that LocalStack does not support.
+
+To customize the Elasticsearch backend, you can run your own Elasticsearch cluster locally and point LocalStack to it using the `OPENSEARCH_CUSTOM_BACKEND` (previously `ES_CUSTOM_BACKEND`) environment variable.
+Note that only a single backend can be configured, meaning that you will get a similar behavior as when you [re-use a single cluster instance](#re-using-a-single-cluster-instance).
+
+### Example
+
+The following shows a sample docker-compose file that contains a single-node Elasticsearch cluster and a basic LocalStack setup.
+
+```yaml
+services:
+ elasticsearch:
+ container_name: elasticsearch
+ image: docker.elastic.co/elasticsearch/elasticsearch:7.10.2
+ environment:
+ - node.name=elasticsearch
+ - cluster.name=es-docker-cluster
+ - discovery.type=single-node
+ - bootstrap.memory_lock=true
+ - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
+ ports:
+ - "9200:9200"
+ ulimits:
+ memlock:
+ soft: -1
+ hard: -1
+ volumes:
+ - data01:/usr/share/elasticsearch/data
+
+ localstack:
+ container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
+ image: localstack/localstack
+ ports:
+ - "4566:4566"
+ depends_on:
+ - elasticsearch
+ environment:
+      - OPENSEARCH_CUSTOM_BACKEND=http://elasticsearch:9200
+ - DEBUG=${DEBUG:-0}
+ volumes:
+ - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
+ - "/var/run/docker.sock:/var/run/docker.sock"
+
+volumes:
+ data01:
+ driver: local
+```
+
+1. Run docker compose:
+{{< command >}}
+$ docker-compose up -d
+{{< /command >}}
+
+2. Create the Elasticsearch domain:
+{{< command >}}
+$ awslocal es create-elasticsearch-domain \
+ --domain-name mylogs-2 \
+ --elasticsearch-version 7.10 \
+ --elasticsearch-cluster-config '{ "InstanceType": "m3.xlarge.elasticsearch", "InstanceCount": 4, "DedicatedMasterEnabled": true, "ZoneAwarenessEnabled": true, "DedicatedMasterType": "m3.xlarge.elasticsearch", "DedicatedMasterCount": 3}'
+{
+ "DomainStatus": {
+ "DomainId": "000000000000/mylogs-2",
+ "DomainName": "mylogs-2",
+ "ARN": "arn:aws:es:us-east-1:000000000000:domain/mylogs-2",
+ "Created": true,
+ "Deleted": false,
+ "Endpoint": "mylogs-2.us-east-1.es.localhost.localstack.cloud:4566",
+ "Processing": true,
+ "ElasticsearchVersion": "7.10",
+ "ElasticsearchClusterConfig": {
+ "InstanceType": "m3.xlarge.elasticsearch",
+ "InstanceCount": 4,
+ "DedicatedMasterEnabled": true,
+ "ZoneAwarenessEnabled": true,
+ "DedicatedMasterType": "m3.xlarge.elasticsearch",
+ "DedicatedMasterCount": 3
+ },
+ "EBSOptions": {
+ "EBSEnabled": true,
+ "VolumeType": "gp2",
+ "VolumeSize": 10,
+ "Iops": 0
+ },
+ "CognitoOptions": {
+ "Enabled": false
+ }
+ }
+}
+{{< /command >}}
+
+3. If the `Processing` status is true, it means that the cluster is not yet healthy.
+ You can run `describe-elasticsearch-domain` to receive the status:
+{{< command >}}
+$ awslocal es describe-elasticsearch-domain --domain-name mylogs-2
+{{< /command >}}
+
+4. Check the cluster health endpoint and create indices:
+{{< command >}}
+$ curl mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/_cluster/health
+{"cluster_name":"es-docker-cluster","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}[~]
+{{< /command >}}
+
+5. Create an example index:
+{{< command >}}
+$ curl -X PUT mylogs-2.us-east-1.es.localhost.localstack.cloud:4566/my-index
+{"acknowledged":true,"shards_acknowledged":true,"index":"my-index"}
+{{< /command >}}
+
+## Differences to AWS
+
+* By default, AWS only sets the `Endpoint` attribute of the cluster status once the cluster is up.
+ LocalStack will return the endpoint immediately, but keep `Processing = "true"` until the cluster has been started.
+* The `CustomEndpointOptions` allows arbitrary endpoint URLs, which is not allowed in AWS.
+
+## Current Limitations
+
+The default Elasticsearch version used is 7.10.0.
+This is a slight deviation from the default version used in AWS (Elasticsearch 1.5), which is not supported in LocalStack.
diff --git a/src/content/docs/aws/services/events.md b/src/content/docs/aws/services/events.md
new file mode 100644
index 00000000..b3c943f5
--- /dev/null
+++ b/src/content/docs/aws/services/events.md
@@ -0,0 +1,160 @@
+---
+title: "EventBridge"
+linkTitle: "EventBridge"
+description: Get started with EventBridge on LocalStack
+persistence: supported with limitations
+tags: ["Free"]
+---
+
+## Introduction
+
+EventBridge provides a centralized mechanism to discover and communicate events across various AWS services and applications.
+EventBridge allows you to register, track, and resolve events, which indicate a change in the environment, and applies rules to route those events to targets.
+EventBridge rules are tied to an Event Bus to manage event-driven workflows.
+You can use either identity-based or resource-based policies to control access to EventBridge resources, where the former can be attached to IAM users, groups, and roles, and the latter can be attached to specific AWS resources.
+
+LocalStack allows you to use the EventBridge APIs in your local environment to create rules that route events to a target.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_events" >}}), which provides information on the extent of EventBridge's integration with LocalStack.
+For information on EventBridge Pipes, please refer to the [EventBridge Pipes]({{< ref "user-guide/aws/pipes" >}}) section.
+
+{{< callout >}}
+The native EventBridge provider, introduced in [LocalStack 3.5.0](https://discuss.localstack.cloud/t/localstack-release-v3-5-0/947), is now the default in 4.0.
+The legacy provider can still be enabled using the `PROVIDER_OVERRIDE_EVENTS=v1` configuration, but it is deprecated and will be removed in the next major release.
+We strongly recommend migrating to the new provider.
+{{< /callout >}}
+
+## Getting Started
+
+This guide is designed for users new to EventBridge and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate creating an EventBridge rule to run a Lambda function on a schedule.
+
+### Create a Lambda Function
+
+To create a new Lambda function, create a new file called `index.js` with the following code:
+
+```js
+'use strict';
+
+exports.handler = (event, context, callback) => {
+ console.log('LogScheduledEvent');
+ console.log('Received event:', JSON.stringify(event, null, 2));
+ callback(null, 'Finished');
+};
+```
+
+Run the following command to create a new Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html) API:
+
+{{< command >}}
+$ zip function.zip index.js
+
+$ awslocal lambda create-function \
+ --function-name events-example \
+ --runtime nodejs16.x \
+ --zip-file fileb://function.zip \
+ --handler index.handler \
+ --role arn:aws:iam::000000000000:role/cool-stacklifter
+{{< /command >}}
+
+The output contains the `FunctionArn`, which you will need when adding the Lambda function as a target of the EventBridge rule.
+
+### Create an EventBridge Rule
+
+Run the following command to create a new EventBridge rule using the [`PutRule`](https://docs.aws.amazon.com/cli/latest/reference/events/put-rule.html) API:
+
+{{< command >}}
+$ awslocal events put-rule \
+ --name my-scheduled-rule \
+ --schedule-expression 'rate(2 minutes)'
+{{< /command >}}
+
+In the above command, we have specified a schedule expression of `rate(2 minutes)`, which runs the rule, and therefore invokes the Lambda function, every two minutes.
+
+Next, grant the EventBridge service principal (`events.amazonaws.com`) permission to invoke the function, using the Lambda [`AddPermission`](https://docs.aws.amazon.com/cli/latest/reference/lambda/add-permission.html) API:
+
+{{< command >}}
+$ awslocal lambda add-permission \
+ --function-name events-example \
+ --statement-id my-scheduled-event \
+ --action 'lambda:InvokeFunction' \
+ --principal events.amazonaws.com \
+ --source-arn arn:aws:events:us-east-1:000000000000:rule/my-scheduled-rule
+{{< /command >}}
+
+### Add the Lambda Function as a Target
+
+Create a file named `targets.json` with the following content:
+
+```json
+[
+ {
+ "Id": "1",
+ "Arn": "arn:aws:lambda:us-east-1:000000000000:function:events-example"
+ }
+]
+```
+
+Finally, add the Lambda function as a target to the EventBridge rule using the [`PutTargets`](https://docs.aws.amazon.com/cli/latest/reference/events/put-targets.html) API:
+
+{{< command >}}
+$ awslocal events put-targets \
+ --rule my-scheduled-rule \
+ --targets file://targets.json
+{{< /command >}}
+
+### Verify the Lambda invocation
+
+You can verify the Lambda invocation by checking the CloudWatch logs.
+However, wait at least 2 minutes after running the last command before checking the logs.
+
+Run the following command to list the CloudWatch log groups:
+
+{{< command >}}
+$ awslocal logs describe-log-groups
+{{< /command >}}
+
+The output will contain the log group name, which you can use to list the log streams:
+
+{{< command >}}
+$ awslocal logs describe-log-streams \
+ --log-group-name /aws/lambda/events-example
+{{< /command >}}
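+
+To inspect the log events themselves, you can, for example, use the `FilterLogEvents` API:
+
+{{< command >}}
+$ awslocal logs filter-log-events \
+    --log-group-name /aws/lambda/events-example
+{{< /command >}}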
+
+Alternatively, you can fetch LocalStack logs to verify the Lambda invocation:
+
+{{< command >}}
+$ localstack logs
+...
+2023-07-17T09:37:52.028 INFO --- [ asgi_gw_0] localstack.request.aws : AWS lambda.Invoke => 202
+2023-07-17T09:37:52.106 INFO --- [ asgi_gw_0] localstack.request.http : POST /_localstack_lambda/97e08ac50c18930f131d9dd9744b8df4/invocations/ecb744d0-b3f2-400f-9e49-c85cf12b1e00/logs => 202
+2023-07-17T09:37:52.114 INFO --- [ asgi_gw_0] localstack.request.http : POST /_localstack_lambda/97e08ac50c18930f131d9dd9744b8df4/invocations/ecb744d0-b3f2-400f-9e49-c85cf12b1e00/response => 202
+...
+{{< /command >}}
+
+## Supported target types
+
+At this time, LocalStack supports the following [target types](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-targets.html#eb-console-targets) for EventBridge rules:
+
+- Lambda function
+- SNS Topic
+- SQS queue
+- StepFunctions StateMachine
+- Firehose
+- Event bus
+- API destination
+- Kinesis
+- CloudWatch log group
+- API Gateway
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing EventBridge Buses.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **EventBridge** under the **App Integration** section.
+
+The Resource Browser allows you to perform the following actions:
+
+- **View the Event Buses**: You can view the list of EventBridge Buses running locally, alongside their Amazon Resource Names (ARNs) and Policies.
+- **Create Event Rule**: You can create a new Event Rule by specifying **Name**, **Description**, **Event Pattern**, **Schedule Expressions**, **State**, **Role ARN**, and **Tags**.
+- **Trigger Event**: You can trigger an Event by specifying the **Entries** and **Endpoint Id**.
+ While creating an Entry, you must specify **Source**, **Event Bus Name**, **Detail**, **Resources**, **Detail Type**, and **Trace Header**.
+- **Remove Selected**: You can remove the selected EventBridge Bus.
diff --git a/src/content/docs/aws/services/firehose.md b/src/content/docs/aws/services/firehose.md
new file mode 100644
index 00000000..e66924a7
--- /dev/null
+++ b/src/content/docs/aws/services/firehose.md
@@ -0,0 +1,167 @@
+---
+title: "Data Firehose"
+linkTitle: "Data Firehose"
+description: >
+ Get started with Data Firehose on LocalStack
+tags: ["Free"]
+---
+
+{{< callout >}}
+This service was formerly known as 'Kinesis Data Firehose'.
+{{< /callout >}}
+
+## Introduction
+
+Data Firehose is a service provided by AWS that allows you to extract, transform and load streaming data into various destinations, such as Amazon S3, Amazon Redshift, and Elasticsearch.
+With Data Firehose, you can ingest and deliver real-time data from different sources as it automates data delivery, handles buffering and compression, and scales according to the data volume.
+
+LocalStack allows you to use the Data Firehose APIs in your local environment to load and transform real-time data.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_firehose" >}}), which provides information on the extent of Data Firehose's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Data Firehose and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to use Firehose to load Kinesis data into Elasticsearch with S3 Backup with the AWS CLI.
+
+### Create an Elasticsearch domain
+
+You can create an Elasticsearch domain using the [`create-elasticsearch-domain`](https://docs.aws.amazon.com/cli/latest/reference/es/create-elasticsearch-domain.html) command.
+Execute the following command to create a domain named `es-local`:
+
+{{< command >}}
+$ awslocal es create-elasticsearch-domain --domain-name es-local
+{{< / command >}}
+
+Save the value of the `Endpoint` field from the response, as it will be required further down to confirm the setup.
+
+### Create the source Kinesis stream
+
+Now let us create our target S3 bucket and our source Kinesis stream.
+
+First, we need to create an S3 bucket to store the backup data.
+You can do this using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command:
+
+{{< command >}}
+$ awslocal s3 mb s3://kinesis-activity-backup-local
+{{< / command >}}
+
+You can now use the [`CreateStream`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_CreateStream.html) API to create a Kinesis stream named `kinesis-es-local-stream` with two shards:
+
+{{< command >}}
+$ awslocal kinesis create-stream \
+ --stream-name kinesis-es-local-stream \
+ --shard-count 2
+{{< / command >}}
+
+### Create a Firehose delivery stream
+
+You can now create the Firehose delivery stream.
+In this configuration, Elasticsearch serves as the destination, while S3 serves as the repository for the backup of all documents.
+Within the `kinesis-stream-source-configuration`, you need to specify the ARN of the Kinesis stream and the role that grants access to the stream.
+
+The `elasticsearch-destination-configuration` sets vital parameters, which include the access role, the `DomainARN` of the Elasticsearch domain to publish to, and settings such as the `IndexName` and `TypeName` for the Elasticsearch setup.
+Additionally, to back up all documents to S3, the `S3BackupMode` parameter is set to `AllDocuments` and accompanied by an `S3Configuration`.
+
+{{< callout >}}
+Within LocalStack's default configuration, IAM roles remain unverified and no strict validation is applied on ARNs.
+However, when operating within the AWS environment, you need to check the access rights of the specified role for the task.
+{{< /callout >}}
+
+You can use the [`CreateDeliveryStream`](https://docs.aws.amazon.com/firehose/latest/APIReference/API_CreateDeliveryStream.html) API to create a Firehose delivery stream named `activity-to-elasticsearch-local`:
+
+{{< command >}}
+$ awslocal firehose create-delivery-stream \
+ --delivery-stream-name activity-to-elasticsearch-local \
+ --delivery-stream-type KinesisStreamAsSource \
+ --kinesis-stream-source-configuration "KinesisStreamARN=arn:aws:kinesis:us-east-1:000000000000:stream/kinesis-es-local-stream,RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role" \
+ --elasticsearch-destination-configuration "RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role,DomainARN=arn:aws:es:us-east-1:000000000000:domain/es-local,IndexName=activity,TypeName=activity,S3BackupMode=AllDocuments,S3Configuration={RoleARN=arn:aws:iam::000000000000:role/Firehose-Reader-Role,BucketARN=arn:aws:s3:::kinesis-activity-backup-local}"
+{{< / command >}}
+
+On successful execution, the command will return the `DeliveryStreamARN` of the created delivery stream:
+
+```json
+{
+ "DeliveryStreamARN": "arn:aws:firehose:us-east-1:000000000000:deliverystream/activity-to-elasticsearch-local"
+}
+```
+
+### Testing the setup
+
+Before testing the integration, it's necessary to confirm that the local Elasticsearch cluster is up.
+You can use the [`describe-elasticsearch-domain`](https://docs.aws.amazon.com/cli/latest/reference/es/describe-elasticsearch-domain.html) command to check the status of the Elasticsearch cluster.
+Run the following command:
+
+{{< command >}}
+$ awslocal es describe-elasticsearch-domain \
+ --domain-name es-local | jq ".DomainStatus.Processing"
+{{< / command >}}
+
+Once the command returns `false`, you can move forward with data ingestion.
+The data can be added to the source Kinesis stream or directly to the Firehose delivery stream.
+
+You can add data to the Kinesis stream using the [`PutRecord`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html) API.
+The following command adds a record to the stream:
+
+{{< command >}}
+$ awslocal kinesis put-record \
+ --stream-name kinesis-es-local-stream \
+ --data '{ "target": "barry" }' \
+ --partition-key partition
+{{< / command >}}
+
+{{< callout "tip" >}}
+If you are using AWS CLI v2, add `--cli-binary-format raw-in-base64-out` to the command mentioned above.
+{{< /callout >}}
+
+You can use the [`PutRecord`](https://docs.aws.amazon.com/firehose/latest/APIReference/API_PutRecord.html) API to add data to the Firehose delivery stream.
+The following command adds a record to the stream:
+
+{{< command >}}
+$ awslocal firehose put-record \
+ --delivery-stream-name activity-to-elasticsearch-local \
+ --record '{ "Data": "eyJ0YXJnZXQiOiAiSGVsbG8gd29ybGQifQ==" }'
+{{< / command >}}
+
+To review the entries in Elasticsearch, you can employ [curl](https://curl.se/) for simplicity.
+Remember to replace the URL with the `Endpoint` field from the initial `create-elasticsearch-domain` operation.
+
+{{< command >}}
+$ curl -s http://es-local.us-east-1.es.localhost.localstack.cloud:4566/activity/_search | jq '.hits.hits'
+{{< / command >}}
+
+You will get an output similar to the following:
+
+```json
+[
+ {
+ "_index": "activity",
+ "_type": "activity",
+ "_id": "f38e2c49-d101-46aa-9ce2-0d2ea8fcd133",
+ "_score": 1,
+ "_source": {
+ "target": "Hello world"
+ }
+ },
+ {
+ "_index": "activity",
+ "_type": "activity",
+ "_id": "d2f1c125-b3b0-4c7c-ba90-8acf4075a682",
+ "_score": 1,
+ "_source": {
+ "target": "barry"
+ }
+ }
+]
+```
+
+If you receive a comparable output, your Firehose delivery stream setup is working correctly!
+Additionally, take a look at the designated S3 bucket to verify that the backup process is functioning, as sketched below.
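+
+A quick way to check is to list the bucket contents with the S3 `ls` command (the exact object key layout under the bucket may vary):
+
+{{< command >}}
+$ awslocal s3 ls s3://kinesis-activity-backup-local --recursive
+{{< /command >}}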
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Data Firehose in LocalStack for various use cases:
+
+- [Search application with Lambda, Kinesis, Firehose, ElasticSearch, S3](https://github.com/localstack/sample-fuzzy-movie-search-lambda-kinesis-elasticsearch)
+- [Streaming Data Pipeline with Kinesis, Tinybird, CloudWatch, Lambda](https://github.com/localstack/serverless-streaming-data-pipeline)
diff --git a/src/content/docs/aws/services/fis.md b/src/content/docs/aws/services/fis.md
new file mode 100644
index 00000000..9f30687e
--- /dev/null
+++ b/src/content/docs/aws/services/fis.md
@@ -0,0 +1,241 @@
+---
+title: "Fault Injection Service (FIS)"
+linkTitle: "Fault Injection Service (FIS)"
+description: >
+ Get started with Fault Injection Service (FIS) on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Fault Injection Service (FIS) is a service provided by Amazon Web Services that enables you to test the resilience of your applications and infrastructure by injecting faults and failures into your AWS resources.
+FIS simulates faults such as resource unavailability and service errors to assess the impact on your application's performance and availability.
+The full list of such possible fault injections is available in the [AWS docs](https://docs.aws.amazon.com/fis/latest/userguide/fis-actions-reference.html).
+
+LocalStack allows you to use the FIS APIs in your local environment to introduce faults in other services, in order to check how your setup behaves when parts of it stop working.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_fis" >}}), which provides information on the extent of FIS API's integration with LocalStack.
+
+{{< callout "tip" >}}
+LocalStack also features its own powerful chaos engineering tool, [Chaos API]({{< ref "chaos-api" >}}).
+{{< /callout >}}
+
+## Concepts
+
+FIS defines the following elements:
+
+1. Action: The type of fault to introduce
+1. Target: The resources to be impacted
+1. Duration: How long the disruption lasts
+
+Together, these elements are termed an Experiment.
+After the designated time, a running experiment ceases introducing faults and restores the system to its original state.
+
+{{< callout "note" >}}
+FIS experiment emulation is part of LocalStack Enterprise.
+If you'd like to try it out, please [contact us](https://www.localstack.cloud/demo).
+{{< /callout >}}
+
+FIS actions can be categorized into two main types:
+
+1. One-time events: For example, the `aws:ec2:stop-instances` FIS action, which issues a [`StopInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_StopInstances.html) API call against specific EC2 instances.
+Some of these events can automatically be undone after a defined time, for example by sending a [`StartInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_StartInstances.html) command to the affected instances.
+1. Probabilistic API errors: For instance, using `aws:fis:inject-api-unavailable-error` to introduce an HTTP 503 error.
+
+## Getting started
+
+This guide is designed for users new to FIS and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an experiment that stops EC2 instances.
+
+### Creating an experiment
+
+Create a new file named `create-experiment.json`.
+This file should contain a JSON configuration that will be utilized during the subsequent invocation of the [`CreateExperimentTemplate`](https://docs.aws.amazon.com/fis/latest/APIReference/API_CreateExperimentTemplate.html) API.
+
+```json
+{
+ "actions": {
+ "StopInstance": {
+ "actionId": "aws:ec2:stop-instances",
+ "targets": {
+ "Instances": "InstancesToStop"
+ },
+ "description": "stop instances"
+ }
+ },
+ "targets": {
+ "InstancesToStop": {
+ "resourceType": "aws:ec2:instance",
+ "resourceTags": {
+ "foo": "bar"
+ },
+ "selectionMode": "COUNT(1)"
+ }
+ },
+ "description": "template for a test action",
+ "stopConditions": [
+ {
+ "source": "none"
+ }
+ ],
+ "roleArn": "arn:aws:iam:123456789012:role/ExperimentRole"
+}
+```
+
+This configuration will result in EC2 [`StopInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_StopInstances.html) operation being invoked against EC2 instances that have the resource tags `Key=foo Value=bar`.
+Settings pertaining to `stopConditions` and `roleArn` hold no significance in LocalStack's FIS emulation.
+Nonetheless, they are obligatory fields according to AWS specifications and must be included.
+
+Run the following command to create an FIS experiment template using the configuration file we just created:
+
+{{< command >}}
+$ awslocal fis create-experiment-template --cli-input-json file://create-experiment.json
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "experimentTemplate": {
+ "id": "ad16589a-4a91-4aee-88df-c33446605882",
+ "description": "template for a test action",
+ "targets": {
+ "InstancesToStop": {
+ "resourceType": "aws:ec2:instance",
+ "resourceTags": {
+ "foo": "bar"
+ },
+ "selectionMode": "COUNT(1)"
+ }
+ },
+ "actions": {
+ "StopInstance": {
+ "actionId": "aws:ec2:stop-instances",
+ "description": "stop instances",
+ "targets": {
+ "Instances": "InstancesToStop"
+ }
+ }
+ },
+ "stopConditions": [
+ {
+ "source": "none"
+ }
+ ],
+ "creationTime": 1718268196.305881,
+ "lastUpdateTime": 1718268196.305881,
+ "roleArn": "arn:aws:iam:123456789012:role/ExperimentRole"
+ }
+}
+```
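+
+Since subsequent commands reference the template ID, you may find it convenient to capture it with `jq` at creation time; a sketch, mirroring the variable-capture pattern used elsewhere in these docs:
+
+{{< command >}}
+$ template_id=$(awslocal fis create-experiment-template \
+    --cli-input-json file://create-experiment.json | jq -r '.experimentTemplate.id')
+{{< /command >}}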
+
+You can list all the templates you have created using the [`ListExperimentTemplates`](https://docs.aws.amazon.com/fis/latest/APIReference/API_ListExperimentTemplates.html) API:
+
+{{< command >}}
+$ awslocal fis list-experiment-templates
+{{< /command >}}
+
+### Starting the experiment
+
+Now let us start an EC2 instance that will match the criteria we specified in the experiment template.
+
+{{< command >}}
+$ awslocal ec2 run-instances --image-id ami-024f768332f0 --count 1 --tag-specifications '{"ResourceType": "instance", "Tags": [{"Key": "foo", "Value": "bar"}]}'
+{{< /command >}}
+
+You can start the experiment using the [`StartExperiment`](https://docs.aws.amazon.com/fis/latest/APIReference/API_StartExperiment.html) API.
+Run the following command and specify the ID of the experiment template you created earlier:
+
+{{< command >}}
+$ awslocal fis start-experiment --experiment-template-id ad16589a-4a91-4aee-88df-c33446605882
+{{< /command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "experiment": {
+ "id": "efee7c02-8733-4d7c-9628-1b60bbec9759",
+ "experimentTemplateId": "ad16589a-4a91-4aee-88df-c33446605882",
+ "roleArn": "arn:aws:iam:123456789012:role/ExperimentRole",
+ "state": {
+ "status": "running"
+ },
+ "targets": {
+ "InstancesToStop": {
+ "resourceType": "aws:ec2:instance",
+ "resourceTags": {
+ "foo": "bar"
+ },
+ "selectionMode": "COUNT(1)"
+ }
+ },
+ "actions": {
+ "StopInstance": {
+ "actionId": "aws:ec2:stop-instances",
+ "description": "stop instances",
+ "targets": {
+ "Instances": "InstancesToStop"
+ }
+ }
+ },
+ "stopConditions": [
+ {
+ "source": "none"
+ }
+ ],
+ "creationTime": 1718268311.209798,
+ "startTime": 1718268311.209798
+ }
+}
+```
+
+You can use the [`ListExperiments`](https://docs.aws.amazon.com/fis/latest/APIReference/API_ListExperiments.html) API to check the status of your experiment.
+Run the following command:
+
+{{< command >}}
+$ awslocal fis list-experiments
+{{< /command >}}
+
+You can fetch the details of your experiment using the [`GetExperiment`](https://docs.aws.amazon.com/fis/latest/APIReference/API_GetExperiment.html) API.
+Run the following command and specify the ID of the experiment you created earlier:
+
+{{< command >}}
+$ awslocal fis get-experiment --id efee7c02-8733-4d7c-9628-1b60bbec9759
+{{< /command >}}
+
+### Verifying the outcome
+
+You can now verify that the experiment worked as expected by obtaining the state of the EC2 instance using the [`DescribeInstanceStatus`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeInstanceStatus.html) API.
+Run the following command, replacing the instance ID with the one returned by `run-instances` earlier:
+
+{{< command >}}
+$ awslocal ec2 describe-instance-status --instance-ids i-3c40b52ab72f99c63 --output json --query InstanceStatuses[0].InstanceState
+{{< /command >}}
+
+If everything happened as expected, the following output would be retrieved:
+
+```json
+{
+ "Code": 80,
+ "Name": "stopped"
+}
+```
+
+## Supported Actions
+
+LocalStack FIS currently supports the following actions:
+
+- **`aws:ec2:stop-instances`**: Runs EC2 StopInstances on the target EC2 instances.
+- **`aws:ec2:terminate-instances`**: Runs EC2 TerminateInstances on the target EC2 instances.
+- **`aws:rds:reboot-db-instances`**: Runs RDS `RebootDBInstance` on the target DB instances.
+- **`aws:ssm:send-command`**: Runs the Systems Manager SendCommand on the target EC2 instances.
+
+If you would like support for more FIS actions, please make a feature request on [GitHub](https://github.com/localstack/localstack/issues/new/choose).
+
+## Current Limitations
+
+- LocalStack does not implement the [selection mode](https://docs.aws.amazon.com/fis/latest/userguide/targets.html#target-selection-mode) mechanism available on AWS.
+- LocalStack ignores [`RoleARN`](https://docs.aws.amazon.com/fis/latest/APIReference/API_ExperimentTemplate.html#fis-Type-ExperimentTemplate-roleArn).
+On AWS, FIS executes actions based on permissions granted by the specified `RoleARN`.
diff --git a/src/content/docs/aws/services/glacier.md b/src/content/docs/aws/services/glacier.md
new file mode 100644
index 00000000..836f7ff7
--- /dev/null
+++ b/src/content/docs/aws/services/glacier.md
@@ -0,0 +1,201 @@
+---
+title: "Glacier"
+linkTitle: "Glacier"
+description: Get started with S3 Glacier on LocalStack
+tags: ["Ultimate"]
+persistence: supported
+---
+
+## Introduction
+
+Glacier is a data storage service provided by Amazon Web Services, designed for the long-term storage of archives and backups of infrequently accessed data.
+It offers various retrieval options, different levels of retrieval speed, and more.
+Glacier uses a Vault container to store your data, similar to how S3 stores data in Buckets.
+A Vault further holds the data in an Archive, which can contain text, images, video, and audio files.
+Glacier uses Jobs to retrieve the data in an Archive or list the inventory of a Vault.
+
+LocalStack allows you to use the Glacier APIs in your local environment to manage Vaults and Archives.
+You can use the Glacier API to configure and set up vaults where you can store archives and manage them.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_glacier" >}}), which provides information on the extent of Glacier's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Glacier and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a vault, upload an archive, initiate jobs to retrieve inventory details or download an archive, and delete the archive and vault with the AWS CLI.
+
+### Create a vault
+
+You can create a vault using the [`CreateVault`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-vault-put.html) API.
+Run the following command to create a Glacier Vault named `sample-vault`:
+
+{{< command >}}
+$ awslocal glacier create-vault --vault-name sample-vault --account-id -
+{{< /command >}}
+
+You can get the details from your vault using the [`DescribeVault`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-vault-get.html) API.
+Run the following command to describe your vault.
+
+{{< command >}}
+$ awslocal glacier describe-vault --vault-name sample-vault --account-id -
+{{< /command >}}
+
+On successful creation of the Glacier vault, you will see the following output:
+
+```bash
+{
+ "VaultARN": "arn:aws:glacier:us-east-1:000000000000:vaults/sample-vault",
+ "VaultName": "sample-vault",
+ "CreationDate": "2023-09-11T15:07:28.000Z",
+ "LastInventoryDate": "2023-09-11T15:07:28.000Z",
+ "NumberOfArchives": 0,
+ "SizeInBytes": 0
+}
+```
+
+### Upload an archive to a vault
+
+You can upload an archive or an individual file to a vault using the [`UploadArchive`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-archive-post.html) API.
+Download a random image from the internet and save it as `image.jpg`.
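+Alternatively, you can create a placeholder file, since any file contents work for this walkthrough:
+
+{{< command >}}
+$ printf 'sample archive data' > image.jpg
+{{< /command >}}
+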
+Run the following command to upload the file to your Glacier vault:
+
+{{< command >}}
+$ awslocal glacier upload-archive --vault-name sample-vault --account-id - --body image.jpg
+{{< /command >}}
+
+On successful upload of the Glacier archive, you will see the following output:
+
+```bash
+{
+ "location": "/000000000000/vaults/sample-vault/archives/d41d8cd98f00b204e9800998ecf8427e",
+ "checksum": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
+ "archiveId": "d41d8cd98f00b204e9800998ecf8427e"
+}
+```
+
+### Initiate the retrieval of an archive from a vault
+
+You can initiate the retrieval of an archive from a vault using the [`InitiateJob`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-initiate-job-post.html) API.
+
+To download an archive, you will need to initiate an `archive-retrieval` job first to make the Archive available for download.
+{{< command >}}
+$ awslocal glacier initiate-job --vault-name sample-vault --account-id - --job-parameters '{"Type":"archive-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}'
+{{< /command >}}
+
+On successful execution of the job, you will see the following output:
+
+```bash
+{
+ "location": "//vaults/sample-vault/jobs/25CEOTJ7ZUR5Q7YY0B1O55AE4C3L1502EOHWMNY10IIYEBWEQB73D23S8BVYO9RTRTPLRK2LJLUCCRM52GDV87C9A4JW",
+ "jobId": "25CEOTJ7ZUR5Q7YY0B1O55AE4C3L1502EOHWMNY10IIYEBWEQB73D23S8BVYO9RTRTPLRK2LJLUCCRM52GDV87C9A4JW"
+}
+```
+
+### List the jobs
+
+You can list the current and previous processes, called Jobs, to monitor the requests sent to the Glacier API using the [`ListJobs`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-jobs-get.html) API.
+
+{{< command >}}
+$ awslocal glacier list-jobs --vault-name sample-vault --account-id -
+{{< /command >}}
+
+On successful execution of the command, you will see the following output:
+
+```bash
+{
+ "JobList": [
+ {
+ "JobId": "25CEOTJ7ZUR5Q7YY0B1O55AE4C3L1502EOHWMNY10IIYEBWEQB73D23S8BVYO9RTRTPLRK2LJLUCCRM52GDV87C9A4JW",
+ "Action": "ArchiveRetrieval",
+ "ArchiveId": "d41d8cd98f00b204e9800998ecf8427e",
+ "VaultARN": "arn:aws:glacier:us-east-1:000000000000:vaults/sample-vault",
+ "CreationDate": "2023-09-11T15:25:54.000Z",
+ "Completed": true,
+ "StatusCode": "Succeeded",
+ "ArchiveSizeInBytes": 0,
+ "InventorySizeInBytes": 10000,
+ "CompletionDate": "2023-09-11T15:25:59.000Z",
+ "Tier": "Standard"
+ }
+ ]
+}
+```
+
+### Download the result of an archive retrieval
+
+You can download the output of an `ArchiveRetrieval` job with the [`GetJobOutput`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-job-output-get.html) API.
+
+You can check the progress of the retrieval via the previous `ListJobs` call.
+Once the `ArchiveRetrieval` Job is complete, the data can be downloaded.
+You can use the `JobId` of the Job to download your archive with the following command:
+
+{{< command >}}
+$ awslocal glacier get-job-output --vault-name sample-vault --account-id - --job-id 25CEOTJ7ZUR5Q7YY0B1O55AE4C3L1502EOHWMNY10IIYEBWEQB73D23S8BVYO9RTRTPLRK2LJLUCCRM52GDV87C9A4JW my-archive.jpg
+{{< /command >}}
+
+{{< callout >}}
+Please note that this operation is currently mocked: it creates an empty file named `my-archive.jpg` that does not contain the contents of your archive.
+{{< /callout >}}
+
+### Retrieve the inventory information
+
+You can also initiate the retrieval of the inventory of a vault using the same [`InitiateJob`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-initiate-job-post.html) API.
+
+Initiate a job of the specified type to get the details of the individual inventory items inside a Vault using the `initiate-job` command:
+{{< command >}}
+$ awslocal glacier initiate-job --vault-name sample-vault --account-id - --job-parameters '{"Type":"inventory-retrieval","ArchiveId":"d41d8cd98f00b204e9800998ecf8427e"}'
+{{< /command >}}
+
+On successful execution of the command, you will see the following output:
+
+```bash
+{
+ "location": "//vaults/sample-vault/jobs/P5972CSWFR803BHX48OD1A7JWNBFJUMYVWCMZWY55ZJPIJMG1XWFV9ISZPZH1X3LBF0UV3UG6ORETM0EHE5R86Z47B1F",
+ "jobId": "P5972CSWFR803BHX48OD1A7JWNBFJUMYVWCMZWY55ZJPIJMG1XWFV9ISZPZH1X3LBF0UV3UG6ORETM0EHE5R86Z47B1F"
+}
+```
+
+In the same fashion as the archive retrieval, you can now download the result of the inventory retrieval job using `GetJobOutput` using the `JobId` from the result of the previous command:
+{{< command >}}
+$ awslocal glacier get-job-output \
+ --vault-name sample-vault --account-id - --job-id P5972CSWFR803BHX48OD1A7JWNBFJUMYVWCMZWY55ZJPIJMG1XWFV9ISZPZH1X3LBF0UV3UG6ORETM0EHE5R86Z47B1F inventory.json
+{{< /command >}}
+
+Inspecting the content of the `inventory.json` file, we can find an inventory of the vault:
+
+```json
+{
+ "VaultARN": "arn:aws:glacier:us-east-1:000000000000:vaults/sample-vault",
+ "InventoryDate": "2023-09-11T17:20:48.000Z",
+ "ArchiveList": [
+ {
+ "ArchiveId": "d41d8cd98f00b204e9800998ecf8427e",
+ "ArchiveDescription": "",
+ "CreationDate": "2023-09-11T15:13:41.000Z",
+ "Size": 0,
+ "SHA256TreeHash": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
+ }
+ ]
+}
+```
+
+### Delete an archive
+
+You can delete a Glacier archive using the [`DeleteArchive`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-archive-delete.html) API.
+
+Run the following command to delete the previously created archive:
+
+{{< command >}}
+$ awslocal glacier delete-archive \
+ --vault-name sample-vault --account-id - --archive-id d41d8cd98f00b204e9800998ecf8427e
+{{< /command >}}
+
+### Delete a vault
+
+You can delete a Glacier vault with the [`DeleteVault`](https://docs.aws.amazon.com/amazonglacier/latest/dev/api-vault-delete.html) API.
+
+Run the following command to delete the vault:
+{{< command >}}
+$ awslocal glacier delete-vault --vault-name sample-vault --account-id -
+{{< /command >}}
diff --git a/src/content/docs/aws/services/glue.md b/src/content/docs/aws/services/glue.md
new file mode 100644
index 00000000..9b148398
--- /dev/null
+++ b/src/content/docs/aws/services/glue.md
@@ -0,0 +1,460 @@
+---
+title: Glue
+linkTitle: Glue
+description: Get started with Glue on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+The Glue API in LocalStack Pro allows you to run ETL (Extract-Transform-Load) jobs locally, maintaining table metadata in the local Glue data catalog, and using the Spark ecosystem (PySpark/Scala) to run data processing workflows.
+
+LocalStack allows you to use the Glue APIs in your local environment.
+The supported APIs are available on our [API coverage page](/references/coverage/coverage_glue/), which provides information on the extent of Glue's integration with LocalStack.
+
+{{< callout >}}
+LocalStack now includes a container-based Glue Job executor, enabling Glue jobs to run within a Docker environment.
+Previously, LocalStack relied on a pre-packaged binary that included Spark and other required components.
+The new executor leverages the `aws-glue-libs` Docker image, providing better production parity, faster startup times, and more reliable execution.
+
+Key enhancements include:
+
+- Running Glue jobs inside Docker containers
+- Providing isolated execution environments per job
+- Executing multiple jobs in parallel
+- Ensuring correct versioning of Spark, Hadoop, Python, Java, and related libraries
+- Improving startup times and offline execution support
+
+To use it, set `GLUE_JOB_EXECUTOR=docker` and `GLUE_JOB_EXECUTOR_PROVIDER=v2` in your LocalStack configuration.
+The new executor additionally deprecates older versions of Glue (`0.9`, `1.0`, `2.0`).
+{{< /callout >}}
+
+## Getting started
+
+This guide is designed for users new to Glue and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create databases and table metadata in Glue, run Glue ETL jobs, import databases from Athena, and run Glue Crawlers with the AWS CLI.
+
+{{< callout >}}
+In order to run Glue jobs, some additional dependencies have to be fetched from the network, including a Docker image of approximately 1.5 GB that includes Spark, Presto, Hive, and other tools.
+These dependencies are automatically fetched when you start up the service, so please make sure you're on a decent internet connection when pulling the dependencies for the first time.
+{{< /callout >}}
+
+### Creating Databases and Table Metadata
+
+The commands below illustrate the creation of some very basic entries (databases, tables) in the Glue data catalog:
+{{< command >}}
+$ awslocal glue create-database --database-input '{"Name":"db1"}'
+$ awslocal glue create-table --database-name db1 --table-input '{"Name":"table1"}'
+$ awslocal glue get-tables --database-name db1
+{{< /command >}}
+
+You should see the following output:
+
+```json
+{
+ "TableList": [
+ {
+ "Name": "table1",
+ "DatabaseName": "db1"
+ }
+ ]
+}
+```
+
+### Running Scripts with Scala and PySpark
+
+Create a new PySpark script named `job.py` with the following code:
+
+```python
+from pyspark.sql import SparkSession
+
+def init_spark():
+ spark = SparkSession.builder.appName("HelloWorld").getOrCreate()
+ sc = spark.sparkContext
+ return spark,sc
+
+def main():
+ spark,sc = init_spark()
+ nums = sc.parallelize([1,2,3,4])
+ print(nums.map(lambda x: x*x).collect())
+
+
+if __name__ == '__main__':
+ main()
+```
+
+You can now copy the script to an S3 bucket:
+{{< command >}}
+$ awslocal s3 mb s3://glue-test
+$ awslocal s3 cp job.py s3://glue-test/job.py
+{{< / command >}}
+
+Next, you can create a job definition:
+
+{{< command >}}
+$ awslocal glue create-job --name job1 --role arn:aws:iam::000000000000:role/glue-role \
+ --command '{"Name": "pythonshell", "ScriptLocation": "s3://glue-test/job.py"}'
+{{< / command >}}
+
+You can finally start the job execution:
+
+{{< command >}}
+$ awslocal glue start-job-run --job-name job1
+{{< / command >}}
+The returned `JobRunId` can be used to query the status of the job execution until it becomes `SUCCEEDED`:
+{{< command >}}
+$ awslocal glue get-job-run --job-name job1 --run-id <JobRunId>
+{{< / command >}}
+
+You should see the following output:
+
+```json
+{
+ "JobRun": {
+ "Id": "733b76d0",
+ "Attempt": 1,
+ "JobRunState": "SUCCEEDED"
+ }
+}
+```
+
+For a more detailed example illustrating how to run a local Glue PySpark job, please refer to this [sample repository](https://github.com/localstack/localstack-pro-samples/tree/master/glue-etl-jobs).
+
+### Importing Athena Tables into Glue Data Catalog
+
+The Glue data catalog is integrated with Athena, and the database/table definitions can be imported via the `import-catalog-to-glue` API.
+
+Assume you are running the following Athena queries to create databases and table definitions:
+
+```sql
+CREATE DATABASE db2
+CREATE EXTERNAL TABLE db2.table1 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://test/table1'
+CREATE EXTERNAL TABLE db2.table2 (a1 Date, a2 STRING, a3 INT) LOCATION 's3://test/table2'
+```
+
+Then this command will import these DB/table definitions into the Glue data catalog:
+{{< command >}}
+$ awslocal glue import-catalog-to-glue
+{{< /command >}}
+
+Afterwards, the databases and tables will be available in Glue.
+You can query the databases with the `get-databases` operation:
+
+{{< command >}}
+$ awslocal glue get-databases
+{{< /command >}}
+
+You should see the following output:
+
+```json
+{
+ "DatabaseList": [
+ ...
+ {
+ "Name": "db2",
+ "Description": "Database db2 imported from Athena",
+ "TargetDatabase": {
+ "CatalogId": "000000000000",
+ "DatabaseName": "db2"
+ }
+ }
+ ]
+}
+```
+
+And you can query the tables with the `get-tables` operation:
+{{< command >}}
+$ awslocal glue get-tables --database-name db2
+{{< / command >}}
+You should see the following output:
+
+```json
+{
+ "TableList": [
+ {
+ "Name": "table1",
+ "DatabaseName": "db2",
+ "Description": "Table db2.table1 imported from Athena",
+ "CreateTime": ...
+ },
+ {
+ "Name": "table2",
+ "DatabaseName": "db2",
+ "Description": "Table db2.table2 imported from Athena",
+ "CreateTime": ...
+ }
+ ]
+}
+```
+
+### Crawlers
+
+Glue crawlers allow extracting metadata from structured data sources.
+
+LocalStack Glue currently supports S3 targets (configurable via `S3Targets`), as well as JDBC targets (configurable via `JdbcTargets`).
+Support for other target types is in our pipeline and will be added soon.
+
+#### S3 Crawler Example
+
+The example below illustrates crawling tables and partition metadata from S3 buckets.
+
+You can first create an S3 bucket with a couple of items:
+
+{{< command >}}
+$ awslocal s3 mb s3://test
+$ printf "1, 2, 3, 4\n5, 6, 7, 8" > /tmp/file.csv
+$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=1/file.csv
+$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Jan/day=2/file.csv
+$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=1/file.csv
+$ awslocal s3 cp /tmp/file.csv s3://test/table1/year=2021/month=Feb/day=2/file.csv
+{{< / command >}}
+
+You can then create and trigger the crawler:
+
+{{< command >}}
+$ awslocal glue create-database --database-input '{"Name":"db1"}'
+$ awslocal glue create-crawler --name c1 --database-name db1 --role arn:aws:iam::000000000000:role/glue-role --targets '{"S3Targets": [{"Path": "s3://test/table1"}]}'
+$ awslocal glue start-crawler --name c1
+{{< / command >}}
+
+Finally, you can query the table metadata that has been created by the crawler:
+
+{{< command >}}
+$ awslocal glue get-tables --database-name db1
+{{< / command >}}
+You should see the following output:
+
+```json
+{
+ "TableList": [{
+ "Name": "table1",
+ "DatabaseName": "db1",
+ "PartitionKeys": [ ... ]
+...
+```
+
+You can also query the created table partitions:
+{{< command >}}
+$ awslocal glue get-partitions --database-name db1 --table-name table1
+{{< / command >}}
+You should see the following output:
+
+```json
+{
+ "Partitions": [{
+ "Values": ["2021", "Jan", "1"],
+ "DatabaseName": "db1",
+ "TableName": "table1",
+...
+```
+
+#### JDBC Crawler Example
+
+When using JDBC crawlers, you can point your crawler towards a Redshift database created in LocalStack.
+
+Below is a rough outline of the steps required to get the integration for the JDBC crawler working.
+You can first create the local Redshift cluster via:
+{{< command >}}
+$ awslocal redshift create-cluster --cluster-identifier c1 --node-type dc1.large --master-username test --master-user-password test --db-name db1
+{{< / command >}}
+The output of this command contains the endpoint address of the created Redshift database:
+
+```json
+...
+ "Endpoint": {
+ "Address": "localhost.localstack.cloud",
+ "Port": 4510
+ },
+...
+```
+
+Then you can use any JDBC or Postgres client to create a table `mytable1` in the Redshift database, and fill the table with some data.
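+
+For example, with `psql` (the username, password, database, and port follow the cluster created above; the columns and sample rows are illustrative):
+
+{{< command >}}
+$ PGPASSWORD=test psql -h localhost.localstack.cloud -p 4510 -U test -d db1 \
+    -c "CREATE TABLE mytable1 (id int, name varchar(32))" \
+    -c "INSERT INTO mytable1 VALUES (1, 'foo'), (2, 'bar')"
+{{< /command >}}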
+
+Next, create the Glue database, the JDBC connection, and the crawler:
+
+{{< command >}}
+$ awslocal glue create-database --database-input '{"Name":"gluedb1"}'
+$ awslocal glue create-connection --connection-input \
+ {"Name":"conn1","ConnectionType":"JDBC","ConnectionProperties":{"USERNAME":"test","PASSWORD":"test","JDBC_CONNECTION_URL":"jdbc:redshift://localhost.localstack.cloud:4510/db1"}}'
+$ awslocal glue create-crawler --name c1 --database-name gluedb1 --role arn:aws:iam::000000000000:role/glue-role --targets '{"JdbcTargets":[{"ConnectionName":"conn1","Path":"db1/%/mytable1"}]}'
+$ awslocal glue start-crawler --name c1
+{{< / command >}}
+
+Once the crawler has started, you have to wait until the `State` changes back to `READY` when querying the current state:
+{{< command >}}
+$ awslocal glue get-crawler --name c1
+{{< /command >}}
+
+Once the crawler has finished running and is back in `READY` state, the Glue table within the `gluedb1` DB should have been populated and can be queried via the API.
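+
+A small polling sketch that blocks until the crawler is done (assumes a POSIX shell):
+
+{{< command >}}
+$ until [ "$(awslocal glue get-crawler --name c1 --query Crawler.State --output text)" = "READY" ]; do sleep 2; done
+{{< /command >}}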
+
+### Schema Registry
+
+The Glue Schema Registry allows you to centrally discover, control, and evolve data stream schemas.
+With the Schema Registry, you can manage and enforce schemas and schema compatibilities in your streaming applications.
+It integrates nicely with [Managed Streaming for Kafka (MSK)](../managed-streaming-for-kafka).
+
+{{< callout >}}
+Currently, LocalStack supports the AVRO data format for the Glue Schema Registry.
+Support for other data formats will be added in the future.
+{{< /callout >}}
+
+You can create a schema registry with the following command:
+{{< command >}}
+$ awslocal glue create-registry --registry-name demo-registry
+{{< /command >}}
+
+You can create a schema in the newly created registry with the `create-schema` command:
+{{< command >}}
+$ awslocal glue create-schema --schema-name demo-schema --registry-id RegistryName=demo-registry --data-format AVRO --compatibility FORWARD \
+ --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}]}'
+{{< /command >}}
+You should see the following output:
+
+```json
+{
+ "RegistryName": "demo-registry",
+ "RegistryArn": "arn:aws:glue:us-east-1:000000000000:file-registry/demo-registry",
+ "SchemaName": "demo-schema",
+ "SchemaArn": "arn:aws:glue:us-east-1:000000000000:schema/demo-registry/demo-schema",
+ "DataFormat": "AVRO",
+ "Compatibility": "FORWARD",
+ "SchemaCheckpoint": 1,
+ "LatestSchemaVersion": 1,
+ "NextSchemaVersion": 2,
+ "SchemaStatus": "AVAILABLE",
+ "SchemaVersionId": "546d3220-6ab8-452c-bb28-0f1f075f90dd",
+ "SchemaVersionStatus": "AVAILABLE"
+}
+```
+
+Once the schema has been created, you can create a new version:
+{{< command >}}
+$ awslocal glue register-schema-version --schema-id SchemaName=demo-schema,RegistryName=demo-registry \
+ --schema-definition '{"type":"record","namespace":"Demo","name":"Person","fields":[{"name":"Name","type":"string"}, {"name":"Address","type":"string"}]}'
+{{< /command >}}
+
+You should see the following output:
+
+```json
+{
+ "SchemaVersionId": "ee38732b-b299-430d-a88b-4c429d9e1208",
+ "VersionNumber": 2,
+ "Status": "AVAILABLE"
+}
+```
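+
+You can fetch a specific version back with the `get-schema-version` operation, using the version number returned above:
+
+{{< command >}}
+$ awslocal glue get-schema-version \
+    --schema-id SchemaName=demo-schema,RegistryName=demo-registry \
+    --schema-version-number VersionNumber=2
+{{< /command >}}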
+
+You can find a more advanced sample in our [localstack-pro-samples repository on GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/glue-msk-schema-registry), which showcases the integration with AWS MSK and automatic schema registrations (including schema rejections based on the compatibilities).
+
+### Delta Lake Tables
+
+LocalStack Glue supports [Delta Lake](https://delta.io), an open-source storage framework that extends Parquet data files with a file-based transaction log for ACID transactions and scalable metadata handling.
+
+{{< callout >}}
+Please note that Delta Lake tables are only [supported for Glue versions `3.0` and `4.0`](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-format-delta-lake.html).
+{{< /callout >}}
+
+To illustrate this feature, we take a closer look at a Glue sample job that creates a Delta Lake table, puts some data into it, and then queries data from the table.
+
+First, we define the PySpark job in a file named `job.py` (see below).
+The job first creates a database `db1` and table `table1`, then inserts data into the table via both a dataframe and an `INSERT INTO` query, and finally fetches the inserted rows via a `SELECT` query:
+
+```python
+from awsglue.context import GlueContext
+from pyspark import SparkContext, SparkConf
+
+conf = SparkConf()
+conf.set("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
+conf.set("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
+glue_context = GlueContext(SparkContext.getOrCreate(conf=conf))
+spark = glue_context.spark_session
+
+# create database and table
+spark.sql("CREATE DATABASE db1")
+spark.sql("CREATE TABLE db1.table1 (name string, key long) USING delta PARTITIONED BY (key) LOCATION 's3a://test/data/'")
+
+# create dataframe and write to table in S3
+df = spark.createDataFrame([("test1", 123)], ["name", "key"])
+df.write.format("delta").options(path="s3a://test/data/") \
+ .mode("append").partitionBy("key").saveAsTable("db1.table1")
+
+# insert data via 'INSERT' query
+spark.sql("INSERT INTO db1.table1 (name, key) VALUES ('test2', 456)")
+
+# get and print results, to run assertions further below
+result = spark.sql("SELECT * FROM db1.table1")
+print("SQL result:", result.toJSON().collect())
+```
+
+You can now run the following commands to create and start the Glue job:
+
+{{< command >}}
+$ awslocal s3 mb s3://test
+$ awslocal s3 cp job.py s3://test/job.py
+$ awslocal glue create-job --name job1 --role arn:aws:iam::000000000000:role/test \
+ --glue-version 4.0 --command '{"Name": "pythonshell", "ScriptLocation": "s3://test/job.py"}'
+$ awslocal glue start-job-run --job-name job1
+
+{
+ "JobRunId": "c9471f40"
+}
+
+{{< / command >}}
+
+The execution of the Glue job can take a few moments - once the job has finished executing, you should see a log line with the query results in the LocalStack container logs, similar to the output below:
+
+```text
+2023-10-17 12:59:20,088 INFO scheduler.DAGScheduler: Job 15 finished: collect at /private/tmp/script-90e5371e.py:28, took 0,158257 s
+SQL result: ['{"name":"test1","key":123}', '{"name":"test2","key":456}']
+```
+
+In order to see the logs above, make sure to enable `DEBUG=1` in the LocalStack container environment.
+Alternatively, you can also retrieve the job logs programmatically via the CloudWatch Logs API - for example, using the job run ID `c9471f40` from above:
+{{< command >}}
+$ awslocal logs get-log-events --log-group-name /aws-glue/jobs/logs-v2 --log-stream-name c9471f40
+
+{ "events": [ ... ] }
+
+{{< / command >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for Glue.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Glue** under the **Analytics** section.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Manage Databases**: Create, view, and delete databases in your Glue catalog by clicking on the **Databases** tab.
+- **Manage Tables**: Create, view, edit, and delete tables in a database in your Glue catalog by clicking on the **Tables** tab.
+- **Manage Connections**: Create, view, and delete Connections in your Glue catalog by clicking on the **Connections** tab.
+- **Manage Crawlers**: Create, view, and delete Crawlers in your Glue catalog by clicking on the **Crawlers** tab.
+- **Manage Jobs**: Create, view, and delete Jobs in your Glue catalog by clicking on the **Jobs** tab.
+- **Manage Schema Registries**: Create, view, and delete Schema Registries in your Glue catalog by clicking on the **Schema Registries** tab.
+- **Manage Schemas**: Create, view, and delete Schemas in your Glue catalog by clicking on the **Schemas** tab.
+
+## Examples
+
+The following Developer Hub applications are using Glue:
+{{< applications service_filter="glu">}}
+
+The following tutorials are using Glue:
+{{< tutorials "/tutorials/schema-evolution-glue-msk">}}
+
+The following code snippets and sample applications provide practical examples of how to use Glue in LocalStack for various use cases:
+
+- [localstack-pro-samples/glue-etl-jobs](https://github.com/localstack/localstack-pro-samples/tree/master/glue-etl-jobs)
+ - Simple demo application illustrating the use of the Glue API to run local ETL jobs using LocalStack.
+- [localstack-pro-samples/glue-redshift-crawler](https://github.com/localstack/localstack-pro-samples/tree/master/glue-redshift-crawler)
+ - Simple demo application illustrating the use of AWS Glue Crawler to populate the Glue metastore from a Redshift database.
+
+## Further Reading
+
+The AWS Glue API is a fairly comprehensive service - more details can be found in the official [AWS Glue Developer Guide](https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html).
+
+## Current Limitations
+
+Support for triggers is currently limited - the basic API endpoints are implemented, but triggers are currently still under development (more details coming soon).
diff --git a/src/content/docs/aws/services/iam.md b/src/content/docs/aws/services/iam.md
new file mode 100644
index 00000000..a209b42f
--- /dev/null
+++ b/src/content/docs/aws/services/iam.md
@@ -0,0 +1,119 @@
+---
+title: "Identity and Access Management (IAM)"
+linkTitle: "Identity and Access Management (IAM)"
+description: Get started with AWS Identity and Access Management (IAM) on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Identity and Access Management (IAM) is a web service provided by Amazon Web Services (AWS) that enables users to control access to AWS resources securely.
+IAM allows organizations to create and manage AWS users, groups, and roles, defining granular permissions to access specific AWS services and resources.
+By centralizing access control, administrators can enforce the principle of least privilege, ensuring users have only the necessary permissions for their tasks.
+
+LocalStack allows you to use the IAM APIs in your local environment to create and manage users, groups, and roles, granting permissions that adhere to the principle of least privilege.
+The supported APIs are available on our [API coverage page]({{< ref "references/coverage/coverage_iam" >}}), which provides information on the extent of IAM's integration with LocalStack.
+The policy coverage is documented in the [IAM coverage documentation]({{< ref "iam-coverage" >}}).
+
+## Getting started
+
+This guide is designed for users new to IAM and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can create a new user named `test`, create an access key pair for the user, and assert that the user is recognized after the access keys are configured in the environment.
+
+By default, in the absence of custom credentials configuration, all requests to LocalStack run under the administrative root user.
+Run the following command to use the [`GetCallerIdentity`](https://docs.aws.amazon.com/cli/latest/reference/sts/get-caller-identity.html) API to confirm that the request is running under the root user:
+
+{{< command >}}
+$ awslocal sts get-caller-identity
+{{< / command >}}
+
+You can see an output similar to the following:
+
+```bash
+{
+ "UserId": "AKIAIOSFODNN7EXAMPLE",
+ "Account": "000000000000",
+ "Arn": "arn:aws:iam::000000000000:root"
+}
+```
+
+You can now create a new user named `test` using the [`CreateUser`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-user.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal iam create-user --user-name test
+{{< / command >}}
+
+You can now create an access key pair for the user using the [`CreateAccessKey`](https://docs.aws.amazon.com/cli/latest/reference/iam/create-access-key.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal iam create-access-key --user-name test
+{{< / command >}}
+
+You can see an output similar to the following:
+
+```bash
+{
+ "AccessKey": {
+ "UserName": "test",
+ "AccessKeyId": "LKIAQAAAAAAAGFWKCM5F",
+ "Status": "Active",
+ "SecretAccessKey": "DUulXk2N2yD6rgoBBR9A/5iXa6dBcLyDknr925Q5",
+ "CreateDate": "2023-07-25T09:36:51+00:00"
+ }
+}
+```
+
+You can save the `AccessKeyId` and `SecretAccessKey` values, and export them in the environment to run commands under the `test` user.
+Run the following command:
+
+{{< command >}}
+$ export AWS_ACCESS_KEY_ID=LKIAQAAAAAAAGFWKCM5F AWS_SECRET_ACCESS_KEY=DUulXk2N2yD6rgoBBR9A/5iXa6dBcLyDknr925Q5
+$ awslocal sts get-caller-identity
+{
+ "UserId": "b2yxf5g824zklfx5ry8o",
+ "Account": "000000000000",
+ "Arn": "arn:aws:iam::000000000000:user/test"
+}
+{{< / command >}}
+
+You can see that the request is now running under the `test` user.
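+
+To switch back to the root user, unset the credentials again:
+
+{{< command >}}
+$ unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
+{{< /command >}}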
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing IAM users, groups, and roles.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **IAM** under the **Security Identity Compliance** section.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create User, Group, Role, and Policy**: Create a new IAM user, group, or role by clicking the top-level **Create** button and filling out the form.
+- **View User, Group, Role, and Policy Details**: View the details of any listed resource by clicking on the desired User, Group, Role, or Policy.
+- **Edit User, Group, Role, and Policy Details**: Edit the details of any listed resource by clicking on the desired User, Group, Role, or Policy and updating its configuration.
+- **Delete User, Group, Role, and Policy**: Select any listed resources to delete them by clicking the **Actions** button and selecting **Remove Selected**.
+
+## Special Tools
+
+LocalStack provides various tools to help you generate, test, and enforce IAM policies more efficiently.
+
+- **IAM Policy Stream**: IAM Policy Stream provides a real-time view of API calls and the corresponding IAM policies they generate, simplifying permission management and ensuring correct permissions are assigned.
+ Learn more in the [IAM Policy Stream documentation]({{< ref "user-guide/security-testing/iam-policy-stream" >}}).
+- **IAM Policy Enforcement**: This configuration enforces IAM policies when interacting with local cloud APIs, simulating a real AWS environment.
+ For additional information, refer to the [IAM Policy Enforcement documentation]({{< ref "iam-enforcement" >}}).
+- **Explainable IAM**: Explainable IAM logs outputs related to failed policy evaluations directly to LocalStack logs, aiding in the identification of necessary policies for successful requests.
+ More details are available in the [Explainable IAM documentation]({{< ref "explainable-iam" >}}).
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use IAM in LocalStack for various use cases:
+
+- [Serverless Container-based APIs with Amazon ECS & API Gateway](https://github.com/localstack/serverless-api-ecs-apigateway-sample)
+- [Event-driven architecture with Amazon SNS FIFO, DynamoDB, Lambda, and S3](https://github.com/localstack/event-driven-architecture-with-amazon-sns-fifo)
+- [Full-Stack application with AWS Lambda, DynamoDB & S3 for shipment validation](https://github.com/localstack/shipment-list-demo)
+- [Enforcement of IAM policies when working with local cloud APIs](https://github.com/localstack/localstack-pro-samples/tree/master/iam-policy-enforcement)
diff --git a/src/content/docs/aws/services/identitystore.md b/src/content/docs/aws/services/identitystore.md
new file mode 100644
index 00000000..a5503799
--- /dev/null
+++ b/src/content/docs/aws/services/identitystore.md
@@ -0,0 +1,79 @@
+---
+title: "Identity Store"
+linkTitle: "Identity Store"
+description: Get started with Identity Store on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Identity Store is a managed service that enables the creation and management of groups within your AWS environment.
+Groups are used to manage access to AWS resources, and Identity Store provides a central location to create and manage groups across your AWS accounts.
+
+LocalStack allows you to use the Identity Store APIs to create and manage groups in your local environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_identitystore" >}}), which provides information on the extent of Identity Store integration with LocalStack.
+
+## Getting started
+
+This guide is aimed at users who are familiar with the AWS CLI and [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+It will walk you through the basics of setting up and managing groups within the AWS Identity Store using LocalStack.
+
+Start your LocalStack container using your preferred method.
+This guide will demonstrate how to create a group within Identity Store, list all groups, and describe a specific group.
+
+### Create a Group in Identity Store
+
+You can create a new group in the Identity Store using the [`CreateGroup`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_CreateGroup.html) API.
+Execute the following command to create a group with an identity store ID of `testls`:
+
+{{< command >}}
+$ awslocal identitystore create-group --identity-store-id testls
+
+{
+ "GroupId": "38cec731-de22-45bf-9af7-b74457bba884",
+ "IdentityStoreId": "testls"
+}
+
+{{< / command >}}
+
+Copy the `GroupId` value from the output, as it will be needed in subsequent steps.
+
+### List all Groups in Identity Store
+
+After creating groups, you might want to list all groups within the Identity Store to manage or review them.
+Run the following command to list all groups using the [`ListGroups`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_ListGroups.html) API:
+
+{{< command >}}
+$ awslocal identitystore list-groups --identity-store-id testls
+
+{
+ "Groups": [
+ {
+ "GroupId": "38cec731-de22-45bf-9af7-b74457bba884",
+ "ExternalIds": [],
+ "IdentityStoreId": "testls"
+ }
+ ]
+}
+
+{{< / command >}}
+
+This command returns a list of all groups, including the group you created in the previous step.
+
+### Describe a Group in Identity Store
+
+To view details about a specific group, use the [`DescribeGroup`](https://docs.aws.amazon.com/singlesignon/latest/IdentityStoreAPIReference/API_DescribeGroup.html) API.
+Run the following command to describe the group you created in the previous step:
+
+{{< command >}}
+$ awslocal identitystore describe-group --identity-store-id testls --group-id 38cec731-de22-45bf-9af7-b74457bba884
+
+{
+ "GroupId": "38cec731-de22-45bf-9af7-b74457bba884",
+ "ExternalIds": [],
+ "IdentityStoreId": "testls"
+}
+
+{{< / command >}}
+
+This command provides detailed information about the specific group, including its ID and any external IDs associated with it.
diff --git a/src/content/docs/aws/services/iot.md b/src/content/docs/aws/services/iot.md
new file mode 100644
index 00000000..49c1c789
--- /dev/null
+++ b/src/content/docs/aws/services/iot.md
@@ -0,0 +1,194 @@
+---
+title: "IoT"
+linkTitle: "IoT"
+tags: ["Base"]
+description: >
+ Get started with AWS IoT on LocalStack
+---
+
+## Introduction
+
+AWS IoT provides cloud services to manage IoT devices and integrate them with other AWS services.
+
+LocalStack supports IoT Core, IoT Data, and IoT Analytics.
+Common operations for creating and updating things, groups, policies, certificates and other entities are implemented with full CloudFormation support.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_iot" >}}).
+
+LocalStack ships a [Message Queuing Telemetry Transport (MQTT)](https://mqtt.org/) broker powered by [Eclipse Mosquitto](https://mosquitto.org/) which supports both pure MQTT and MQTT-over-WSS (WebSockets Secure) protocols.
+
+## Getting Started
+
+This guide is for users who are new to IoT and assumes a basic knowledge of the AWS CLI and the LocalStack [`awslocal`](https://github.com/localstack/awscli-local) wrapper.
+
+Start LocalStack using your preferred method.
+
+To retrieve the MQTT endpoint, use the [`DescribeEndpoint`](https://docs.aws.amazon.com/iot/latest/apireference/API_DescribeEndpoint.html) operation.
+
+{{< command >}}
+$ awslocal iot describe-endpoint
+
+{
+ "endpointAddress": "000000000000.iot.eu-central-1.localhost.localstack.cloud:4510"
+}
+
+{{< / command >}}
+
+{{< callout "tip" >}}
+LocalStack lazy-loads services by default.
+The MQTT broker may not be automatically available on a fresh launch of LocalStack.
+You can make a `DescribeEndpoint` call to start the broker and identify the port.
+{{< /callout >}}
+
+This endpoint can then be used with any MQTT client to publish and subscribe to topics.
+In this example, we will use the [Hive MQTT CLI](https://hivemq.github.io/mqtt-cli/docs/installation/).
+
+Run the following command to subscribe to an MQTT topic.
+
+{{< command >}}
+$ mqtt subscribe \
+ --host 000000000000.iot.eu-central-1.localhost.localstack.cloud \
+ --port 4510 \
+ --topic climate
+{{< /command >}}
+
+In a separate terminal session, publish a message to this topic.
+
+{{< command >}}
+$ mqtt publish \
+ --host 000000000000.iot.eu-central-1.localhost.localstack.cloud \
+ --port 4510 \
+ --topic climate \
+ -m "temperature=30°C;humidity=60%"
+{{< /command >}}
+
+This message will be pushed to all subscribers of this topic, including the one in the first terminal session.
+
+## Authentication
+
+LocalStack IoT maintains its own root certificate authority which is regenerated at every run.
+The root CA certificate can be retrieved from the `/_aws/iot/LocalStackIoTRootCA.pem` endpoint, e.g. `http://localhost:4566/_aws/iot/LocalStackIoTRootCA.pem`.
+
+{{< callout "tip" >}}
+AWS provides its root CA certificate at [https://www.amazontrust.com/repository/AmazonRootCA1.pem](https://www.amazontrust.com/repository/AmazonRootCA1.pem).
+[This section](https://docs.aws.amazon.com/iot/latest/developerguide/server-authentication.html#server-authentication-certs) contains information about CA certificates.
+{{< /callout >}}
+
+When connecting to the endpoints, you will need to provide this root CA certificate for authentication.
+This is illustrated below with the Python [AWS IoT SDK](https://docs.aws.amazon.com/iot/latest/developerguide/iot-sdks.html):
+
+```py
+import awscrt
+import boto3
+from awscrt import auth, io
+from awsiot import mqtt_connection_builder
+
+region = 'eu-central-1'
+iot_client = boto3.client('iot', region_name=region)
+endpoint = iot_client.describe_endpoint()["endpointAddress"]
+endpoint, port = endpoint.split(':')
+port = int(port)  # the SDK expects a numeric port
+
+event_loop_group = io.EventLoopGroup(1)
+host_resolver = io.DefaultHostResolver(event_loop_group)
+client_bootstrap = io.ClientBootstrap(event_loop_group, host_resolver)
+
+credentials_provider = awscrt.auth.AwsCredentialsProvider.new_static(
+ access_key_id='...',
+ secret_access_key='...',
+)
+
+client_id = 'example-client'
+
+# Path to root CA certificate downloaded from `/_aws/iot/LocalStackIoTRootCA.pem`
+ca_filepath = '...'
+
+mqtt_over_wss = mqtt_connection_builder.websockets_with_default_aws_signing(
+ region=region,
+ credentials_provider=credentials_provider,
+ client_bootstrap=client_bootstrap,
+ client_id=client_id,
+ endpoint=endpoint,
+ port=port,
+ ca_filepath=ca_filepath,
+)
+
+mqtt_over_wss.connect().result()
+mqtt_over_wss.subscribe(...)
+```
+
+If you are using pure MQTT, you also need to set the client-side X509 certificates and Application Layer Protocol Negotiation (ALPN) for a successful mutual TLS (mTLS) authentication.
+This is not required for MQTT-over-WSS since it does not use mTLS.
+
+AWS IoT SDKs automatically set the ALPN when the endpoint port is 443.
+However, because LocalStack does not use this port, this must be done manually.
+For details on how ALPN works with AWS, see [this page](https://docs.aws.amazon.com/iot/latest/developerguide/protocols.html).
+
+The client certificate and key can be retrieved using the `CreateKeysAndCertificate` operation.
+The certificate is signed by the LocalStack root CA.
+
+```py
+result = iot_client.create_keys_and_certificate(setAsActive=True)
+
+# Path to file with saved content `result["certificatePem"]`
+cert_file = '...'
+
+# Path to file with saved content `result["keyPair"]["PrivateKey"]`
+priv_key_file = '...'
+
+tls_ctx_options = awscrt.io.TlsContextOptions.create_client_with_mtls_from_path(
+ cert_file, priv_key_file
+)
+tls_ctx_options.alpn_list = ["x-amzn-mqtt-ca"]
+
+mqtt = mqtt_connection_builder._builder(
+ tls_ctx_options,
+ cert_filepath=cert_file,
+ pri_key_filepath=priv_key_file,
+ client_bootstrap=client_bootstrap,
+ client_id=client_id,
+ endpoint=endpoint,
+ port=port,
+ ca_filepath=ca_filepath,
+)
+
+mqtt.connect().result()
+mqtt.subscribe(...)
+```
+
+## Lifecycle Events
+
+LocalStack publishes the [lifecycle events](https://docs.aws.amazon.com/iot/latest/developerguide/life-cycle-events.html) to the standard endpoints.
+
+- `$aws/events/presence/connected/clientId`: when a client connects
+- `$aws/events/presence/disconnected/clientId`: when a client disconnects
+- `$aws/events/subscriptions/subscribed/clientId`: when a client subscribes to a topic
+- `$aws/events/subscriptions/unsubscribed/clientId`: when a client unsubscribes from a topic
+
+Currently, the `principalIdentifier` and `sessionIdentifier` fields in the event payload contain dummy values.
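+
+For example, using the Hive MQTT CLI from the getting-started section, you can watch connect events for all clients (the `+` wildcard matches any client ID; host and port are illustrative):
+
+{{< command >}}
+$ mqtt subscribe \
+    --host 000000000000.iot.eu-central-1.localhost.localstack.cloud \
+    --port 4510 \
+    --topic '$aws/events/presence/connected/+'
+{{< /command >}}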
+
+## Registry Events
+
+LocalStack can publish the [registry events](https://docs.aws.amazon.com/iot/latest/developerguide/registry-events.html), if [you enable it](https://docs.aws.amazon.com/iot/latest/developerguide/iot-events.html#iot-events-enable).
+
+{{< command >}}
+$ awslocal iot update-event-configurations \
+ --event-configurations '{"THING":{"Enabled": true}}'
+{{< / command >}}
+
+You can then subscribe or use topic rules on the following topics:
+
+- `$aws/events/thing/thingName/created`: when a new thing is created
+- `$aws/events/thing/thingName/updated`: when a thing is updated
+- `$aws/events/thing/thingName/deleted`: when a thing is deleted
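+
+For example, you can watch thing creations across all things (again a sketch; the `+` wildcard matches any thing name):
+
+{{< command >}}
+$ mqtt subscribe \
+    --host 000000000000.iot.eu-central-1.localhost.localstack.cloud \
+    --port 4510 \
+    --topic '$aws/events/thing/+/created'
+{{< /command >}}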
+
+## Topic Rules
+
+It is possible to use actions with SQL queries for IoT Topic Rules.
+
+For example, you can use the [`CreateTopicRule`](https://docs.aws.amazon.com/iot/latest/apireference/API_CreateTopicRule.html) operation to define a topic rule with a SQL query `SELECT * FROM 'my/topic' where attr=123` which will execute a trigger whenever a message with attribute `attr=123` is received on the MQTT topic `my/topic`.
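+
+For example, to route matching messages to an SQS queue (the queue name, rule name, and role ARN below are illustrative), first define the rule payload in a file named `rule.json`:
+
+```json
+{
+  "sql": "SELECT * FROM 'my/topic' WHERE attr = 123",
+  "actions": [
+    {
+      "sqs": {
+        "queueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/iot-events",
+        "roleArn": "arn:aws:iam::000000000000:role/iot-rule-role",
+        "useBase64": false
+      }
+    }
+  ]
+}
+```
+
+Then create the queue and the rule:
+
+{{< command >}}
+$ awslocal sqs create-queue --queue-name iot-events
+$ awslocal iot create-topic-rule --rule-name my_rule --topic-rule-payload file://rule.json
+{{< /command >}}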
+
+The following actions are supported:
+
+- [Lambda](https://docs.aws.amazon.com/iot/latest/developerguide/lambda-rule-action.html)
+- [SQS](https://docs.aws.amazon.com/iot/latest/developerguide/sqs-rule-action.html)
+- [Kinesis](https://docs.aws.amazon.com/iot/latest/developerguide/kinesis-rule-action.html)
+- [Firehose](https://docs.aws.amazon.com/iot/latest/developerguide/kinesis-firehose-rule-action.html)
+- [DynamoDBv2](https://docs.aws.amazon.com/iot/latest/developerguide/dynamodb-v2-rule-action.html)
+- [HTTP](https://docs.aws.amazon.com/iot/latest/developerguide/https-rule-action.html) (URL confirmation and substitution templating is not implemented)
diff --git a/src/content/docs/aws/services/iotanalytics.md b/src/content/docs/aws/services/iotanalytics.md
new file mode 100644
index 00000000..51b77372
--- /dev/null
+++ b/src/content/docs/aws/services/iotanalytics.md
@@ -0,0 +1,130 @@
+---
+title: "IoT Analytics"
+linkTitle: "IoT Analytics"
+tags: ["Ultimate"]
+description: Get started with IoT Analytics on LocalStack
+---
+
+{{< callout "warning" >}}
+IoT Analytics will be [retired on 15 December 2025](https://docs.aws.amazon.com/iotanalytics/latest/userguide/iotanalytics-end-of-support.html).
+It will be removed from LocalStack soon after this date.
+{{< /callout >}}
+
+## Introduction
+
+IoT Analytics is a managed service that enables you to collect, store, process, and analyze data generated by your IoT devices.
+It provides a set of tools to build IoT applications without having to manage the underlying infrastructure.
+
+LocalStack allows you to use the IoT Analytics APIs to create and manage channels, data stores, and pipelines in your local environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_iotanalytics" >}}), which provides information on the extent of IoT Analytics integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to IoT Analytics and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a channel, data store, and pipeline within IoT Analytics using LocalStack.
+
+### Create a channel
+
+You can create a channel using the [`CreateChannel`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_CreateChannel.html) API.
+Run the following command to create a channel named `mychannel`:
+
+{{< command >}}
+$ awslocal iotanalytics create-channel --channel-name mychannel
+{{< /command >}}
+
+You can use the [`DescribeChannel`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DescribeChannel.html) API to check the status of the channel:
+
+{{< command >}}
+$ awslocal iotanalytics describe-channel --channel-name mychannel
+{{< /command >}}
+
+The following output is displayed:
+
+```json
+{
+ "channel": {
+ "name": "mychannel",
+ "status": "ACTIVE"
+ }
+}
+```
+
+### Create a data store
+
+You can create a data store using the [`CreateDatastore`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_CreateDatastore.html) API.
+Run the following command to create a data store named `mydatastore`:
+
+{{< command >}}
+$ awslocal iotanalytics create-datastore --datastore-name mydatastore
+{{< /command >}}
+
+You can use the [`DescribeDatastore`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DescribeDatastore.html) API to check the status of the data store:
+
+{{< command >}}
+$ awslocal iotanalytics describe-datastore --datastore-name mydatastore
+{{< /command >}}
+
+The following output is displayed:
+
+```json
+{
+ "datastore": {
+ "name": "mydatastore",
+ "status": "ACTIVE"
+ }
+}
+```
+
+### Create a pipeline
+
+You can create a pipeline using the [`CreatePipeline`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_CreatePipeline.html) API.
+Run the following command to create a pipeline named `mypipeline`:
+
+{{< command >}}
+$ awslocal iotanalytics create-pipeline --cli-input-json file://mypipeline.json
+{{< /command >}}
+
+The `mypipeline.json` file contains the following content:
+
+```json
+{
+ "pipelineName": "mypipeline",
+ "pipelineActivities": [
+ {
+ "channel": {
+ "name": "mychannelactivity",
+ "channelName": "mychannel",
+ "next": "mystoreactivity"
+ }
+ },
+ {
+ "datastore": {
+ "name": "mystoreactivity",
+ "datastoreName": "mydatastore"
+ }
+ }
+ ]
+}
+```
+
+You can use the [`DescribePipeline`](https://docs.aws.amazon.com/iotanalytics/latest/APIReference/API_DescribePipeline.html) API to check the status of the pipeline:
+
+{{< command >}}
+$ awslocal iotanalytics describe-pipeline --pipeline-name mypipeline
+{{< /command >}}
+
+The following output is displayed:
+
+```json
+{
+ "pipeline": {
+ "name": "mypipeline"
+ }
+}
+```
+
+## Current Limitations
+
+The IoT Analytics service provider is currently mocked in LocalStack, and the service does not interface with the IoT Core service.
diff --git a/src/content/docs/aws/services/iotdata.md b/src/content/docs/aws/services/iotdata.md
new file mode 100644
index 00000000..175d8d04
--- /dev/null
+++ b/src/content/docs/aws/services/iotdata.md
@@ -0,0 +1,95 @@
+---
+title: "IoT Data"
+linkTitle: "IoT Data"
+tags: ["Ultimate"]
+description: Get started with IoT Data on LocalStack
+---
+
+## Introduction
+
+IoT Data provides secure, bi-directional communication between Internet-connected things, such as sensors, actuators, embedded devices, or smart appliances, and the AWS Cloud.
+It allows you to connect your devices to the cloud and interact with them using the AWS Management Console, AWS CLI, or AWS SDKs.
+
+LocalStack allows you to use the IoT Data APIs to update, get, and delete the shadow of a thing in your local environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_iot-data" >}}), which provides information on the extent of IoT Data integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to IoT Data and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to update, get, and delete the shadow of a thing using IoT Data.
+
+### Update the shadow
+
+You can update the shadow of a thing using the [`UpdateThingShadow`](https://docs.aws.amazon.com/iot/latest/apireference/API_UpdateThingShadow.html) API.
+Run the following command to update the shadow of a thing named `MyRPi`:
+
+{{< command >}}
+$ awslocal iot-data update-thing-shadow \
+ --thing-name "MyRPi" \
+ --payload "{\"state\":{\"reported\":{\"moisture\":\"okay\"}}}" \
+ output.txt --cli-binary-format raw-in-base64-out
+{{< /command >}}
+
+The `output.txt` file contains the following output:
+
+```json
+{
+ "state": {
+ "reported": {
+ "moisture": "okay"
+ }
+ },
+ "metadata": {
+ "reported": {
+ "moisture": {
+ "timestamp": 1724226109
+ }
+ }
+ },
+ "version": 1,
+ "timestamp": 1724226109
+}
+```
+
+### Get the shadow
+
+You can get the shadow of a thing using the [`GetThingShadow`](https://docs.aws.amazon.com/iot/latest/apireference/API_GetThingShadow.html) API.
+Run the following command to get the shadow:
+
+{{< command >}}
+$ awslocal iot-data get-thing-shadow \
+ --thing-name "MyRPi" \
+ output.txt
+{{< /command >}}
+
+The `output.txt` will contain the same output as the previous command.
+
+### Delete the shadow
+
+You can delete the shadow of a thing using the [`DeleteThingShadow`](https://docs.aws.amazon.com/iot/latest/apireference/API_DeleteThingShadow.html) API.
+Run the following command to delete the shadow:
+
+{{< command >}}
+$ awslocal iot-data delete-thing-shadow \
+ --thing-name "MyRPi" \
+ output.txt
+{{< /command >}}
+
+The `output.txt` will contain the following output:
+
+```json
+{
+ "version": 1,
+ "timestamp": 1724226371
+}
+```
+
+## Device Shadows
+
+LocalStack supports both unnamed (classic) and named device shadows.
+
+You can use AWS CLI and [MQTT topics](https://docs.aws.amazon.com/iot/latest/developerguide/device-shadow-mqtt.html) to get, update or delete device shadow state information.
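+
+For example, with any MQTT client connected to the broker returned by the IoT `DescribeEndpoint` call, you can request the classic shadow over the standard topics (a sketch; host and port are illustrative):
+
+{{< command >}}
+$ mqtt subscribe \
+    --host 000000000000.iot.us-east-1.localhost.localstack.cloud \
+    --port 4510 \
+    --topic '$aws/things/MyRPi/shadow/get/accepted'
+{{< /command >}}
+
+Publishing an empty message to the `get` topic triggers a response on the `get/accepted` topic:
+
+{{< command >}}
+$ mqtt publish \
+    --host 000000000000.iot.us-east-1.localhost.localstack.cloud \
+    --port 4510 \
+    --topic '$aws/things/MyRPi/shadow/get' \
+    -m ''
+{{< /command >}}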
+
+The endpoint returned by `DescribeEndpoint` currently does not support the [device shadow REST API](https://docs.aws.amazon.com/iot/latest/developerguide/device-shadow-rest-api.html#API_GetThingShadow).
diff --git a/src/content/docs/aws/services/iotwireless.md b/src/content/docs/aws/services/iotwireless.md
new file mode 100644
index 00000000..ccae3523
--- /dev/null
+++ b/src/content/docs/aws/services/iotwireless.md
@@ -0,0 +1,158 @@
+---
+title: "IoT Wireless"
+linkTitle: "IoT Wireless"
+description: Get started with IoT Wireless on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+AWS IoT Wireless is a managed service that enables customers to connect and manage wireless devices.
+The service provides a set of APIs to manage wireless devices, gateways, and destinations.
+
+LocalStack allows you to use the IoT Wireless APIs in your local environment to create and manage wireless devices and gateways.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_iotwireless" >}}), which provides information on the extent of IoT Wireless's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to IoT Wireless and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to use IoT Wireless to create wireless devices and gateways with the AWS CLI.
+
+### Create a Device Profile
+
+You can create a device profile using the [`CreateDeviceProfile`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_CreateDeviceProfile.html) API.
+Run the following command to create a device profile:
+
+{{< command >}}
+$ awslocal iotwireless create-device-profile
+{{< / command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Id": "b8a8e3a8"
+}
+```
+
+You can list the device profiles using the [`ListDeviceProfiles`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_ListDeviceProfiles.html) API.
+Run the following command to list the device profiles:
+
+{{< command >}}
+$ awslocal iotwireless list-device-profiles
+{{< / command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "DeviceProfileList": [
+ {
+ "Id": "b8a8e3a8"
+ }
+ ]
+}
+```
+
+### Create a Wireless Device
+
+You can create a wireless device using the [`CreateWirelessDevice`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_CreateWirelessDevice.html) API.
+Run the following command to create a wireless device:
+
+{{< command >}}
+$ awslocal iotwireless create-wireless-device \
+ --cli-input-json file://input.json
+{{< / command >}}
+
+The `input.json` file contains the following content:
+
+```json
+{
+ "Description": "My LoRaWAN wireless device",
+ "DestinationName": "IoTWirelessDestination",
+ "LoRaWAN": {
+ "DeviceProfileId": "ab0c23d3-b001-45ef-6a01-2bc3de4f5333",
+ "ServiceProfileId": "fe98dc76-cd12-001e-2d34-5550432da100",
+ "OtaaV1_1": {
+ "AppKey": "3f4ca100e2fc675ea123f4eb12c4a012",
+ "JoinEui": "b4c231a359bc2e3d",
+ "NwkKey": "01c3f004a2d6efffe32c4eda14bcd2b4"
+ },
+ "DevEui": "ac12efc654d23fc2"
+ },
+ "Name": "SampleIoTWirelessThing",
+ "Type": "LoRaWAN"
+}
+```
+
+You can list the wireless devices using the [`ListWirelessDevices`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_ListWirelessDevices.html) API.
+Run the following command to list the wireless devices:
+
+{{< command >}}
+$ awslocal iotwireless list-wireless-devices
+{{< / command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "WirelessDeviceList": [
+ {
+ "Id": "0bca2fe2",
+ "Type": "LoRaWAN",
+ "Name": "SampleIoTWirelessThing",
+ "DestinationName": "IoTWirelessDestination",
+ "LoRaWAN": {
+ "DevEui": "ac12efc654d23fc2"
+ }
+ }
+ ]
+}
+```
+
+### Create a Wireless Gateway
+
+You can create a wireless gateway using the [`CreateWirelessGateway`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_CreateWirelessGateway.html) API.
+Run the following command to create a wireless gateway:
+
+{{< command >}}
+$ awslocal iotwireless create-wireless-gateway \
+ --lorawan GatewayEui="a1b2c3d4567890ab",RfRegion="US915" \
+ --name "myFirstLoRaWANGateway" \
+ --description "Using my first LoRaWAN gateway"
+{{< / command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "Id": "e519dc4e"
+}
+```
+
+You can list the wireless gateways using the [`ListWirelessGateways`](https://docs.aws.amazon.com/iot-wireless/2020-11-22/API_ListWirelessGateways.html) API.
+Run the following command to list the wireless gateways:
+
+{{< command >}}
+$ awslocal iotwireless list-wireless-gateways
+{{< / command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "WirelessGatewayList": [
+ {
+ "Id": "e519dc4e",
+ "Name": "myFirstLoRaWANGateway",
+ "Description": "Using my first LoRaWAN gateway",
+ "LoRaWAN": {
+ "GatewayEui": "a1b2c3d4567890ab",
+ "RfRegion": "US915"
+ }
+ }
+ ]
+}
+```
diff --git a/src/content/docs/aws/services/kinesis.md b/src/content/docs/aws/services/kinesis.md
new file mode 100644
index 00000000..d3725daf
--- /dev/null
+++ b/src/content/docs/aws/services/kinesis.md
@@ -0,0 +1,218 @@
+---
+title: "Kinesis Data Streams"
+linkTitle: "Kinesis Data Streams"
+description: Get started with Kinesis Data Streams on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Kinesis Data Streams is an AWS service for ingesting, buffering, and processing data in high throughput data streams.
+It is used for applications that require real-time processing and deriving insights from data streams such as logs, metrics, user interactions, and sensor readings.
+
+LocalStack allows you to use the Kinesis Data Streams APIs in your local environment, from setting up data streams and configuring data processing to building real-time applications.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_kinesis" >}}).
+
+Emulation for Kinesis is powered by [Kinesis Mock](https://github.com/etspaceman/kinesis-mock).
+
+## Getting started
+
+This guide is designed for users new to Kinesis Data Streams and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a Lambda function to consume events from a Kinesis stream with the AWS CLI.
+
+### Create a Lambda function
+
+You need to create a Lambda function that receives a Kinesis event input and processes the messages that it contains.
+Create a file named `index.mjs` with the following content:
+
+```javascript
+console.log('Loading function');
+
+export const handler = (event, context) => {
+ event.Records.forEach(record => {
+ let payload = Buffer.from(record.kinesis.data, 'base64').toString('ascii');
+ console.log('Decoded payload:', payload);
+ });
+};
+```
+
+You can create a Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) API.
+Run the following command to create a Lambda function named `ProcessKinesisRecords`:
+
+{{< command >}}
+$ zip function.zip index.mjs
+$ awslocal lambda create-function \
+ --function-name ProcessKinesisRecords \
+ --zip-file fileb://function.zip \
+ --handler index.handler \
+ --runtime nodejs18.x \
+ --role arn:aws:iam::000000000000:role/lambda-kinesis-role
+{{< / command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "FunctionName": "ProcessKinesisRecords",
+ "FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:ProcessKinesisRecords",
+ "Runtime": "nodejs18.x",
+ "Role": "arn:aws:iam::000000000000:role/lambda-kinesis-role",
+ "Handler": "index.handler",
+ ...
+}
+```
+
+### Invoke the Lambda function
+
+Create a file named `input.txt` with the following JSON content:
+
+```json
+{
+ "Records": [
+ {
+ "kinesis": {
+ "kinesisSchemaVersion": "1.0",
+ "partitionKey": "1",
+ "sequenceNumber": "49590338271490256608559692538361571095921575989136588898",
+ "data": "SGVsbG8sIHRoaXMgaXMgYSB0ZXN0Lg==",
+ "approximateArrivalTimestamp": 1545084650.987
+ },
+ "eventSource": "aws:kinesis",
+ "eventVersion": "1.0",
+ "eventID": "shardId-000000000006:49590338271490256608559692538361571095921575989136588898",
+ "eventName": "aws:kinesis:record",
+ "invokeIdentityArn": "arn:aws:iam::000000000000:role/lambda-kinesis-role",
+ "awsRegion": "us-east-1",
+ "eventSourceARN": "arn:aws:kinesis:us-east-1:000000000000:stream/lambda-stream"
+ }
+ ]
+}
+```
+
+The JSON contains a sample Kinesis event.
+You can use the [`Invoke`](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html) API to invoke the Lambda function with the Kinesis event as input.
+Execute the following command:
+
+{{< command >}}
+$ awslocal lambda invoke \
+ --function-name ProcessKinesisRecords \
+ --payload file://input.txt outputfile.txt
+{{< / command >}}
+
+### Create a Kinesis Stream
+
+You can create a Kinesis Stream using the [`CreateStream`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_CreateStream.html) API.
+Run the following command to create a Kinesis Stream named `lambda-stream`:
+
+{{< command >}}
+$ awslocal kinesis create-stream \
+ --stream-name lambda-stream \
+ --shard-count 1
+{{< / command >}}
+
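+Stream creation is asynchronous, so the stream may briefly remain in the `CREATING` state.
+As a small optional step, you can block until it becomes `ACTIVE` using the AWS CLI waiter:
+
+{{< command >}}
+$ awslocal kinesis wait stream-exists \
+ --stream-name lambda-stream
+{{< / command >}}
+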
+You can retrieve the Stream ARN using the [`DescribeStream`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_DescribeStream.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal kinesis describe-stream \
+ --stream-name lambda-stream
+{{< / command >}}
+
+The following output would be retrieved:
+
+```json
+{
+ "StreamDescription": {
+ "Shards": [
+ {
+ "ShardId": "shardId-000000000000",
+ "HashKeyRange": {
+ "StartingHashKey": "0",
+ "EndingHashKey": "340282366920938463463374607431768211455"
+ ...
+ }
+ ],
+ "StreamARN": "arn:aws:kinesis:us-east-1:000000000000:stream/lambda-stream",
+ "StreamName": "lambda-stream",
+ "StreamStatus": "ACTIVE",
+ ...
+}
+```
+
+You can save the `StreamARN` value for later use.
+
+### Add an Event Source in Lambda
+
+You can add an Event Source to your Lambda function using the [`CreateEventSourceMapping`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) API.
+Run the following command to add the Kinesis Stream as an Event Source to your Lambda function:
+
+{{< command >}}
+$ awslocal lambda create-event-source-mapping \
+ --function-name ProcessKinesisRecords \
+ --event-source-arn arn:aws:kinesis:us-east-1:000000000000:stream/lambda-stream \
+ --batch-size 100 \
+ --starting-position LATEST
+{{< / command >}}
+
+### Test the Event Source mapping
+
+You can test the event source mapping by adding a record to the Kinesis Stream using the [`PutRecord`](https://docs.aws.amazon.com/kinesis/latest/APIReference/API_PutRecord.html) API.
+Run the following command to add a record to the Kinesis Stream:
+
+{{< command >}}
+$ awslocal kinesis put-record \
+ --stream-name lambda-stream \
+ --partition-key 1 \
+ --data "Hello, this is a test."
+{{< / command >}}
+
+You can fetch the CloudWatch logs for your Lambda function to confirm that it is reading records from the stream, using the AWS CLI or the LocalStack Resource Browser.
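+
+For example, assuming the default Lambda log group naming of `/aws/lambda/<function-name>`, you can print the decoded payloads from the function's logs:
+
+{{< command >}}
+$ awslocal logs filter-log-events \
+ --log-group-name /aws/lambda/ProcessKinesisRecords \
+ --query 'events[].message'
+{{< / command >}}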
+
+### Performance Tuning
+
+For high-volume workloads or large payloads, we recommend switching to the Scala engine via the `KINESIS_MOCK_PROVIDER_ENGINE=scala` flag, which can deliver up to 10x better performance than the default Node.js engine.
+
+Additionally, the following parameters can be tuned:
+
+- Increase `KINESIS_MOCK_MAXIMUM_HEAP_SIZE` beyond the default `512m` to reduce JVM memory pressure.
+- Increase `KINESIS_MOCK_INITIAL_HEAP_SIZE` beyond the default `256m` to pre-allocate more JVM heap memory.
+- Reduce `KINESIS_LATENCY` artificial response delays from the default `500` milliseconds (or disable entirely with `0`).
+
+Refer to our [Kinesis configuration documentation](https://docs.localstack.cloud/references/configuration/#kinesis) for more details on these parameters.
+
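+As a minimal sketch, assuming you start LocalStack via the `localstack` CLI (which forwards these configuration variables to the container), a tuned setup could look like this:
+
+{{< command >}}
+$ KINESIS_MOCK_PROVIDER_ENGINE=scala \
+ KINESIS_MOCK_INITIAL_HEAP_SIZE=512m \
+ KINESIS_MOCK_MAXIMUM_HEAP_SIZE=1024m \
+ KINESIS_LATENCY=0 \
+ localstack start
+{{< / command >}}
+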
+{{< callout "note" >}}
+`KINESIS_MOCK_MAXIMUM_HEAP_SIZE` and `KINESIS_MOCK_INITIAL_HEAP_SIZE` are only applicable when using the Scala engine.
+Future versions of LocalStack will likely default to using the `scala` engine over the less-performant `node` version currently in use.
+{{< /callout >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing Kinesis Streams & Kafka Clusters.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Kinesis** under the **Analytics** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Stream**: Create a Kinesis Stream by specifying the **Stream Name**, **Shard Count**, and **Stream Mode**.
+- **Create Cluster**: Create a Kafka Cluster by specifying the **Cluster Name**, **Kafka Version**, **Number Of Broker Nodes**, **Instance Type**, and more.
+- **View Streams & Clusters**: View the details of a Stream or Cluster by clicking on the desired resource in the list.
+- **Edit Streams & Clusters**: Edit the details of a Stream or Cluster by clicking on the desired resource in the list.
+- **Delete Streams & Clusters**: Select any listed resources to delete them by clicking the **Actions** button and selecting **Remove Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Kinesis in LocalStack for various use cases:
+
+- [Search application with Lambda, Kinesis, Firehose, ElasticSearch, S3](https://github.com/localstack/sample-fuzzy-movie-search-lambda-kinesis-elasticsearch)
+- [Streaming Data Pipeline with Kinesis, Tinybird, CloudWatch, Lambda](https://github.com/localstack/serverless-streaming-data-pipeline)
+
+## Limitations
+
+In multi-account setups, each AWS account launches a separate instance of Kinesis Mock, which is very resource intensive when a large number of AWS accounts are used.
+[This Kinesis Mock issue](https://github.com/etspaceman/kinesis-mock/issues/377) tracks progress on this limitation.
diff --git a/src/content/docs/aws/services/kinesisanalytics.md b/src/content/docs/aws/services/kinesisanalytics.md
new file mode 100644
index 00000000..53e715ad
--- /dev/null
+++ b/src/content/docs/aws/services/kinesisanalytics.md
@@ -0,0 +1,110 @@
+---
+title: "Kinesis Data Analytics for SQL Applications"
+linkTitle: "Kinesis Data Analytics for SQL Applications"
+description: >
+ Get started with Kinesis Data Analytics for SQL Applications on LocalStack
+tags: ["Ultimate"]
+---
+
+{{< callout "warning" >}}
+Amazon Kinesis Data Analytics for SQL Applications will be [retired on 27 January 2026](https://docs.aws.amazon.com/kinesisanalytics/latest/dev/discontinuation.html).
+It will be removed from LocalStack soon after this date.
+{{< /callout >}}
+
+## Introduction
+
+Kinesis Data Analytics for SQL Applications is a service offered by Amazon Web Services (AWS) that enables you to process and analyze streaming data in real-time.
+It allows you to apply transformations, filtering, and enrichment to streaming data using standard SQL syntax.
+
+LocalStack allows you to use the Kinesis Data Analytics APIs in your local environment.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_kinesisanalytics" >}}).
+
+## Getting started
+
+This guide is designed for users new to Kinesis Data Analytics and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a Kinesis Analytics application using AWS CLI.
+
+### Create an application
+
+You can create a Kinesis Analytics application using the [`CreateApplication`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_CreateApplication.html) API by running the following command:
+
+{{< command >}}
+$ awslocal kinesisanalytics create-application \
+ --application-name test-analytics-app
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "ApplicationSummary": {
+ "ApplicationName": "test-analytics-app",
+ "ApplicationARN": "arn:aws:kinesisanalytics:us-east-1:000000000000:application/test-analytics-app",
+ "ApplicationStatus": "READY"
+ }
+}
+```
+
+### Describe the application
+
+You can describe the application using the [`DescribeApplication`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_DescribeApplication.html) API by running the following command:
+
+{{< command >}}
+$ awslocal kinesisanalytics describe-application \
+ --application-name test-analytics-app
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "ApplicationDetail": {
+ "ApplicationName": "test-analytics-app",
+ "ApplicationARN": "arn:aws:kinesisanalytics:us-east-1:000000000000:application/test-analytics-app",
+ "ApplicationStatus": "READY",
+ "CreateTimestamp": 1718194721.567,
+ "InputDescriptions": [],
+ "OutputDescriptions": [],
+ "ReferenceDataSourceDescriptions": [],
+ "CloudWatchLoggingOptionDescriptions": [],
+ "ApplicationVersionId": 1
+ }
+}
+```
+
+### Tag the application
+
+Add tags to the application using the [`TagResource`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_TagResource.html) API by running the following command:
+
+{{< command >}}
+$ awslocal kinesisanalytics tag-resource \
+ --resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/test-analytics-app \
+ --tags Key=test,Value=test
+{{< /command >}}
+
+You can list the tags for the application using the [`ListTagsForResource`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_ListTagsForResource.html) API by running the following command:
+
+{{< command >}}
+$ awslocal kinesisanalytics list-tags-for-resource \
+ --resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/test-analytics-app
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Tags": [
+ {
+ "Key": "test",
+ "Value": "test"
+ }
+ ]
+}
+```
+
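+### Untag the application
+
+Assuming local coverage of the [`UntagResource`](https://docs.aws.amazon.com/kinesisanalytics/latest/APIReference/API_UntagResource.html) API, you can remove the tag again by its key:
+
+{{< command >}}
+$ awslocal kinesisanalytics untag-resource \
+ --resource-arn arn:aws:kinesisanalytics:us-east-1:000000000000:application/test-analytics-app \
+ --tag-keys test
+{{< /command >}}
+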
+## Limitations
+
+* LocalStack supports basic emulation for Kinesis Data Analytics for SQL Applications.
+ However, the queries are not fully supported and lack parity with AWS.
diff --git a/src/content/docs/aws/services/kms.md b/src/content/docs/aws/services/kms.md
new file mode 100644
index 00000000..47de0f4b
--- /dev/null
+++ b/src/content/docs/aws/services/kms.md
@@ -0,0 +1,190 @@
+---
+title: "Key Management Service (KMS)"
+linkTitle: "Key Management Service (KMS)"
+description: Get started with Key Management Service (KMS) on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Key Management Service (KMS) is a managed service that allows users to handle encryption keys within the Amazon Web Services ecosystem.
+KMS allows users to create, control, and utilize keys to encrypt and decrypt data, as well as to sign and verify messages.
+KMS allows you to create, delete, list, and update aliases (friendly names for your KMS keys), and tag them for identification and automation.
+You can check [the official AWS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) to understand the basic terms and concepts used in the KMS.
+
+LocalStack allows you to use the KMS APIs in your local environment to create, edit, and view symmetric and asymmetric KMS keys, including HMAC keys.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_kms" >}}), which provides information on the extent of KMS's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to KMS and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a simple symmetric encryption key and use it to encrypt/decrypt data.
+
+### Create a key
+
+To generate a new key within the KMS, you can use the [`CreateKey`](https://docs.aws.amazon.com/kms/latest/APIReference/API_CreateKey.html) API.
+Execute the following command to create a new key:
+
+{{< command >}}
+$ awslocal kms create-key
+{{< / command >}}
+
+By default, this command generates a symmetric encryption key, eliminating the need for any additional arguments.
+You can take a look at the `KeyId` of the freshly generated key in the output, and save it for future use.
+
+In case the key ID is misplaced, it is possible to retrieve a comprehensive list of IDs and [Amazon Resource Names](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) (ARNs) for all available keys through the following command:
+
+{{< command >}}
+$ awslocal kms list-keys
+{{< / command >}}
+
+Additionally, if needed, you can obtain extensive details about a specific key by providing its key ID or ARN using the subsequent command:
+
+{{< command >}}
+$ awslocal kms describe-key --key-id <KEY_ID>
+{{< / command >}}
+
+### Encrypt the data
+
+You can now leverage the generated key for encryption purposes.
+For instance, let's consider encrypting "_some important stuff_".
+To do so, you can use the [`Encrypt`](https://docs.aws.amazon.com/kms/latest/APIReference/API_Encrypt.html) API.
+Execute the following command to encrypt the data:
+
+{{< command >}}
+$ awslocal kms encrypt \
+ --key-id 010a4301-4205-4df8-ae52-4c2895d47326 \
+ --plaintext "some important stuff" \
+ --output text \
+ --query CiphertextBlob \
+ | base64 --decode > my_encrypted_data
+{{< / command >}}
+
+You will notice that a new file named `my_encrypted_data` has been created in your current directory.
+This file contains the encrypted data, which can be decrypted using the same key.
+
+### Decrypt the data
+
+To decrypt the data, you can use the [`Decrypt`](https://docs.aws.amazon.com/kms/latest/APIReference/API_Decrypt.html) API.
+You don't need to specify the `KEY_ID` when decrypting the file, since KMS embeds the key ID in the encrypted data.
+However, with asymmetric keys the `KEY_ID` has to be specified.
+
+Execute the following command to decrypt the data:
+
+{{< command >}}
+$ awslocal kms decrypt \
+ --ciphertext-blob fileb://my_encrypted_data \
+ --output text \
+ --query Plaintext \
+ | base64 --decode
+{{< / command >}}
+
+As with the previous `Encrypt` operation, the output is Base64-encoded and must be decoded to retrieve the actual data.
+To achieve this, use the `--output` and `--query` parameters along with the `base64` tool, as before.
+Upon successful execution, the output will correspond to our original text:
+
+```sh
+some important stuff
+```
+
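+### Generate a data key
+
+Beyond direct encryption, KMS keys are commonly used to produce data keys for envelope encryption.
+As a sketch reusing the key ID from the encryption example above, you can request a 256-bit data key via the [`GenerateDataKey`](https://docs.aws.amazon.com/kms/latest/APIReference/API_GenerateDataKey.html) API:
+
+{{< command >}}
+$ awslocal kms generate-data-key \
+ --key-id 010a4301-4205-4df8-ae52-4c2895d47326 \
+ --key-spec AES_256
+{{< / command >}}
+
+The response contains a Base64-encoded plaintext data key and a `CiphertextBlob` that can later be decrypted with the same KMS key.
+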
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing KMS keys.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **KMS** under the **Security Identity Compliance** section.
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Key**: Create a new KMS key by specifying the **Policy**, **Key Usage**, **Tags**, **Multi Region**, **Customer Master Key Spec**, and more.
+- **Edit Key**: Edit an existing KMS key by specifying the **Description**, after clicking the key in the list and clicking **EDIT KEY**.
+- **View Key**: View the details of an existing KMS key by clicking the key in the list.
+- **Enable & Disable Key**: Select any listed keys to enable or disable them by clicking the **Actions** button and selecting **Enable Selected** or **Disable Selected**.
+- **Delete Key**: Select any listed keys to delete them by clicking the **Actions** button and selecting **Schedule Deletion**.
+
+## Custom IDs for KMS keys via tags
+
+You can assign custom IDs to KMS keys using the `_custom_id_` tag during key creation.
+This can be useful to pre-seed a test environment and use a static `KeyId` for your keys.
+
+Below is a simple example to create a key with a custom `KeyId` (note that the `KeyId` should have the format of a UUID):
+
+{{< command >}}
+$ awslocal kms create-key --tags '[{"TagKey":"_custom_id_","TagValue":"00000000-0000-0000-0000-000000000001"}]'
+{{< / command >}}
+
+The following output will be displayed:
+
+```json
+{
+ "KeyMetadata": {
+ "AWSAccountId": "000000000000",
+ "KeyId": "00000000-0000-0000-0000-000000000001",
+ ....
+}
+```
+
+## Custom Key Material for KMS Keys via Tags
+
+You can seed a KMS key with custom key material using the `_custom_key_material_` tag during creation.
+This can be useful to pre-seed a development environment so values encrypted with KMS can be decrypted later.
+
+Here is an example of using custom key material with the value being base64 encoded:
+
+{{< command >}}
+$ echo 'dGhpc2lzYXNlY3VyZWtleQ==' | base64 -d
+
+thisisasecurekey
+
+$ awslocal kms create-key --tags '[{"TagKey":"_custom_key_material_","TagValue":"dGhpc2lzYXNlY3VyZWtleQ=="}]'
+
+{
+ "KeyMetadata": {
+ "AWSAccountId": "000000000000",
+ "KeyId": "00000000-0000-0000-0000-000000000001",
+ ....
+}
+
+{{< / command >}}
+
+## Current Limitations
+
+### Encryption data format
+
+In LocalStack's KMS implementation, the encryption process is uniformly symmetric, even when an asymmetric key is requested.
+Furthermore, LocalStack utilizes an encrypted data format distinct from that employed by AWS.
+
+This could lead to decryption failures if a key is manually generated outside the local KMS environment, imported to KMS using the [ImportKeyMaterial](https://docs.aws.amazon.com/kms/latest/APIReference/API_ImportKeyMaterial.html) API, utilized for encryption within local KMS, and later decryption is attempted externally using the self-generated key.
+However, conventional setups are likely to function seamlessly.
+
+### Key states
+
+In AWS KMS, cryptographic keys exhibit [multiple states](https://docs.aws.amazon.com/kms/latest/developerguide/key-state.html).
+However, LocalStack's KMS implementation provides only a subset of these states:
+
+- `Enabled`
+- `Disabled`
+- `Creating`
+- `PendingImport`
+- `PendingDeletion`
+
+### Multi-region keys
+
+LocalStack's KMS implementation is equipped to facilitate [multi-region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html), but there's a distinct behavior compared to AWS KMS.
+Unlike AWS KMS, the replication of multi-region key replicas in LocalStack KMS isn't automatically synchronized with their corresponding primary key.
+Consequently, adjustments made to the primary key's settings won't propagate automatically to the replica.
+
+### Key aliases
+
+While AWS KMS conveniently establishes [aliases](https://docs.aws.amazon.com/kms/latest/developerguide/kms-alias.html), LocalStack follows suit by supporting these pre-configured aliases.
+However, it's important to note that in LocalStack, these aliases only come into the picture after the first access attempt.
+Until that point, they are not visible.
+
+### Key specs
+
+In AWS KMS, [SM2](https://docs.aws.amazon.com/kms/latest/developerguide/asymmetric-key-specs.html#key-spec-sm:~:text=the%20message%20digest.-,SM2%20key%20spec%20(China%20Regions%20only),-The%20SM2%20key) is a supported key spec for asymmetric keys.
+However, LocalStack's KMS implementation doesn't support this key spec.
diff --git a/src/content/docs/aws/services/lakeformation.md b/src/content/docs/aws/services/lakeformation.md
new file mode 100644
index 00000000..feac678b
--- /dev/null
+++ b/src/content/docs/aws/services/lakeformation.md
@@ -0,0 +1,109 @@
+---
+title: "Lake Formation"
+linkTitle: "Lake Formation"
+description: Get started with Lake Formation on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Lake Formation is a managed service that allows users to build, secure, and manage data lakes.
+Lake Formation allows users to define and enforce fine-grained access controls, manage metadata, and discover and share data across multiple data sources.
+
+LocalStack allows you to use the Lake Formation APIs in your local environment to register resources, grant permissions, and list resources and permissions.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_lakeformation" >}}), which provides information on the extent of Lake Formation's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Lake Formation and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to register an S3 bucket as a resource in Lake Formation, grant permissions to a user, and list the resources and permissions.
+
+### Register the resource
+
+Create a new S3 bucket named `test-bucket` using the `mb` command:
+
+{{< command >}}
+$ awslocal s3 mb s3://test-bucket
+{{< / command >}}
+
+You can now register the S3 bucket as a resource in Lake Formation using the [`RegisterResource`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_RegisterResource.html) API.
+Create a file named `input.json` with the following content:
+
+```json
+{
+ "ResourceArn": "arn:aws:s3:::test-bucket",
+ "UseServiceLinkedRole": true
+}
+```
+
+Run the following command to register the resource:
+
+{{< command >}}
+$ awslocal lakeformation register-resource \
+ --cli-input-json file://input.json
+{{< / command >}}
+
+### List resources
+
+You can list the registered resources using the [`ListResources`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_ListResources.html) API.
+Execute the following command to list the resources:
+
+{{< command >}}
+$ awslocal lakeformation list-resources
+{{< / command >}}
+
+The following output is displayed:
+
+```bash
+{
+ "ResourceInfoList": [
+ {
+ "ResourceArn": "arn:aws:s3:::test-bucket",
+ "LastModified": "2024-07-11T23:27:30.699312+05:30"
+ }
+ ]
+}
+```
+
+### Grant permissions
+
+You can grant permissions to a user or group using the [`GrantPermissions`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_GrantPermissions.html) API.
+Create a file named `permissions.json` with the following content:
+
+```json
+{
+ "CatalogId": "000000000000",
+ "Principal": {
+ "DataLakePrincipalIdentifier": "arn:aws:iam::000000000000:user/lf-developer"
+ },
+ "Resource": {
+ "Table": {
+ "CatalogId": "000000000000",
+ "DatabaseName": "tpc",
+ "TableWildcard": {}
+ }
+ },
+ "Permissions": [
+ "SELECT"
+ ],
+ "PermissionsWithGrantOption": []
+}
+```
+
+Run the following command to grant permissions:
+
+{{< command >}}
+$ awslocal lakeformation grant-permissions \
+ --cli-input-json file://permissions.json
+{{< / command >}}
+
+### List permissions
+
+You can list the permissions granted to a user or group using the [`ListPermissions`](https://docs.aws.amazon.com/lake-formation/latest/dg/API_ListPermissions.html) API.
+Execute the following command to list the permissions:
+
+{{< command >}}
+$ awslocal lakeformation list-permissions
+{{< / command >}}
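+
+With the grant from the previous step in place, the response should look similar to the following (field names follow the `ListPermissions` response shape):
+
+```json
+{
+ "PrincipalResourcePermissions": [
+ {
+ "Principal": {
+ "DataLakePrincipalIdentifier": "arn:aws:iam::000000000000:user/lf-developer"
+ },
+ "Resource": {
+ "Table": {
+ "CatalogId": "000000000000",
+ "DatabaseName": "tpc",
+ "TableWildcard": {}
+ }
+ },
+ "Permissions": [
+ "SELECT"
+ ],
+ "PermissionsWithGrantOption": []
+ }
+ ]
+}
+```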
diff --git a/src/content/docs/aws/services/lambda.md b/src/content/docs/aws/services/lambda.md
new file mode 100644
index 00000000..05cf56ba
--- /dev/null
+++ b/src/content/docs/aws/services/lambda.md
@@ -0,0 +1,485 @@
+---
+title: "Lambda"
+linkTitle: "Lambda"
+description: Get started with Lambda on LocalStack
+tags: ["Free"]
+persistence: supported with limitations
+---
+
+## Introduction
+
+AWS Lambda is a Serverless Function as a Service (FaaS) platform that lets you run code in your preferred programming language on the AWS ecosystem.
+AWS Lambda automatically scales your code to meet demand and handles server provisioning, management, and maintenance.
+AWS Lambda allows you to break down your application into smaller, independent functions that integrate seamlessly with AWS services.
+
+LocalStack allows you to use the Lambda APIs to create, deploy, and test your Lambda functions.
+The supported APIs are available on our [Lambda coverage page]({{< ref "coverage_lambda" >}}), which provides information on the extent of Lambda's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Lambda and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a Lambda function with a Function URL.
+With the Function URL property, you can call a Lambda Function via an HTTP API call.
+
+### Create a Lambda function
+
+To create a new Lambda function, create a new file called `index.js` with the following code:
+
+```javascript
+exports.handler = async (event) => {
+ let body = JSON.parse(event.body)
+ const product = body.num1 * body.num2;
+ const response = {
+ statusCode: 200,
+ body: "The product of " + body.num1 + " and " + body.num2 + " is " + product,
+ };
+ return response;
+};
+```
+
+Enter the following command to create a new Lambda function:
+
+{{< command >}}
+$ zip function.zip index.js
+$ awslocal lambda create-function \
+ --function-name localstack-lambda-url-example \
+ --runtime nodejs18.x \
+ --zip-file fileb://function.zip \
+ --handler index.handler \
+ --role arn:aws:iam::000000000000:role/lambda-role
+{{< / command >}}
+
+{{< callout "note">}}
+To create a predictable URL for the function, you can assign a custom ID by specifying the `_custom_id_` tag on the function itself.
+{{< command >}}
+$ awslocal lambda create-function \
+ --function-name localstack-lambda-url-example \
+ --runtime nodejs18.x \
+ --zip-file fileb://function.zip \
+ --handler index.handler \
+ --role arn:aws:iam::000000000000:role/lambda-role \
+ --tags '{"_custom_id_":"my-custom-subdomain"}'
+{{< / command >}}
+You must specify the `_custom_id_` tag **before** creating a Function URL.
+After the URL configuration is set up, any modifications to the tag will not affect it.
+LocalStack supports assigning custom IDs to both the `$LATEST` version of the function and to an existing version alias.
+{{< /callout >}}
+
+{{< callout >}}
+In the old Lambda provider, you could create a function with any arbitrary string as the role, such as `r1`.
+However, the new provider requires the role ARN to be in the format `arn:aws:iam::000000000000:role/lambda-role` and validates it using an appropriate regex.
+Note that it currently does not check whether the role exists.
+{{< /callout >}}
+
+### Invoke the Function
+
+To invoke the Lambda function, you can use the [`Invoke` API](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html).
+Run the following command to invoke the function:
+
+{{< tabpane text=true persist=false >}}
+ {{% tab header="AWS CLI v1" lang="shell" %}}
+ {{< command >}}
+ $ awslocal lambda invoke --function-name localstack-lambda-url-example \
+ --payload '{"body": "{\"num1\": \"10\", \"num2\": \"10\"}" }' output.txt
+ {{< /command >}}
+ {{% /tab %}}
+ {{% tab header="AWS CLI v2" lang="shell" %}}
+ {{< command >}}
+ $ awslocal lambda invoke --function-name localstack-lambda-url-example \
+ --cli-binary-format raw-in-base64-out \
+ --payload '{"body": "{\"num1\": \"10\", \"num2\": \"10\"}" }' output.txt
+ {{< /command >}}
+ {{% /tab %}}
+{{< /tabpane >}}
+
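+The returned payload is written to `output.txt`.
+Given the handler above (and assuming default JSON serialization of the return value), the file should contain a response similar to:
+
+```json
+{"statusCode":200,"body":"The product of 10 and 10 is 100"}
+```
+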
+### Create a Function URL
+
+{{< callout >}}
+[Response streaming](https://docs.aws.amazon.com/lambda/latest/dg/configuration-response-streaming.html) is currently not supported, so it will still return a synchronous/full response instead.
+{{< /callout >}}
+
+With the Function URL property, there is now a new way to call a Lambda Function via HTTP API call using the [`CreateFunctionURLConfig` API](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunctionUrlConfig.html).
+To create a URL for invoking the function, run the following command:
+
+{{< command >}}
+$ awslocal lambda create-function-url-config \
+ --function-name localstack-lambda-url-example \
+ --auth-type NONE
+{{< / command >}}
+
+This will generate an HTTP URL that can be used to invoke the Lambda function.
+The URL will be in the format `http://<url-id>.lambda-url.us-east-1.localhost.localstack.cloud:4566`.
+
+{{< callout "note">}}
+As previously mentioned, when a Lambda Function has a `_custom_id_` tag, LocalStack sets this tag's value as the subdomain in the Function's URL.
+
+{{< command >}}
+$ awslocal lambda create-function-url-config \
+ --function-name localstack-lambda-url-example \
+ --auth-type NONE
+{
+ "FunctionUrl": "http://my-custom-subdomain.lambda-url....",
+ ....
+}
+{{< / command >}}
+
+In addition, if you pass an existing version alias as a `Qualifier` to the request, the created URL will combine the custom ID and the alias in the form `<custom-id>-<alias>`.
+
+{{< command >}}
+$ awslocal lambda create-function-url-config \
+ --function-name localstack-lambda-url-example \
+ --auth-type NONE \
+ --qualifier test-alias
+{
+ "FunctionUrl": "http://my-custom-subdomain-test-alias.lambda-url....",
+ ....
+}
+{{< / command >}}
+{{< /callout >}}
+
+### Trigger the Lambda function URL
+
+You can now trigger the Lambda function by sending an HTTP POST request to the URL using [curl](https://curl.se/) or your REST HTTP client:
+
+{{< command >}}
+$ curl -X POST \
+ 'http://<url-id>.lambda-url.us-east-1.localhost.localstack.cloud:4566/' \
+ -H 'Content-Type: application/json' \
+ -d '{"num1": "10", "num2": "10"}'
+{{< / command >}}
+
+The following output would be retrieved:
+
+```sh
+The product of 10 and 10 is 100
+```
+
+## Lambda Event Source Mappings
+
+[Lambda event source mappings](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventsourcemapping.html) allow you to connect Lambda functions to other AWS services.
+The following event sources are supported in LocalStack:
+
+- [Simple Queue Service (SQS)](https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html)
+- [DynamoDB](https://docs.aws.amazon.com/lambda/latest/dg/with-ddb.html)
+- [Kinesis](https://docs.aws.amazon.com/lambda/latest/dg/with-kinesis.html)
+- [Managed Streaming for Apache Kafka (MSK)](https://docs.aws.amazon.com/lambda/latest/dg/with-msk.html) ⭐️
+- [Self-Managed Apache Kafka](https://docs.aws.amazon.com/lambda/latest/dg/with-kafka.html) ⭐️
+
+### Behaviour Coverage
+
+The table below shows feature coverage for all supported event sources for the latest version of LocalStack.
+
+Unlike [API operation coverage]({{< ref "coverage_lambda" >}}), this table illustrates the **functional and behavioural coverage** of LocalStack's Lambda Event Source Mapping implementation.
+
+Where necessary, footnotes are used to provide additional context.
+
+{{< callout >}}
+Feature availability and coverage is categorized with the following system:
+- ⭐️ Only Available in LocalStack licensed editions
+- 🟢 Fully Implemented
+- 🟡 Partially Implemented
+- 🟠 Not Implemented
+- ➖ Not Applicable (Not Supported by AWS)
+{{< /callout >}}
+
+| | | SQS | | Stream | | Kafka ⭐️ | |
+|--------------------------------|-------------------------------------------------|:--------:|:----:|:---------:|:----------:|:----------:|:------------:|
+| **Parameter** | **Description** | **Standard** | **FIFO** | **Kinesis** | **DynamoDB** | **Amazon MSK** | **Self-Managed** |
+| BatchSize | Batching events by count. | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
+| *Not Configurable* | Flush a batch when the 6 MB payload limit is reached. | 🟠 | 🟠 | 🟠 | 🟠 | 🟢 | 🟢 |
+| MaximumBatchingWindowInSeconds | Batch by Time Window. | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
+| MaximumRetryAttempts | Discard after N retries. | ➖ | ➖ | 🟢 | 🟢 | ➖ | ➖ |
+| MaximumRecordAgeInSeconds | Discard records older than time `t`. | ➖ | ➖ | 🟢 | 🟢 | ➖ | ➖ |
+| Enabled | Enabling/Disabling. | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
+| FilterCriteria | Filter pattern evaluating. [^1] [^2] | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 |
+| FunctionResponseTypes | Enabling ReportBatchItemFailures. | 🟢 | 🟢 | 🟢 | 🟢 | ➖ | ➖ |
+| BisectBatchOnFunctionError | Bisect a batch on error and retry. | ➖ | ➖ | 🟠 | 🟠 | ➖ | ➖ |
+| ScalingConfig | The scaling configuration for the event source. | 🟠 | 🟠 | ➖ | ➖ | ➖ | ➖ |
+| ParallelizationFactor | Parallel batch processing by shard. | ➖ | ➖ | 🟠 | 🟠 | ➖ | ➖ |
+| DestinationConfig.OnFailure | SQS Failure Destination. | ➖ | ➖ | 🟢 | 🟢 | 🟠 | 🟠 |
+| | SNS Failure Destination. | ➖ | ➖ | 🟢 | 🟢 | 🟠 | 🟠 |
+| | S3 Failure Destination. | ➖ | ➖ | 🟢 | 🟢 | 🟠 | 🟠 |
+| DestinationConfig.OnSuccess | Success Destinations. | ➖ | ➖ | ➖ | ➖ | ➖ | ➖ |
+| MetricsConfig | CloudWatch metrics. | 🟠 | 🟠 | 🟠 | 🟠 | 🟠 | 🟠 |
+| ProvisionedPollerConfig | Control throughput via min-max limits. | ➖ | ➖ | ➖ | ➖ | 🟠 | 🟠 |
+| StartingPosition | Position to start reading from. | ➖ | ➖ | 🟢 | 🟢 | 🟢 | 🟢 |
+| StartingPositionTimestamp | Timestamp to start reading from. | ➖ | ➖ | 🟢 | ➖ | 🟢 | 🟢 |
+| TumblingWindowInSeconds | Duration (seconds) of a processing window. | ➖ | ➖ | 🟠 | 🟠 | ➖ | ➖ |
+| Topics ⭐️ | Kafka topics to read from. | ➖ | ➖ | ➖ | ➖ | 🟢 | 🟢 |
+
+[^1]: Read more at [Control which events Lambda sends to your function](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html)
+[^2]: The available Metadata properties may not have full parity with AWS depending on the event source (read more at [Understanding event filtering basics](https://docs.aws.amazon.com/lambda/latest/dg/invocation-eventfiltering.html#filtering-basics)).
+
+Create a [GitHub issue](https://github.com/localstack/localstack/issues/new/choose) or reach out to [LocalStack support]({{< ref "/getting-started/help-and-support" >}}) if you experience any challenges.
+
+## Lambda Layers (Pro)
+
+[Lambda layers](https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html) let you include additional code and dependencies in your Lambda functions.
+With a valid LocalStack license, you can deploy Lambda Layers locally to streamline your development and testing process.
+The Community image also allows creating, updating, and deleting Lambda Layers, but they are not applied when invoking a Lambda function.
+
+### Creating and using a Lambda Layer Locally
+
+To create a Lambda Layer locally, you can use the [`PublishLayerVersion` API](https://docs.aws.amazon.com/lambda/latest/dg/API_PublishLayerVersion.html) in LocalStack.
+Here's a simple example using Python:
+
+{{< command >}}
+$ mkdir -p /tmp/python/
+$ echo 'def util():' > /tmp/python/testlayer.py
+$ echo ' print("Output from Lambda layer util function")' >> /tmp/python/testlayer.py
+$ (cd /tmp; zip -r testlayer.zip python)
+$ LAYER_ARN=$(awslocal lambda publish-layer-version --layer-name layer1 --zip-file fileb:///tmp/testlayer.zip | jq -r .LayerVersionArn)
+{{< / command >}}
+
+Next, define a Lambda function that uses our layer:
+
+{{< command >}}
+$ echo 'def handler(*args, **kwargs):' > /tmp/testlambda.py
+$ echo ' import testlayer; testlayer.util()' >> /tmp/testlambda.py
+$ echo ' print("Debug output from Lambda function")' >> /tmp/testlambda.py
+$ (cd /tmp; zip testlambda.zip testlambda.py)
+$ awslocal lambda create-function \
+ --function-name func1 \
+ --runtime python3.8 \
+ --role arn:aws:iam::000000000000:role/lambda-role \
+ --handler testlambda.handler \
+ --timeout 30 \
+ --zip-file fileb:///tmp/testlambda.zip \
+ --layers $LAYER_ARN
+{{< / command >}}
+
+Here, we've defined a Lambda function called `handler()` that imports the `util()` function from our `layer1` Lambda Layer.
+We then used the [`CreateFunction` API](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) to create this Lambda function in LocalStack, specifying the `layer1` Lambda Layer as a dependency.
+
+To test our Lambda function and see the output from the Lambda Layer, we can invoke the function and check the logs (with `DEBUG=1` enabled).
+Here's an example:
+
+```shell
+> START RequestId: a8bc4ce6-e2e8-189e-cf58-c2eb72827c23 Version: $LATEST
+> Output from Lambda layer util function
+> Debug output from Lambda function
+> END RequestId: a8bc4ce6-e2e8-189e-cf58-c2eb72827c23
+```
+
+### Referencing Lambda layers from AWS
+
+If your Lambda function references a layer in real AWS, you can integrate it into your local dev environment by making it accessible to the `886468871268` AWS account ID.
+This account is managed by LocalStack on AWS.
+
+To grant access to your layer, run the following command:
+
+{{< command >}}
+$ aws lambda add-layer-version-permission \
+ --layer-name test-layer \
+ --version-number 1 \
+ --statement-id layerAccessFromLocalStack \
+ --principal 886468871268 \
+ --action lambda:GetLayerVersion
+{{< / command >}}
+
+Replace `test-layer` and `1` with the name and version number of your layer, respectively.
+
+After granting access, the next time you reference the layer in one of your local Lambda functions using the AWS Lambda layer ARN, the layer will be automatically pulled down and integrated into your local dev environment.
+
+## LocalStack Lambda Runtime Interface Emulator (RIE)
+
+LocalStack uses a [custom implementation](https://github.com/localstack/lambda-runtime-init/) of the
+[AWS Lambda Runtime Interface Emulator](https://github.com/aws/aws-lambda-runtime-interface-emulator)
+to match the behavior of AWS Lambda as closely as possible while providing additional features
+such as [hot reloading]({{< ref "hot-reloading" >}}).
+We ship our custom implementation as a Golang binary, which gets copied into each Lambda container under `/var/rapid/init`.
+This init binary is used as the entry point for every Lambda container.
+
+Our custom implementation offers additional configuration options,
+but these configurations are primarily intended for LocalStack developers and could change in the future.
+The LocalStack [configuration]({{< ref "configuration" >}}) `LAMBDA_DOCKER_FLAGS` can be used to configure all Lambda containers,
+for example `LAMBDA_DOCKER_FLAGS=-e LOCALSTACK_INIT_LOG_LEVEL=debug`.
+Some noteworthy configurations include:
+- `LOCALSTACK_INIT_LOG_LEVEL` defines the log level of the Golang binary.
+ Values: `trace`, `debug`, `info`, `warn` (default), `error`, `fatal`, `panic`
+- `LOCALSTACK_USER` defines the system user executing the Lambda runtime.
+ Values: `sbx_user1051` (default), `root` (skip dropping root privileges)
+
+The full list of configurations is defined in the Golang function
+[InitLsOpts](https://github.com/localstack/lambda-runtime-init/blob/localstack/cmd/localstack/main.go#L43).
+
+## Special Tools
+
+LocalStack provides various tools to help you develop, debug, and test your AWS Lambda functions more efficiently.
+
+- **Hot reloading**: With Lambda hot reloading, you can continuously apply code changes to your Lambda functions without needing to redeploy them manually.
+ To learn more about how to use hot reloading with LocalStack, check out our [hot reloading documentation]({{< ref "hot-reloading" >}}).
+- **Remote debugging**: LocalStack's remote debugging functionality allows you to attach a debugger to your Lambda function using your preferred IDE.
+ To get started with remote debugging in LocalStack, see our [debugging documentation]({{< ref "debugging" >}}).
+- **Lambda VS Code Extension**: LocalStack's Lambda VS Code Extension supports deploying and invoking Python Lambda functions through AWS SAM or AWS CloudFormation.
+ To get started with the Lambda VS Code Extension, see our [Lambda VS Code Extension documentation]({{< ref "user-guide/lambda-tools/vscode-extension" >}}).
+- **API for querying Lambda runtimes**: LocalStack offers a metadata API to query the list of Lambda runtimes via `GET http://localhost.localstack.cloud:4566/_aws/lambda/runtimes`.
+ It returns the [Supported Runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html) matching AWS parity (i.e., excluding deprecated runtimes) and offers additional filters for `deprecated` runtimes and `all` runtimes (`GET /_aws/lambda/runtimes?filter=all`); see the example below.
+
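+As a quick example, you can query the runtimes endpoint mentioned above with [curl](https://curl.se/):
+
+{{< command >}}
+$ curl -s http://localhost.localstack.cloud:4566/_aws/lambda/runtimes
+{{< / command >}}
+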
+## Resource Browser
+
+The LocalStack Web Application provides a [Resource Browser]({{< ref "/user-guide/web-application/resource-browser/" >}}) for managing Lambda resources.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Lambda** under the **Compute** section.
+
+The Resource Browser displays [Functions](https://app.localstack.cloud/resources/lambda/functions) and [Layers](https://app.localstack.cloud/resources/lambda/layers) resources.
+You can click on individual resources to view their details.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Functions & Layers**: Create a new [Lambda function](https://app.localstack.cloud/resources/lambda/functions/new) or a new [Lambda Layer](https://app.localstack.cloud/resources/lambda/layers/new) by clicking on **Create API** button on top-right and creating a new configuration by clicking on **Submit** button.
+- **View Function & Layer Details**: Click on any function or layer to view detailed information such as the resource's name, ARN, runtime, handler, and more.
+ You can also navigate across different versions of the resource.
+- **Delete Functions & Layers**: To delete a function or layer, select the resource from the Resource Browser, click on the **Remove Selected** button at the top-right of the screen, and confirm the deletion by clicking on the **Continue** button.
+
+## Migrating to Lambda v2
+
+{{< callout >}}
+The legacy Lambda implementation has been removed since LocalStack 3.0 (Docker `latest` since 2023-11-09).
+{{< /callout >}}
+
+As part of the [LocalStack 2.0 release](https://discuss.localstack.cloud/t/new-lambda-implementation-in-localstack-2-0/258), the Lambda provider has been migrated to `v2` (formerly known as `asf`).
+With the new implementation, the following changes have been introduced:
+
+- To run Lambda functions in LocalStack, mount the Docker socket into the LocalStack container.
+ Add the following Docker volume mount to your LocalStack startup configuration: `/var/run/docker.sock:/var/run/docker.sock`.
+ You can find an example of this configuration in our official [`docker-compose.yml` file]({{< ref "/getting-started/installation/#starting-localstack-with-docker-compose" >}}).
+- The `v2` provider discontinues Lambda Executor Modes such as `LAMBDA_EXECUTOR=local`.
+ Previously, this mode was used as a fallback when the Docker socket was unavailable in the LocalStack container, but many users unintentionally used it instead of the configured `LAMBDA_EXECUTOR=docker`.
+ The new provider now behaves similarly to the old `docker-reuse` executor and does not require such configuration.
+- The Lambda containers are now reused between invocations.
+ The changes made to the filesystem (such as in `/tmp`) will persist between subsequent invocations if the function is dispatched to the same container.
+ This is known as a **warm start** (see [Operating Lambda](https://aws.amazon.com/blogs/compute/operating-lambda-performance-optimization-part-1/) for more information).
+ To ensure that each invocation starts with a fresh container, you can set the `LAMBDA_KEEPALIVE_MS` configuration option to 0 milliseconds, to force **cold starts**.
+- The platform uses [official Docker base images](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-images.html) pulled from `public.ecr.aws/lambda/`, instead of `lambci`, and supports both `arm64` and `x86_64` architectures.
+ The Lambda functions filesystem now matches the AWS Lambda production environment.
+ The ARM containers for compatible runtimes are based on Amazon Linux 2, and ARM-compatible hosts can create functions with the `arm64` architecture.
+- Lambda functions in LocalStack resolve AWS domains, such as `s3.amazonaws.com`, to the LocalStack container.
+ This domain resolution is DNS-based and can be disabled by setting `DNS_ADDRESS=0`.
+ For more information, refer to [Transparent Endpoint Injection]({{< ref "user-guide/tools/transparent-endpoint-injection" >}}).
+ Previously, LocalStack provided patched AWS SDKs to redirect AWS API calls transparently to LocalStack.
+- The new provider may generate more exceptions due to invalid input.
+ For instance, while the old provider accepted arbitrary strings (such as `r1`) as Lambda roles when creating a function, the new provider validates role ARNs using a regular expression that requires them to be in the format `arn:aws:iam::000000000000:role/lambda-role`.
+ However, it currently does not verify whether the role actually exists.
+- The new Lambda provider now follows the [AWS Lambda state model](https://aws.amazon.com/blogs/compute/tracking-the-state-of-lambda-functions/) when creating and updating Lambda functions, which allows for asynchronous processing.
+ Functions are always created in the `Pending` state and move to `Active` once they are ready to accept invocations.
+ Previously, the functions were created synchronously by blocking until the function state was active.
+ The configuration `LAMBDA_SYNCHRONOUS_CREATE=1` can force synchronous function creation, but it is not recommended.
+- LocalStack's Lambda implementation allows you to customize the Lambda execution environment using the [Lambda Extensions API](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-extensions-api.html).
+ This API allows for advanced monitoring, observability, or developer tooling, providing greater control and flexibility over your Lambda functions.
+ Lambda functions can also be run on hosts with [multi-architecture support]({{< ref "/references/arm64-support/#lambda-multi-architecture-support" >}}), allowing you to leverage LocalStack's Lambda API to develop and test Lambda functions with high parity.
+
+The following configuration options from the old provider are discontinued in the new provider:
+
+- The `LAMBDA_EXECUTOR` and specifically, the `LAMBDA_EXECUTOR=local` options are no longer supported.
+- The `LAMBDA_STAY_OPEN_MODE` is now the default behavior and can be removed.
+ Instead, use the `LAMBDA_KEEPALIVE_MS` option to configure how long containers should be kept running in between invocations.
+- The `LAMBDA_REMOTE_DOCKER` option is not used anymore since the new provider automatically copies zip files and configures hot reloading.
+- The `LAMBDA_CODE_EXTRACT_TIME` option is no longer used because function creation is now asynchronous.
+- The `LAMBDA_FALLBACK_URL`, `SYNCHRONOUS_KINESIS_EVENTS`, `SYNCHRONOUS_SNS_EVENTS` and `LAMBDA_FORWARD_URL` options are currently not supported.
+- The `LAMBDA_CONTAINER_REGISTRY` option is not used anymore.
+ Instead, use the more flexible `LAMBDA_RUNTIME_IMAGE_MAPPING` option to customize individual runtimes.
+- The `LAMBDA_XRAY_INIT` option is no longer needed because the X-Ray daemon is always initialized.
+
+However, the new provider still supports the following configuration options:
+
+- The `BUCKET_MARKER_LOCAL` option has a new default value, `hot-reload`.
+ The former default value `__local__` is an invalid bucket name.
+- The `LAMBDA_TRUNCATE_STDOUT` option.
+- The `LAMBDA_DOCKER_NETWORK` option.
+- The `LAMBDA_DOCKER_FLAGS` option.
+- The `LAMBDA_REMOVE_CONTAINERS` option.
+- The `LAMBDA_DOCKER_DNS` option since LocalStack 2.2.
+- The `HOSTNAME_FROM_LAMBDA` option since LocalStack 3.0.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Lambda in LocalStack for various use cases:
+
+- [Lambda Hot Reloading](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-hot-reloading) shows how to use hot reloading to update function code and layers without having to redeploy them.
+- [Lambda Code Mounting and Debugging](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-mounting-and-debugging) demonstrates how to debug Lambda functions locally using code mounting.
+- [Lambda Function URL](https://github.com/localstack-samples/localstack-pro-samples/tree/master/lambda-function-urls-javascript) shows how to use HTTP to invoke a Lambda function via its Function URL.
+- [Lambda Layers](https://github.com/localstack/localstack-pro-samples/blob/master/serverless-lambda-layers) demonstrates how to use Lambda layers, which are reusable packages of code that can be shared across multiple functions.
+- [Lambda PHP/Bref](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-php-bref-cdk-app) shows how to use PHP/Bref with and without fpm, using the Serverless framework and AWS CDK.
+- [Lambda Container Images](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-container-image) demonstrates how to use Lambda functions packaged as container images, which can be built using Docker and pushed to a local ECR registry.
+- [Lambda X-Ray](https://github.com/localstack/localstack-pro-samples/tree/master/lambda-xray) shows how to instrument Lambda functions for X-Ray using Powertools and the X-Ray SDK.
+
+## Troubleshooting
+
+### Docker not available
+
+In the old Lambda provider, Lambda functions were executed within the LocalStack container using the local executor mode.
+This mode was used as a fallback if the Docker socket was unavailable in the LocalStack container.
+However, many users inadvertently used the local executor mode instead of the intended Docker executor mode, which caused unexpected behavior.
+
+If you encounter the following error message, you may be using the local executor mode:
+
+{{< tabpane lang="bash" >}}
+{{< tab header="LocalStack Logs" lang="shell" >}}
+Lambda 'arn:aws:lambda:us-east-1:000000000000:function:my-function:$LATEST' changed to failed.
+Reason: Docker not available
+...
+raise DockerNotAvailable("Docker not available")
+{{< /tab >}}
+{{< tab header="AWS CLI" lang="shell" >}}
+An error occurred (ResourceConflictException) when calling the Invoke operation (reached max retries: 0): The operation cannot be performed at this time.
+The function is currently in the following state: Failed
+{{< /tab >}}
+{{< tab header="SAM" lang="shell" >}}
+Error: Failed to create/update the stack: sam-app, Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression "Stacks[].StackStatus" we matched expected path: "CREATE_FAILED" at least once
+{{< /tab >}}
+{{< /tabpane >}}
+
+To fix this issue, add the Docker volume mount `/var/run/docker.sock:/var/run/docker.sock` to your LocalStack startup.
+Refer to our [sample `docker-compose.yml` file](https://github.com/localstack/localstack/blob/master/docker-compose.yml) as an example.
+
+### Function in Pending state
+
+If you receive a `ResourceConflictException` when trying to invoke a function, it is currently in a `Pending` state and cannot be executed yet.
+To wait until the function becomes `active`, you can use the following command:
+
+{{< command >}}
+$ awslocal lambda get-function --function-name my-function
+An error occurred (ResourceConflictException) when calling the Invoke operation (reached max retries: 0):
+The operation cannot be performed at this time.
+The function is currently in the following state: Pending
+
+$ awslocal lambda wait function-active-v2 --function-name my-function
+{{< / command >}}
+
+Alternatively, you can check the function state using the [`GetFunction` API](https://docs.aws.amazon.com/lambda/latest/dg/API_GetFunction.html):
+
+{{< command >}}
+$ awslocal lambda get-function --function-name my-function
+{
+ "Configuration": {
+ ...
+ "RevisionId": "c61d6139-1441-4ad5-983a-5a1cec7a1847",
+ "State": "Pending",
+ "StateReason": "The function is being created.",
+ "StateReasonCode": "Creating",
+ ...
+ }
+}
+
+$ awslocal lambda get-function --function-name my-function
+{
+ "Configuration": {
+ ...
+ "RevisionId": "c6633a28-b8d2-40f7-b8e1-02f6f32e8473",
+ "State": "Active",
+ "LastUpdateStatus": "Successful",
+ ...
+ }
+}
+{{< / command >}}
+
+If the function is still in the `Pending` state, the output will include a `"State": "Pending"` field and a `"StateReason": "The function is being created."` message.
+Once the function is active, the `"State"` field will change to `"Active"` and the `"LastUpdateStatus"` field will indicate the status of the last update.
+
+### Not implemented error
+
+If you are using LocalStack versions prior to 2.0, and encounter a `NotImplementedError` in the LocalStack logs and an `InternalFailure (501) error` in the client while creating a Lambda function using the [`CreateFunction` API](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html), check your `PROVIDER_OVERRIDE_LAMBDA` configuration.
+You might encounter this error if it is set to `legacy`.
diff --git a/src/content/docs/aws/services/managedblockchain.md b/src/content/docs/aws/services/managedblockchain.md
new file mode 100644
index 00000000..c87ea6f0
--- /dev/null
+++ b/src/content/docs/aws/services/managedblockchain.md
@@ -0,0 +1,129 @@
+---
+title: "Managed Blockchain (AMB)"
+linkTitle: "Managed Blockchain (AMB)"
+description: >
+ Get started with Managed Blockchain (AMB) on LocalStack
+tags: ["Ultimate"]
+---
+
+Managed Blockchain (AMB) is a managed service that enables the creation and management of blockchain networks, such as Hyperledger Fabric, Bitcoin, Polygon and Ethereum.
+Blockchain enables the development of applications in which multiple entities can conduct transactions and exchange data securely and transparently, eliminating the requirement for a central, trusted authority.
+
+LocalStack allows you to use the AMB APIs to develop and deploy decentralized applications in your local environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_managedblockchain" >}}), which provides information on the extent of AMB integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to AMB and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a blockchain network, a node, and a proposal.
+
+### Create a blockchain network
+
+You can create a blockchain network using the [`CreateNetwork`](https://docs.aws.amazon.com/managed-blockchain/latest/APIReference/API_CreateNetwork.html) API.
+Run the following command to create a network named `OurBlockchainNet` which uses the Hyperledger Fabric with the following configuration:
+
+{{< command >}}
+$ awslocal managedblockchain create-network \
+ --cli-input-json '{
+ "Name": "OurBlockchainNet",
+ "Description": "OurBlockchainNetDesc",
+ "Framework": "HYPERLEDGER_FABRIC",
+ "FrameworkVersion": "1.2",
+ "FrameworkConfiguration": {
+ "Fabric": {
+ "Edition": "STARTER"
+ }
+ },
+ "VotingPolicy": {
+ "ApprovalThresholdPolicy": {
+ "ThresholdPercentage": 50,
+ "ProposalDurationInHours": 24,
+ "ThresholdComparator": "GREATER_THAN"
+ }
+ },
+ "MemberConfiguration": {
+ "Name": "org1",
+ "Description": "Org1 first member of network",
+ "FrameworkConfiguration": {
+ "Fabric": {
+ "AdminUsername": "MyAdminUser",
+ "AdminPassword": "Password123"
+ }
+ },
+ "LogPublishingConfiguration": {
+ "Fabric": {
+ "CaLogs": {
+ "Cloudwatch": {
+ "Enabled": true
+ }
+ }
+ }
+ }
+ }
+ }'
+
+{
+ "NetworkId": "n-X24AF1AK2GC6MDW11HYW5I5DQC",
+ "MemberId": "m-6VWBWHP2Y15F7TQ2DS093RTCW2"
+}
+
+{{< / command >}}
+
+Copy the `NetworkId` and `MemberId` values from the output of the above command, as we will need them in the next step.
+
+### Create a node
+
+You can create a node using the [`CreateNode`](https://docs.aws.amazon.com/managed-blockchain/latest/APIReference/API_CreateNode.html) API.
+Run the following command to create a node with the following configuration:
+
+{{< command >}}
+$ awslocal managedblockchain create-node \
+ --node-configuration '{
+ "InstanceType": "bc.t3.small",
+ "AvailabilityZone": "us-east-1a",
+ "LogPublishingConfiguration": {
+ "Fabric": {
+ "ChaincodeLogs": {
+ "Cloudwatch": {
+ "Enabled": true
+ }
+ },
+ "PeerLogs": {
+ "Cloudwatch": {
+ "Enabled": true
+ }
+ }
+ }
+ }
+ }' \
+ --network-id n-X24AF1AK2GC6MDW11HYW5I5DQC \
+ --member-id m-6VWBWHP2Y15F7TQ2DS093RTCW2
+
+{
+ "NodeId": "nd-77K8AI0O5BEQD1IW4L8OGKMXV7"
+}
+
+{{< / command >}}
+
+Replace the `NetworkId` and `MemberId` values in the above command with the values you copied in the previous step.
+
+### Create a proposal
+
+You can create a proposal using the [`CreateProposal`](https://docs.aws.amazon.com/managed-blockchain/latest/APIReference/API_CreateProposal.html) API.
+Run the following command to create a proposal with the following configuration:
+
+{{< command >}}
+$ awslocal managedblockchain create-proposal \
+ --actions "Invitations=[{Principal=000000000000}]" \
+ --network-id n-X24AF1AK2GC6MDW11HYW5I5DQC \
+ --member-id m-6VWBWHP2Y15F7TQ2DS093RTCW2
+
+{
+ "ProposalId": "p-NK0PSLDPETJQX01Q4OLBRHP8CZ"
+}
+
+{{< / command >}}
+
+Replace the `NetworkId` and `MemberId` values in the above command with the values you copied in the previous step.
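+
+You can also drive the same workflow from code with boto3.
+The following sketch assumes LocalStack's default gateway endpoint and the example `NetworkId` and `MemberId` returned above:
+
+```python
+import boto3
+
+amb = boto3.client(
+    "managedblockchain",
+    endpoint_url="http://localhost:4566",  # assumed default LocalStack gateway
+    region_name="us-east-1",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+)
+
+network_id = "n-X24AF1AK2GC6MDW11HYW5I5DQC"  # replace with your NetworkId
+
+# Look up the network and its members
+network = amb.get_network(NetworkId=network_id)["Network"]
+print(network["Name"], network["Framework"], network["Status"])
+
+for member in amb.list_members(NetworkId=network_id)["Members"]:
+    print(member["Id"], member["Name"])
+
+# List the proposals created on the network
+for proposal in amb.list_proposals(NetworkId=network_id)["Proposals"]:
+    print(proposal["ProposalId"], proposal["Status"])
+```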
diff --git a/src/content/docs/aws/services/mediastore.md b/src/content/docs/aws/services/mediastore.md
new file mode 100644
index 00000000..1e4c0704
--- /dev/null
+++ b/src/content/docs/aws/services/mediastore.md
@@ -0,0 +1,99 @@
+---
+title: Elemental MediaStore
+linkTitle: Elemental MediaStore
+description: Get started with Elemental MediaStore on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+MediaStore is a scalable and highly available object storage service designed specifically for media content.
+It provides a reliable way to store, manage, and serve media assets, such as audio, video, and images, with low latency and high performance.
+MediaStore seamlessly integrates with other AWS services like Elemental MediaConvert, Elemental MediaLive, Elemental MediaPackage, and CloudFront.
+
+LocalStack allows you to use the Elemental MediaStore APIs as a high-performance storage solution for media content in your local environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_mediastore" >}}), which provides information on the extent of Elemental MediaStore integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Elemental MediaStore and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can create a MediaStore container, upload an asset, and download the asset.
+
+### Create a container
+
+You can create a container using the [`CreateContainer`](https://docs.aws.amazon.com/mediastore/latest/apireference/API_CreateContainer.html) API.
+Run the following command to create a container and retrieve the `Endpoint` value, which should be used in subsequent requests:
+
+{{< command >}}
+$ awslocal mediastore create-container --container-name mycontainer
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+{
+ "Container": {
+ "Endpoint": "http://mediastore-mycontainer.mediastore.localhost.localstack.cloud:4566",
+ "CreationTime": "2023-08-11T09:43:19.982754+01:00",
+ "ARN": "arn:aws:mediastore:us-east-1:000000000000:container/mycontainer",
+ "Name": "mycontainer"
+ }
+}
+```
+
+### Upload an asset
+
+To upload a file named `myfile.txt` to the container, utilize the [`PutObject`](https://docs.aws.amazon.com/mediastore/latest/apireference/API_PutObject.html) API.
+This action will transfer the file to the specified path, `/myfolder/myfile.txt`, within the container.
+Provide the `Endpoint` obtained in the previous step for the operation to succeed.
+Run the following command to upload the file:
+
+{{< command >}}
+$ awslocal mediastore-data put-object \
+ --endpoint http://mediastore-mycontainer.mediastore.localhost.localstack.cloud:4566 \
+ --body myfile.txt \
+ --path /myfolder/myfile.txt \
+ --content-type binary/octet-stream
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+{
+ "ContentSHA256": "",
+ "ETag": "\"111d787cdcfcc358fd15684131f586d8\""
+}
+```
+
+### Download an asset
+
+To retrieve the file from the container, utilize the [`GetObject`](https://docs.aws.amazon.com/mediastore/latest/apireference/API_GetObject.html) API.
+In this process, you need to specify the endpoint, the path for downloading the file, and the location where the output file, such as `/tmp/out.txt`, will be stored.
+The downloaded file will then be accessible at the specified output path.
+Run the following command to download the file:
+
+{{< command >}}
+$ awslocal mediastore-data get-object \
+ --endpoint http://mediastore-mycontainer.mediastore.localhost.localstack.cloud:4566 \
+ --path /myfolder/myfile.txt \
+ /tmp/out.txt
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+{
+ "ContentLength": "716",
+ "ContentType": "binary/octet-stream",
+ "ETag": "\"111d787cdcfcc358fd15684131f586d8\"",
+ "LastModified": "2023-08-11T08:43:20+00:00",
+ "StatusCode": 200
+}
+```
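+
+The same round trip can be scripted with boto3.
+The following sketch assumes the container created above and LocalStack's default gateway endpoint:
+
+```python
+import boto3
+
+# The control-plane client resolves the container endpoint
+mediastore = boto3.client(
+    "mediastore",
+    endpoint_url="http://localhost:4566",  # assumed default LocalStack gateway
+    region_name="us-east-1",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+)
+endpoint = mediastore.describe_container(ContainerName="mycontainer")["Container"]["Endpoint"]
+
+# The data-plane client talks to the container endpoint directly
+data = boto3.client(
+    "mediastore-data",
+    endpoint_url=endpoint,
+    region_name="us-east-1",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+)
+
+data.put_object(Path="/myfolder/myfile.txt", Body=b"hello mediastore", ContentType="binary/octet-stream")
+obj = data.get_object(Path="/myfolder/myfile.txt")
+print(obj["Body"].read())  # b'hello mediastore'
+```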
+
+## Troubleshooting
+
+The Elemental MediaStore service requires the use of a custom HTTP/HTTPS endpoint.
+In case you encounter any issues, please consult our [Networking documentation]({{< ref "references/network-troubleshooting" >}}) for assistance.
diff --git a/src/content/docs/aws/services/memorydb.md b/src/content/docs/aws/services/memorydb.md
new file mode 100644
index 00000000..18222654
--- /dev/null
+++ b/src/content/docs/aws/services/memorydb.md
@@ -0,0 +1,88 @@
+---
+title: "MemoryDB for Redis"
+linkTitle: "MemoryDB for Redis"
+tags: ["Ultimate"]
+description: Get started with MemoryDB on LocalStack
+---
+
+## Introduction
+
+MemoryDB is a fully managed, Redis-compatible, in-memory database tailored for workloads demanding ultra-fast, primary database functionality.
+It streamlines the deployment and management of in-memory databases within the AWS cloud environment, acting as a replacement for using a cache in front of a database for improved durability and performance.
+
+LocalStack provides support for the main MemoryDB APIs surrounding cluster creation, allowing developers to utilize the MemoryDB functionalities in their local development environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_memorydb" >}}), which provides information on the extent of MemoryDB's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to MemoryDB and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can create a MemoryDB cluster and connect to it.
+
+### Basic cluster creation
+
+You can create a MemoryDB cluster using the [`CreateCluster`](https://docs.aws.amazon.com/memorydb/latest/APIReference/API_CreateCluster.html) API.
+Run the following command to create a cluster:
+
+{{< command >}}
+$ awslocal memorydb create-cluster \
+ --cluster-name my-redis-cluster \
+ --node-type db.t4g.small \
+ --acl-name open-access
+{{< /command >}}
+
+Once it becomes available, you will be able to use the cluster endpoint for Redis operations.
+Run the following command to retrieve the cluster endpoint using the [`DescribeClusters`](https://docs.aws.amazon.com/memorydb/latest/APIReference/API_DescribeClusters.html) API:
+
+{{< command >}}
+$ awslocal memorydb describe-clusters --query "Clusters[0].ClusterEndpoint"
+{
+ "Address": "127.0.0.1",
+ "Port": 36739
+}
+{{< /command >}}
+
+The cluster uses a random port from the [external service port range]({{< ref "external-ports" >}}) in regular execution, and a port between 36739 and 46738 in container mode.
+Use this port number to connect to the Redis instance using the `redis-cli` command line tool (the examples below assume port `4510`):
+
+{{< command >}}
+$ redis-cli -p 4510 ping
+PONG
+$ redis-cli -p 4510 set foo bar
+OK
+$ redis-cli -p 4510 get foo
+"bar"
+{{< / command >}}
+
+You can also check the cluster configuration using the [`cluster nodes`](https://redis.io/commands/cluster-nodes) command:
+
+{{< command >}}
+$ redis-cli -c -p 4510 cluster nodes
+...
+{{< / command >}}
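+
+You can run the same checks from Python using the [redis-py](https://pypi.org/project/redis/) client.
+The following sketch assumes the example port `4510` used above:
+
+```python
+import redis
+
+# Connect to the local MemoryDB cluster endpoint (assumed port 4510, see describe-clusters)
+r = redis.Redis(host="localhost", port=4510)
+
+print(r.ping())      # True
+r.set("foo", "bar")
+print(r.get("foo"))  # b'bar'
+```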
+
+## Container mode
+
+To start Redis clusters of a specific version, enable container mode for Redis-based services in LocalStack.
+This approach directs LocalStack to launch Redis instances in distinct containers, utilizing your chosen image tag.
+Additionally, container mode is beneficial for independently examining the logs of each Redis instance.
+To activate this, set the `REDIS_CONTAINER_MODE` configuration variable to `1`.
+
+## Current Limitations
+
+LocalStack's emulation support for MemoryDB primarily focuses on the creation and termination of Redis servers in cluster mode.
+Essential resources for running a cluster, such as parameter groups, security groups, and subnet groups, are mocked but have no effect on the Redis servers' operation.
+
+At present, LocalStack does not support features such as:
+
+- MemoryDB snapshots
+- Failovers
+- User/password management
+- Service updates
+- Replication scaling
+- SSL
+- Migrations
+- Service integrations (e.g., CloudWatch/Kinesis log delivery, SNS notifications) and related testing
diff --git a/src/content/docs/aws/services/mq.md b/src/content/docs/aws/services/mq.md
new file mode 100644
index 00000000..86ab8fe5
--- /dev/null
+++ b/src/content/docs/aws/services/mq.md
@@ -0,0 +1,118 @@
+---
+title: "MQ"
+linkTitle: "MQ"
+description: Get started with MQ on LocalStack
+tags: ["Base"]
+---
+
+## Introduction
+
+MQ is a managed message broker service offered by Amazon Web Services (AWS).
+It facilitates the exchange of messages between various components of distributed applications, enabling reliable and scalable communication.
+AWS MQ supports popular messaging protocols like MQTT, AMQP, and STOMP, making it suitable for a wide range of messaging use cases.
+
+LocalStack allows you to use the MQ APIs to implement pub/sub messaging, request/response patterns, or distributed event-driven architectures in your local environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_mq" >}}), which provides information on the extent of MQ integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to MQ and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an MQ broker and send a message to a sample queue.
+
+### Create a broker
+
+You can create a broker using the [`CreateBroker`](https://docs.aws.amazon.com/amazon-mq/latest/api-reference/brokers.html#brokerspost) API.
+Run the following command to create a broker named `test-broker` with the following configuration:
+
+{{< command >}}
+$ awslocal mq create-broker \
+ --broker-name test-broker \
+ --deployment-mode SINGLE_INSTANCE \
+ --engine-type ACTIVEMQ \
+ --engine-version='5.16.6' \
+ --host-instance-type 'mq.t2.micro' \
+ --auto-minor-version-upgrade \
+ --publicly-accessible \
+ --users='{"ConsoleAccess": true, "Groups": ["testgroup"],"Password": "QXwV*$iUM9USHnVv&!^7s3c@", "Username": "admin"}'
+
+{
+ "BrokerArn": "arn:aws:mq:us-east-1:000000000000:broker:test-broker:b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545",
+ "BrokerId": "b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545"
+}
+
+{{< / command >}}
+
+### Describe the broker
+
+You can use the [`DescribeBroker`](https://docs.aws.amazon.com/amazon-mq/latest/api-reference/brokers.html#brokersget) API to get more detailed information about the broker.
+Run the following command to get information about the broker we created above:
+
+{{< command >}}
+$ awslocal mq describe-broker --broker-id b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545
+
+{
+ "BrokerArn": "arn:aws:mq:us-east-1:000000000000:broker:test-broker:b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545",
+ "BrokerId": "b-f503abb7-66bc-47fb-b1a9-8d8c51ef6545",
+ "BrokerInstances": [
+ {
+ "ConsoleURL": "http://localhost:4513",
+ "Endpoints": [
+ "stomp://localhost:4515",
+ "tcp://localhost:4514"
+ ]
+ }
+ ],
+ "BrokerName": "test-broker",
+ "BrokerState": "RUNNING",
+ "Created": "2022-10-17T07:14:21.065527Z",
+ "DeploymentMode": "SINGLE_INSTANCE",
+ "EngineType": "ACTIVEMQ",
+ "HostInstanceType": "mq.t2.micro",
+ "Tags": {}
+}
+
+{{< / command >}}
+
+### Send a message
+
+Now that the broker is running, we can use `curl` to send a message to a sample queue via the broker's REST API.
+Run the following command to send a message to the `orders.input` queue:
+
+{{< command >}}
+$ curl -XPOST -d "body=message" http://admin:admin@localhost:4513/api/message\?destination\=queue://orders.input
+{{< / command >}}
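+
+Since the broker also exposes a STOMP endpoint (`stomp://localhost:4515` in the `describe-broker` output above), you can publish from code as well.
+The following sketch uses the [stomp.py](https://pypi.org/project/stomp.py/) package and assumes that endpoint and the credentials passed to `create-broker`:
+
+```python
+import stomp
+
+# Connect to the broker's STOMP endpoint (assumed: localhost:4515, see describe-broker output)
+conn = stomp.Connection([("localhost", 4515)])
+conn.connect("admin", "QXwV*$iUM9USHnVv&!^7s3c@", wait=True)
+
+# Publish a message to the sample queue
+conn.send(destination="/queue/orders.input", body="message from stomp.py")
+conn.disconnect()
+```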
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing MQ brokers.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **MQ** under the **App Integration** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Broker**: Create a new MQ broker by clicking on the **Create Broker** button and providing the required parameters.
+- **View Broker**: View details of an existing MQ broker by clicking on the broker name.
+- **Delete Broker**: Select the broker name and click on the **Actions** button followed by **Remove Selected** button.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use MQ in LocalStack for various use cases:
+
+- [Demo application illustrating the use of MQ with LocalStack](https://github.com/localstack/localstack-pro-samples/tree/master/mq-broker)
+
+## Current Limitations
+
+Currently, our MQ emulation offers only fundamental capabilities, and it comes with certain limitations:
+
+- **ActiveMQ Version Limitation:** Presently, only ActiveMQ version 5.16.6 is supported.
+ RabbitMQ is not supported at this time.
+- **IAM User Management:** IAM Users are not actively enforced, although they are necessary for making correct calls within the system.
+- **Configuration Enforcement:** While it is feasible to create configurations, they are not actively enforced within the broker.
+- **Persistence and Cloud Pods:** LocalStack does not provide support for Persistence and Cloud Pods at this time.
+- **API Coverage:** Please note that there is limited API coverage available as part of the current emulation capabilities.
diff --git a/src/content/docs/aws/services/msk.md b/src/content/docs/aws/services/msk.md
new file mode 100644
index 00000000..e1c7e33b
--- /dev/null
+++ b/src/content/docs/aws/services/msk.md
@@ -0,0 +1,267 @@
+---
+title: "Managed Streaming for Kafka (MSK)"
+linkTitle: "Managed Streaming for Kafka (MSK)"
+description: Get started with Managed Streaming for Kafka (MSK) on LocalStack
+tags: ["Ultimate"]
+persistence: supported with limitations
+---
+
+## Introduction
+
+Managed Streaming for Apache Kafka (MSK) is a fully managed Apache Kafka service that allows you to build and run applications that process streaming data.
+MSK offers a centralized platform to facilitate seamless communication between various AWS services and applications through event-driven architectures, facilitating data ingestion, processing, and analytics for various applications.
+MSK also features automatic scaling and built-in monitoring, allowing users to build robust, high-throughput data pipelines.
+
+LocalStack allows you to use the MSK APIs in your local environment to spin up Kafka clusters on the local machine, create topics for exchanging messages, and define event source mappings that trigger Lambda functions when messages are received on a certain topic.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_kafka" >}}), which provides information on the extent of MSK's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Managed Streaming for Kafka and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to configure an MSK Cluster locally, create a Kafka topic, and produce and consume messages.
+
+### Create a local MSK Cluster
+
+To set up a local MSK (Managed Streaming for Apache Kafka) cluster, you can use the [`CreateCluster`](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#CreateCluster) API to create a cluster named `EventsCluster` with three broker nodes.
+
+In this process, you'll need a JSON file named `brokernodegroupinfo.json` which specifies the three subnets where you want your local Amazon MSK to distribute the broker nodes.
+Create the file and add the following content to it:
+
+```json
+{
+ "InstanceType": "kafka.m5.xlarge",
+ "BrokerAZDistribution": "DEFAULT",
+ "ClientSubnets": [
+ "subnet-0123456789111abcd",
+ "subnet-0123456789222abcd",
+ "subnet-0123456789333abcd"
+ ]
+}
+```
+
+Run the following command to create the cluster:
+
+{{< command >}}
+$ awslocal kafka create-cluster \
+ --cluster-name "EventsCluster" \
+ --broker-node-group-info file://brokernodegroupinfo.json \
+ --kafka-version "2.8.0" \
+ --number-of-broker-nodes 3
+{{< / command >}}
+
+The output of the command looks similar to this:
+
+```bash
+{
+ "ClusterArn": "arn:aws:kafka:us-east-1:000000000000:cluster/EventsCluster/b154d18a-8ecb-4691-96b2-50348357fc2f-25",
+ "ClusterName": "EventsCluster",
+ "State": "CREATING"
+}
+```
+
+The cluster creation process might take a few minutes.
+You can describe the cluster using the [`DescribeCluster`](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#DescribeCluster) API.
+Run the following command, replacing `ClusterArn` with the Amazon Resource Name (ARN) you obtained above when you created the cluster.
+
+{{< command >}}
+$ awslocal kafka describe-cluster \
+ --cluster-arn "arn:aws:kafka:us-east-1:000000000000:cluster/EventsCluster/b154d18a-8ecb-4691-96b2-50348357fc2f-25"
+{{< / command >}}
+
+The output of the command looks similar to this:
+
+```bash
+{
+ "ClusterInfo": {
+ "BrokerNodeGroupInfo": {
+ "BrokerAZDistribution": "DEFAULT",
+ "ClientSubnets": [
+ "subnet-01",
+ "subnet-02",
+ "subnet-03"
+ ],
+ "InstanceType": "kafka.m5.xlarge"
+ },
+ "ClusterArn": "arn:aws:kafka:us-east-1:000000000000:cluster/EventsCluster/b154d18a-8ecb-4691-96b2-50348357fc2f-25",
+ "ClusterName": "EventsCluster",
+ "CreationTime": "2022-06-29T02:45:16.848000Z",
+ "CurrentBrokerSoftwareInfo": {
+ "KafkaVersion": "2.5.0"
+ },
+ "CurrentVersion": "K5OWSPKW0IK7LM",
+ "NumberOfBrokerNodes": 3,
+ "State": "ACTIVE",
+ "ZookeeperConnectString": "localhost:4510"
+ }
+}
+```
+
+### Create a Kafka topic
+
+To use LocalStack MSK, you can download and utilize the Kafka command line interface (CLI) to create a topic for producing and consuming data.
+
+To download Apache Kafka, execute the following commands.
+
+{{< command >}}
+$ wget https://archive.apache.org/dist/kafka/2.8.0/kafka_2.12-2.8.0.tgz
+$ tar -xzf kafka_2.12-2.8.0.tgz
+{{< / command >}}
+
+Navigate to the `kafka_2.12-2.8.0` directory.
+Execute the following command, replacing the `--zookeeper` value (`localhost:4510` below) with the `ZookeeperConnectString` you saved from the [`DescribeCluster`](https://docs.aws.amazon.com/msk/1.0/apireference/clusters.html#DescribeCluster) output:
+
+{{< command >}}
+$ bin/kafka-topics.sh \
+ --create \
+ --zookeeper localhost:4510 \
+ --replication-factor 1 \
+ --partitions 1 \
+ --topic LocalMSKTopic
+{{< / command >}}
+
+After executing the command, your output should resemble the following:
+
+```bash
+Created topic LocalMSKTopic.
+```
+
+### Interacting with the topic
+
+You can now utilize the JVM truststore to establish communication with the MSK cluster.
+Create a folder named `/tmp` on the client machine, and navigate to the bin folder of the Apache Kafka installation.
+
+Run the following command, replacing `java_home` with the path of your Java installation (`JAVA_HOME`).
+In this example, the path is `/Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home`.
+
+{{< callout >}}
+The following step is optional and may not be required, depending on the operating system environment being used.
+{{< /callout >}}
+
+{{< command >}}
+$ cp java_home/lib/security/cacerts /tmp/kafka.client.truststore.jks
+{{< / command >}}
+
+While you are still in the `bin` folder of the Apache Kafka installation on the client machine, create a text file named `client.properties` with the following contents:
+
+```txt
+ssl.truststore.location=/tmp/kafka.client.truststore.jks
+```
+
+Run the following command, replacing `ClusterArn` with the Amazon Resource Name (ARN) you have.
+
+{{< command >}}
+$ awslocal kafka get-bootstrap-brokers \
+ --cluster-arn ClusterArn
+{{< / command >}}
+
+To proceed with the following commands, save the broker connection string from the JSON result of the previous command (LocalStack returns it as `BootstrapBrokerString`; on AWS, a TLS endpoint is returned as `BootstrapBrokerStringTls`).
+It should look like this:
+
+```bash
+{
+ "BootstrapBrokerString": "localhost:4511"
+}
+```
+
+Now, navigate to the bin folder and run the next command, replacing `BootstrapBrokerStringTls` with the value you obtained:
+
+{{< command >}}
+$ ./kafka-console-producer.sh \
+ --broker-list BootstrapBrokerStringTls \
+ --producer.config client.properties \
+ --topic LocalMSKTopic
+{{< / command >}}
+
+To send messages to your Apache Kafka cluster, enter any desired message and press Enter.
+You can repeat this process twice or thrice, sending each line as a separate message to the Kafka cluster.
+
+Keep the connection to the client machine open, and open a separate connection to the same machine in a new window.
+
+In this new connection, navigate to the `bin` folder and run a command, replacing `BootstrapBrokerStringTls` with the value you saved earlier.
+This command will allow you to interact with the Apache Kafka cluster using the saved value for secure communication.
+
+{{< command >}}
+$ ./kafka-console-consumer.sh \
+ --bootstrap-server BootstrapBrokerStringTls \
+ --consumer.config client.properties \
+ --topic LocalMSKTopic \
+ --from-beginning
+{{< / command >}}
+
+You should start seeing the messages you entered earlier when you used the console producer command.
+Enter more messages in the producer window, and watch them appear in the consumer window.
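+
+Instead of the console scripts, you can also produce and consume from Python with the [kafka-python](https://pypi.org/project/kafka-python/) package.
+The following sketch assumes the bootstrap broker string `localhost:4511` obtained above:
+
+```python
+from kafka import KafkaConsumer, KafkaProducer
+
+BOOTSTRAP = "localhost:4511"  # replace with your BootstrapBrokerString
+
+# Produce a few messages to the topic
+producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
+for text in ("hello", "from", "python"):
+    producer.send("LocalMSKTopic", text.encode("utf-8"))
+producer.flush()
+
+# Consume everything from the beginning, stopping after 5s of inactivity
+consumer = KafkaConsumer(
+    "LocalMSKTopic",
+    bootstrap_servers=BOOTSTRAP,
+    auto_offset_reset="earliest",
+    consumer_timeout_ms=5000,
+)
+for message in consumer:
+    print(message.value.decode("utf-8"))
+```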
+
+### Adding a local MSK trigger
+
+You can create a Lambda event source mapping between a Lambda function named `my-kafka-function` and a Kafka topic called `LocalMSKTopic`.
+The configuration for this mapping sets the starting position of the topic to `LATEST`.
+
+Run the following command to use the [`CreateEventSourceMapping`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateEventSourceMapping.html) API by specifying the Event Source ARN, the topic name, the starting position, and the Lambda function name.
+
+{{< command >}}
+$ awslocal lambda create-event-source-mapping \
+ --event-source-arn arn:aws:kafka:us-east-1:000000000000:cluster/EventsCluster \
+ --topics LocalMSKTopic \
+ --starting-position LATEST \
+ --function-name my-kafka-function
+{{< / command >}}
+
+Upon successful completion of the operation to create the Lambda Event Source Mapping, you can expect the following response:
+
+```bash
+{
+ "UUID": "9c353a2b-bc1a-48b5-95a6-04baf67f01e4",
+ "StartingPosition": "LATEST",
+ "BatchSize": 100,
+ "ParallelizationFactor": 1,
+ "EventSourceArn": "arn:aws:kafka:us-east-1:000000000000:cluster/EventsCluster",
+ "FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:my-kafka-function",
+ "LastModified": "2021-11-21T20:55:49.438914+01:00",
+ "LastProcessingResult": "OK",
+ "State": "Enabled",
+ "StateTransitionReason": "User action",
+ "Topics": [
+ "LocalMSKTopic"
+ ]
+}
+```
+
+With the event source mapping feature, LocalStack offers an automated process for spawning Lambda functions whenever a message is published to the designated Kafka topic.
+
+You can use the `kafka-console-producer.sh` client script to publish messages to the topic.
+By doing so, you can closely monitor the execution of Lambda functions within Docker containers as new messages arrive by simply observing the LocalStack log output.
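+
+When a message arrives, Lambda receives the records grouped by topic-partition, with base64-encoded values (the standard AWS MSK event format).
+The following handler is a minimal sketch that decodes and logs each record; the function body is illustrative:
+
+```python
+import base64
+
+
+def handler(event, context):
+    # MSK events group records under "records", keyed by "<topic>-<partition>"
+    for topic_partition, records in event.get("records", {}).items():
+        for record in records:
+            payload = base64.b64decode(record["value"]).decode("utf-8")
+            print(f"{record['topic']}[{record['partition']}] offset={record['offset']}: {payload}")
+```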
+
+## Delete the local MSK cluster
+
+You can delete the local MSK cluster using the [`DeleteCluster`](https://docs.aws.amazon.com/cli/latest/reference/kafka/delete-cluster.html) API.
+To do so, you must first obtain the ARN of the cluster you want to delete.
+Run the following command to list all the clusters in the region:
+
+{{< command >}}
+$ awslocal kafka list-clusters --region us-east-1
+{{< / command >}}
+
+To initiate the deletion of a cluster, select the corresponding `ClusterARN` from the list of clusters, and then execute the following command:
+
+{{< command >}}
+$ awslocal kafka delete-cluster --cluster-arn ClusterArn
+{{< / command >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing MSK clusters.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Kafka** under the **Analytics** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Cluster**: Create a new MSK cluster by clicking on the **Create Cluster** button and specifying the required parameters.
+- **View Cluster**: View the details of an existing MSK cluster by clicking on the cluster name.
+- **Edit Cluster**: Edit the configuration of an existing MSK cluster by clicking on the **Edit** button in the cluster details page.
+- **Delete Cluster**: Delete an existing MSK cluster by selecting the cluster name and clicking on the **Actions** dropdown menu, then selecting **Remove Selected**.
diff --git a/src/content/docs/aws/services/mwaa.md b/src/content/docs/aws/services/mwaa.md
new file mode 100644
index 00000000..457b9801
--- /dev/null
+++ b/src/content/docs/aws/services/mwaa.md
@@ -0,0 +1,159 @@
+---
+title: "Managed Workflows for Apache Airflow (MWAA)"
+linkTitle: "Managed Workflows for Apache Airflow (MWAA)"
+description: >
+ Get started with Managed Workflows for Apache Airflow (MWAA) on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Managed Workflows for Apache Airflow (MWAA) is a fully managed service by AWS that simplifies the deployment, management, and scaling of [Apache Airflow](https://airflow.apache.org/) workflows in the cloud.
+MWAA leverages the familiar Airflow features and integrations while integrating with S3, Glue, Redshift, Lambda, and other AWS services to build data pipelines and orchestrate data processing workflows in the cloud.
+
+LocalStack allows you to use the MWAA APIs in your local environment to allow the setup and operation of data pipelines.
+The supported APIs are available on the [API coverage page]({{< ref "coverage_mwaa" >}}).
+
+## Getting started
+
+This guide is designed for users new to MWAA and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an Airflow environment and access the Airflow UI.
+
+### Create an S3 bucket
+
+Create an S3 bucket that will be used for Airflow resources.
+Run the following command to create a bucket using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command.
+
+{{< command >}}
+$ awslocal s3 mb s3://my-mwaa-bucket
+{{< /command >}}
+
+### Create an Airflow environment
+
+You can now create an Airflow environment using the [`CreateEnvironment`](https://docs.aws.amazon.com/mwaa/latest/API/API_CreateEnvironment.html) API.
+Run the following command, specifying the ARN of the bucket created earlier:
+
+{{< command >}}
+$ awslocal mwaa create-environment --dag-s3-path /dags \
+ --execution-role-arn arn:aws:iam::000000000000:role/airflow-role \
+ --network-configuration {} \
+ --source-bucket-arn arn:aws:s3:::my-mwaa-bucket \
+ --airflow-version 2.10.1 \
+ --airflow-configuration-options agent.code=007,agent.name=bond \
+ --name my-mwaa-env
+{{< /command >}}
+
+### Access the Airflow UI
+
+The Airflow UI can be accessed via the URL in the `WebserverUrl` attribute of the response of the `GetEnvironment` operation.
+The username and password are always set to `localstack`.
+
+{{< command >}}
+$ awslocal mwaa get-environment --name my-mwaa-env --query Environment.WebserverUrl
+"http://localhost.localstack.cloud:4510"
+{{< /command >}}
+
+LocalStack also prints this information in the logs:
+
+```bash
+2024-03-06T14:54:47.070 INFO --- [functhread10] l.services.mwaa.provider : Airflow environment 'my-mwaa-env' available at http://localhost.localstack.cloud:4510 with username 'localstack' and password 'localstack'
+```
+
+## Airflow versions
+
+LocalStack supports the following versions of Apache Airflow:
+
+- `2.4.3`
+- `2.5.1`
+- `2.6.3`
+- `2.7.2`
+- `2.8.1`
+- `2.9.2`
+- `2.10.1` (default)
+
+## Airflow configuration options
+
+To configure Airflow environments effectively, you can utilize the `AirflowConfigurationOptions` argument.
+These options are transformed into corresponding environment variables and passed to Airflow.
+For instance:
+
+- `agent.code`:`007` is transformed into `AIRFLOW__AGENT__CODE:007`.
+- `agent.name`:`bond` is transformed into `AIRFLOW__AGENT__NAME:bond`.
+
+This transformation process ensures that your configuration settings are easily applied within the Airflow environment.
+
+## Adding or updating DAGs
+
+When it comes to adding or updating DAGs in Airflow, the process is simple and efficient.
+Just upload your DAGs to the designated S3 bucket path, configured by the `DagS3Path` argument.
+
+For example, the command below uploads a sample DAG named `sample_dag.py` to your S3 bucket named `my-mwaa-bucket`:
+
+{{< command >}}
+$ awslocal s3 cp sample_dag.py s3://my-mwaa-bucket/dags/
+{{< /command >}}
+
+LocalStack syncs new and changed objects in the S3 bucket to the Airflow container every 30 seconds.
+The polling interval can be changed using the [`MWAA_S3_POLL_INTERVAL`]({{< ref "configuration#mwaa" >}}) config option.
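+
+For reference, a minimal `sample_dag.py` might look like the following sketch, assuming Airflow 2.4 or later (which accepts the `schedule` parameter):
+
+```python
+from datetime import datetime
+
+from airflow import DAG
+from airflow.operators.python import PythonOperator
+
+
+def say_hello():
+    print("Hello from LocalStack MWAA!")
+
+
+# A single-task DAG that can be triggered manually from the Airflow UI
+with DAG(
+    dag_id="sample_dag",
+    start_date=datetime(2024, 1, 1),
+    schedule=None,
+    catchup=False,
+) as dag:
+    PythonOperator(task_id="say_hello", python_callable=say_hello)
+```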
+
+## Installing custom plugins
+
+You can extend the capabilities of Airflow by incorporating custom plugins, which introduce new operators, interfaces, or hooks.
+LocalStack seamlessly supports plugins packaged according to [AWS specifications](https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-dag-import-plugins.html#configuring-dag-plugins-test-create).
+
+To integrate your custom plugins into the MWAA environment, upload the packaged `plugins.zip` file to the designated S3 bucket path:
+
+{{< command >}}
+$ awslocal s3 cp plugins.zip s3://my-mwaa-bucket/plugins.zip
+{{< /command >}}
+
+## Installing Python dependencies
+
+LocalStack streamlines the process of installing Python dependencies for Apache Airflow within your environments.
+To get started, create a `requirements.txt` file that lists the required dependencies.
+For example:
+
+```txt
+boto3==1.17.54
+boto==2.49.0
+botocore==1.20.54
+```
+
+Once you have your `requirements.txt` file ready, upload it to the designated S3 bucket, configured for use by the MWAA environment.
+Make sure to upload the file to `/requirements.txt` in the bucket:
+
+{{< command >}}
+$ awslocal s3 cp requirements.txt s3://my-mwaa-bucket/requirements.txt
+{{< /command >}}
+
+After the upload, the environment will be automatically updated, and your Apache Airflow setup will be equipped with the new dependencies.
+It is important to note that, unlike [AWS](https://docs.aws.amazon.com/mwaa/latest/userguide/connections-packages.html), LocalStack does not install any provider packages by default.
+Therefore, you must follow the above steps to install any required provider packages.
+
+## Connections
+
+When incorporating connections to other AWS services within your DAGs, it is crucial to specify either the internal Docker IP address of the LocalStack container or utilize `host.docker.internal`.
+LocalStack currently does not use the credentials and region from `aws_conn_id`.
+This information must be explicitly passed in operators, hooks, and sensors.
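+
+For example, a task can create its own boto3 client with an explicit endpoint instead of relying on `aws_conn_id`.
+The following sketch assumes the Airflow containers can reach LocalStack via `host.docker.internal`:
+
+```python
+import boto3
+
+
+def list_buckets():
+    # Explicitly point the client at LocalStack instead of relying on aws_conn_id
+    s3 = boto3.client(
+        "s3",
+        endpoint_url="http://host.docker.internal:4566",  # or the LocalStack container IP
+        region_name="us-east-1",
+        aws_access_key_id="test",
+        aws_secret_access_key="test",
+    )
+    print([bucket["Name"] for bucket in s3.list_buckets()["Buckets"]])
+```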
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing MWAA Environments.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **MWAA** under the **App Integration** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Environment**: Create a new MWAA environment by clicking on the **Create Environment** button and providing the required parameters.
+- **View Environment**: View details of an existing MWAA environment by clicking on the environment name.
+- **Edit Environment**: Edit an existing MWAA environment by clicking on the **Edit** button after clicking on the environment name.
+- **Delete Environment**: Select the environment name and click on the **Actions** button followed by **Remove Selected** button.
+
+## Current Limitations
+
+- LocalStack MWAA does not support [startup scripts](https://docs.aws.amazon.com/mwaa/latest/userguide/using-startup-script.html)
diff --git a/src/content/docs/aws/services/neptune.md b/src/content/docs/aws/services/neptune.md
new file mode 100644
index 00000000..58f84699
--- /dev/null
+++ b/src/content/docs/aws/services/neptune.md
@@ -0,0 +1,313 @@
+---
+title: "Neptune"
+linkTitle: "Neptune"
+description: >
+ Get started with Neptune on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Neptune is a fully managed, highly available, and scalable graph database service offered by AWS.
+It is designed for storing and querying highly connected data for applications that require complex relationship modeling, such as social networks, recommendation engines, and fraud detection.
+Neptune supports popular graph query languages like Gremlin and SPARQL, making it compatible with a wide range of graph applications and tools.
+
+LocalStack allows you to use the Neptune APIs in your local environment to support both property graph and RDF graph models.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_neptune" >}}), which provides information on the extent of Neptune's integration with LocalStack.
+
+The following versions of the Neptune engine are supported by LocalStack:
+
+| Engine Version | TinkerPop Version |
+|-----------------|---------------------|
+| `1.1.0.0` | `3.4.11` |
+| `1.1.1.0` | `3.5.2` |
+| `1.2.0.0` | `3.5.2` |
+| `1.2.0.1` | `3.5.2` |
+| `1.2.0.2` | `3.5.2` |
+| `1.2.1.0` | `3.6.2` |
+| `1.2.1.1` | `3.6.2` |
+| `1.3.0.0` | `3.6.2` |
+| `1.3.1.0` | `3.6.2` |
+| `1.3.2.0` | `3.7.2` |
+| `1.3.2.1` | `3.7.2` |
+| `1.3.4.0` | `3.7.2` |
+| `1.4.0.0` | `3.7.2` |
+| `1.4.1.0` | `3.7.2` |
+| `1.4.2.0` | `3.7.2` |
+| `1.4.3.0` | `3.7.2` |
+
+## Getting started
+
+This guide is designed for users new to Neptune and assumes basic knowledge of the AWS CLI and our `awslocal` wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate the following with AWS CLI & Python:
+
+- Creating a Neptune cluster.
+- Starting a connection to the Neptune cluster.
+- Running a Python script to create nodes and edges and query the graph database.
+
+### Create a Neptune cluster
+
+To create a Neptune cluster you can use the [`CreateDBCluster`](https://docs.aws.amazon.com/neptune/latest/userguide/api-clusters.html#CreateDBCluster) API.
+Run the following command to create a Neptune cluster:
+
+{{< command >}}
+$ awslocal neptune create-db-cluster \
+ --engine neptune \
+ --db-cluster-identifier my-neptune-db
+{{< / command >}}
+
+You should see the following output:
+
+```json
+{
+ "DBCluster": {
+ ...
+ "Endpoint": "localhost",
+ "Port": 4510, # may vary
+ "DBClusterArn": "arn:aws:rds:us-east-1:000000000000:cluster:my-neptune-db",
+ ...
+ }
+}
+```
+
+### Add an instance to the cluster
+
+To add an instance you can use the [`CreateDBInstance`](https://docs.aws.amazon.com/neptune/latest/userguide/api-instances.html#CreateDBInstance) API.
+Run the following command to create a Neptune instance:
+
+{{< command >}}
+$ awslocal neptune create-db-instance \
+ --db-cluster-identifier my-neptune-db \
+ --db-instance-identifier my-neptune-instance \
+ --engine neptune \
+ --db-instance-class db.t3.medium
+{{< / command >}}
+
+In LocalStack the `Endpoint` for the `DBCluster` and the `Endpoint.Address` of the `DBInstance` will be the same and can be used to connect to the graph database.
+
+### Start a connection
+
+To start a connection you have to use the `ws` protocol.
+
+Here is an example that uses Python and [`gremlinpython`](https://pypi.org/project/gremlinpython/) to connect to the database:
+
+```python
+from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
+from gremlin_python.process.anonymous_traversal import traversal
+from gremlin_python.process.traversal import Bindings, T, gt
+
+ENDPOINT = "localhost:4510" # TODO change to your endpoint
+DATABASE_URL = f"ws://{ENDPOINT}/gremlin"
+
+
+if __name__ == '__main__':
+ conn = DriverRemoteConnection(
+ DATABASE_URL,
+ "g",
+ pool_size=1,
+ )
+
+ g = traversal().withRemote(conn)
+
+ # add some nodes
+ v1 = g.addV("person").property(T.id, "1").property("name", "marko").property("age", 29).next()
+ v2 = g.addV("person").property(T.id, "2").property("name", "stephen").property("age", 33).next()
+ v3 = g.addV("person").property(T.id, "3").property("name", "mia").property("age", 30).next()
+
+ # add edges/relation
+ g.V(Bindings.of("id", v1)).addE("knows").to(v2).property("weight", 0.75).iterate()
+ g.V(Bindings.of("id", v1)).addE("knows").to(v3).property("weight", 0.85).iterate()
+
+ # retrieve all names
+ names = g.V().values("name").to_list()
+
+ # list all names of persons that know "marko"
+ marko_knows = g.V("1").outE("knows").inV().values("name").order().to_list()
+
+ # all persons that "marko" know that are older than 30
+ marko_knows_older_30 = g.V("1").out("knows").has("age", gt(30)).values("name").to_list()
+
+ # reset everything
+ g.V().drop().iterate()
+
+ result = {
+ "names": names,
+ "marko_knows": marko_knows,
+ "marko_knows_older_30": marko_knows_older_30,
+ }
+ print(result)
+```
+
+## IAM Enforcement for Gremlin Queries
+
+Amazon Neptune resources with IAM DB authentication enabled require all requests to use AWS Signature Version 4.
+
+When LocalStack starts with [IAM enforcement enabled]({{< ref "/user-guide/security-testing" >}}), the Neptune database checks user permissions before granting access.
+The following Gremlin query actions are available for database engine versions `1.3.2.0` and higher:
+
+```json
+{
+ "Action": [
+ "neptune-db:ReadDataViaQuery",
+ "neptune-db:WriteDataViaQuery",
+ "neptune-db:DeleteDataViaQuery"
+ ]
+}
+```
+
+Start LocalStack with `LOCALSTACK_ENFORCE_IAM=1` to create a Neptune cluster with IAM DB authentication enabled.
+
+{{< command >}}
+$ LOCALSTACK_ENFORCE_IAM=1 localstack start
+{{< /command >}}
+
+You can then create a cluster.
+
+{{< command >}}
+$ awslocal neptune create-db-cluster \
+ --engine neptune \
+ --db-cluster-identifier myneptune-db \
+ --enable-iam-database-authentication
+{{< /command >}}
+
+After the cluster is deployed, the Gremlin server will reject unsigned queries.
+
+{{< command >}}
+$ curl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V()" -v
+...
+- Request completely sent off
+< HTTP/1.1 403 Forbidden
+- no chunk, no close, no size.
+ Assume close to signal end
+...
+
+{{< /command >}}
+
+Use the Python package [awscurl](https://pypi.org/project/awscurl/) to make your first signed query.
+
+{{< command >}}
+$ awscurl "https://localhost.localstack.cloud:4510/gremlin?gremlin=g.V().count()" -H "Accept: application/json" | jq .
+
+{
+ "requestId": "729c3e7b-50b3-4df7-b0b6-d1123c4e81df",
+ "status": {
+ "message": "",
+ "code": 200,
+ "attributes": {
+ "@type": "g:Map",
+ "@value": []
+ }
+ },
+ "result": {
+ "data": {
+ "@type": "g:List",
+ "@value": [
+ {
+ "@type": "g:Int64",
+ "@value": 0
+ }
+ ]
+ },
+ "meta": {
+ "@type": "g:Map",
+ "@value": []
+ }
+ }
+}
+
+{{< /command >}}
+
+{{< callout "note" >}}
+If a Gremlin Server installation already exists in your LocalStack volume, you must delete it and restart LocalStack.
+You can find your LocalStack volume location on the [LocalStack filesystem documentation]({{< ref "/references/filesystem/#localstack-volume" >}}).
+{{< command >}}
+$ rm -rf <localstack-volume>/lib/tinkerpop
+{{< /command >}}
+{{< /callout >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing Neptune databases and clusters.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Neptune** under the **Database** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Cluster**: Create a new Neptune cluster by clicking on **Create Cluster** under the **Clusters** tab and providing the required parameters.
+- **List Clusters**: View a list of all Neptune clusters in your LocalStack environment by clicking on the **Clusters** tab.
+- **View Cluster Details**: Click on a cluster name to view detailed information about the cluster, including its status, endpoint, and other configuration details.
+- **Graph Browser**: Access the Neptune Graph Browser by clicking on the **Graph Browser** tab in the cluster details.
+ The Graph Browser allows you to interactively query and visualize the graph data stored in your Neptune cluster.
+- **Quick Actions**: Perform quick actions on the cluster, such as adding a new Node, modifying an existing one, or creating a new Edge between two nodes.
+ You can access the **Quick Actions** by clicking in the respective tab from the cluster details page.
+- **Create instance**: Create a new Neptune database by clicking on **Create Instance** under the **Instances** tab and providing the required parameters.
+- **List Instances**: View a list of all Neptune databases in your LocalStack environment by clicking on the **Instances** tab.
+- **View Instance Details**: Click on a database name to view detailed information about the database, including its status, endpoint, and other configuration details.
+- **Edit Instance**: Edit the configuration of a Neptune database by clicking on the **Edit Instance** button in the instance details.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Neptune in LocalStack for various use cases:
+
+- [Neptune Graph Database Demo](https://github.com/localstack/localstack-pro-samples/tree/master/neptune-graph-db)
+
+## Preview Features
+
+### Gremlin Transactions
+
+Gremlin transactions can be enabled by setting the environment variable `NEPTUNE_ENABLE_TRANSACTION=1`.
+Be aware that the engine version provided when creating your cluster will be ignored, and LocalStack will use Gremlin Server `3.7.2`.
+This feature is in beta, and any feedback is appreciated.
+
+#### Current Limitations
+
+- Fixed IDs
+  - If you create a vertex with an explicit id in a transaction and then delete it, recreating a vertex with the same id will fail.
+- Serializer considerations
+  - While it is possible to connect to the server with a lower version of the Gremlin Language Variants, there are breaking changes to the default `GraphBinarySerializersV1` serializer used by most languages.
+    One possible fix is to use the matching version for your language variant.
+    Otherwise, the `GraphSONSerializersV3d0` serializer also appears to work; see the example below.
+  - If you are using Neptune <= `1.2.0.2`, the Gryo message serializer is no longer supported.
+    This only affects users explicitly using that serializer.
+
+Example using `gremlinpython==3.6.2`:
+
+```python
+from gremlin_python.driver import serializer
+from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
+from gremlin_python.process.anonymous_traversal import traversal
+
+ENDPOINT = "localhost:4510" # TODO change to your endpoint
+DATABASE_URL = f"ws://{ENDPOINT}/gremlin"
+
+
+if __name__ == '__main__':
+ conn = DriverRemoteConnection(
+ DATABASE_URL,
+ "g",
+        # Note: the serializer is only required when using gremlin_python < 3.7.0
+ message_serializer=serializer.GraphSONSerializersV3d0(),
+ )
+
+ g = traversal().withRemote(conn)
+
+ tx = g.tx()
+ gtx = tx.begin()
+
+ try:
+ v1 = gtx.addV("person").property("name", "Mark").next()
+ v2 = gtx.addV("person").property("name", "Jane").next()
+ tx.commit()
+ except Exception:
+ tx.rollback()
+
+ nodes = g.V().valueMap().fold().next()
+ print(nodes)
+```
diff --git a/src/content/docs/aws/services/opensearch.md b/src/content/docs/aws/services/opensearch.md
new file mode 100644
index 00000000..9a94b81d
--- /dev/null
+++ b/src/content/docs/aws/services/opensearch.md
@@ -0,0 +1,393 @@
+---
+title: "OpenSearch Service"
+linkTitle: "OpenSearch Service"
+description: >
+ Get started with OpenSearch Service on LocalStack
+tags: ["Free"]
+---
+
+## Introduction
+
+OpenSearch Service is an open-source search and analytics engine, offering developers and organizations advanced search capabilities, robust data analysis, and insightful visualizations.
+OpenSearch Service also offers log analytics, real-time application monitoring, and clickstream analysis.
+
+LocalStack allows you to use the OpenSearch Service APIs in your local environment to create, manage, and operate the OpenSearch clusters.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_opensearch" >}}), which provides information on the extent of OpenSearch's integration with LocalStack.
+
+The following versions of OpenSearch Service are supported by LocalStack:
+
+- 1.0
+- 1.1
+- 1.2
+- 1.3
+- 2.3
+- 2.7
+- 2.9
+- 2.11 (**default**)
+
+OpenSearch is closely coupled with the [Elasticsearch Service](../elasticsearch).
+Clusters generated through the OpenSearch Service will be visible within the Elasticsearch Service interface, and vice versa.
+You can select an Elasticsearch version with the `--engine-version` parameter while creating an OpenSearch Service domain.
+
+## Getting started
+
+This guide is designed for users new to OpenSearch Service and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a new OpenSearch Service cluster and interact with it, using the AWS CLI.
+
+### Creating an OpenSearch cluster
+
+To create an OpenSearch Service cluster, you can use the [`CreateDomain`](https://docs.aws.amazon.com/opensearch-service/latest/APIReference/API_CreateDomain.html) API.
+An OpenSearch Service domain is synonymous with an OpenSearch cluster.
+Execute the following command to create a new OpenSearch domain:
+
+{{< command >}}
+$ awslocal opensearch create-domain --domain-name my-domain
+{{< / command >}}
+
+Each time you establish a cluster using a new version of OpenSearch, the corresponding OpenSearch binary must be downloaded, a process that might require some time to complete.
+You can follow the LocalStack logs to see the OpenSearch Service cluster being created and starting up in the background.
+You can use the [`DescribeDomain`](https://docs.aws.amazon.com/opensearch-service/latest/APIReference/API_DescribeDomain.html) API to check the status of the cluster:
+
+{{< command >}}
+$ awslocal opensearch describe-domain \
+ --domain-name my-domain | jq ".DomainStatus.Processing"
+{{< / command >}}
+
+The `Processing` attribute will be `false` once the cluster is up and running.
+Once the cluster is up, you can interact with the cluster.
+
+### Interact with the cluster
+
+You can now interact with the cluster at the cluster API endpoint for the domain, in this case `http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566`.
+
+Run the following command to query the cluster's root endpoint and confirm it is reachable:
+
+{{< command >}}
+$ curl http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566
+{{< / command >}}
+
+You can verify that the cluster is up and running by checking the cluster health:
+
+{{< command >}}
+$ curl -s http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq .
+{{< / command >}}
+
+The following output will be visible on your terminal:
+
+```json
+{
+ "cluster_name": "opensearch",
+ "status": "green",
+ "timed_out": false,
+ "number_of_nodes": 1,
+ "number_of_data_nodes": 1,
+ "discovered_master": true,
+ "active_primary_shards": 0,
+ "active_shards": 0,
+ "relocating_shards": 0,
+ "initializing_shards": 0,
+ "unassigned_shards": 0,
+ "delayed_unassigned_shards": 0,
+ "number_of_pending_tasks": 0,
+ "number_of_in_flight_fetch": 0,
+ "task_max_waiting_in_queue_millis": 0,
+ "active_shards_percent_as_number": 100
+}
+```
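+
+With the cluster healthy, you can exercise the standard OpenSearch REST API against the domain endpoint.
+The following sketch uses the Python [requests](https://pypi.org/project/requests/) package and assumes the domain endpoint shown above:
+
+```python
+import requests
+
+ENDPOINT = "http://my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566"
+
+# Index a document
+resp = requests.put(
+    f"{ENDPOINT}/books/_doc/1",
+    json={"title": "The Trial", "author": "Franz Kafka"},
+)
+print(resp.json()["result"])  # "created"
+
+# Refresh the index so the document is searchable immediately
+requests.post(f"{ENDPOINT}/books/_refresh")
+
+# Run a simple full-text search
+hits = requests.get(f"{ENDPOINT}/books/_search", params={"q": "kafka"}).json()
+print(hits["hits"]["total"])
+```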
+
+## Domain Endpoints
+
+There are two configurable strategies that govern how domain endpoints are created.
+The strategy can be configured via the `OPENSEARCH_ENDPOINT_STRATEGY` environment variable.
+
+| Value    | Format                                                                  | Description                                                                                                  |
+| -------- | ----------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
+| `domain` | `<domain-name>.<region>.<engine-type>.localhost.localstack.cloud:4566`  | The default strategy, employing the `localhost.localstack.cloud` domain for routing to localhost.             |
+| `path`   | `localhost:4566/<engine-type>/<region>/<domain-name>`                    | An alternative strategy, useful if resolving LocalStack's localhost domain poses difficulties.                |
+| `port`   | `localhost:<port>`                                                       | Directly exposes cluster(s) via ports from [the external service port range]({{< ref "external-ports" >}}).   |
+
+Irrespective of the originating service for the clusters, the domain of each cluster consistently aligns with its engine type, be it OpenSearch or Elasticsearch.
+Consequently, OpenSearch clusters incorporate `opensearch` within their domains (e.g., `my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566`), while Elasticsearch clusters feature `es` in their domains (e.g., `my-domain.us-east-1.es.localhost.localstack.cloud:4566`).
+
+## Custom Endpoints
+
+LocalStack allows you to define arbitrary endpoints for your clusters within the domain endpoint options.
+This functionality can be used to overwrite the behavior of the aforementioned endpoint strategies.
+Moreover, you can opt for custom domains, though it's important to incorporate the edge port (80/443, or the default 4566).
+
+Run the following command to create a new OpenSearch domain with a custom endpoint:
+
+{{< command >}}
+$ awslocal opensearch create-domain --domain-name my-domain \
+ --domain-endpoint-options '{ "CustomEndpoint": "http://localhost:4566/my-custom-endpoint", "CustomEndpointEnabled": true }'
+{{< / command >}}
+
+After the domain processing is complete, you can access the cluster using the custom endpoint:
+
+{{< command >}}
+$ curl http://localhost:4566/my-custom-endpoint/_cluster/health
+{{< / command >}}
+
+## Re-using a single cluster instance
+
+In certain scenarios, creating a distinct cluster instance for each domain might not align with your use-case.
+For example, if your focus is solely on testing API interactions rather than actual OpenSearch functionality, individual clusters might be excessive.
+In such situations, the option to set `OPENSEARCH_MULTI_CLUSTER=0` exists, allowing all domains to be funneled into a single cluster instance.
+
+However, it's important to be aware that it can introduce unexpected complications.
+This is particularly true when dealing with data persistence within OpenSearch or when working with clusters of varying versions.
+As a result, we advise caution when considering this approach and generally recommend against it.
+
+## Storage Layout
+
+OpenSearch will be organized in your state directory as follows:
+
+{{< command >}}
+$ tree -L 4 ./volume/state
+./volume/state
+├── opensearch
+│ └── arn:aws:es:us-east-1:000000000000:domain
+│ ├── my-cluster-1
+│ │ ├── backup
+│ │ ├── data
+│ │ └── tmp
+│ ├── my-cluster-2
+│ │ ├── backup
+│ │ ├── data
+│ │ └── tmp
+{{< /command >}}
+
+## Advanced Security Options
+
+Both OpenSearch and Elasticsearch services offer **Advanced Security Options**.
+Presently, OpenSearch domains are equipped with support for an internal user database.
+However, Elasticsearch domains are not currently covered, whether through the OpenSearch or the Elasticsearch service.
+IAM support is also not yet available.
+
+A secure OpenSearch domain can be spawned with this example CLI input.
+Save it in a file named `opensearch_domain.json`.
+
+```json
+{
+ "DomainName": "secure-domain",
+ "ClusterConfig": {
+ "InstanceType": "r5.large.search",
+ "InstanceCount": 1,
+ "DedicatedMasterEnabled": false,
+ "ZoneAwarenessEnabled": false,
+ "WarmEnabled": false
+ },
+ "EBSOptions": {
+ "EBSEnabled": true,
+ "VolumeType": "gp2",
+ "VolumeSize": 10
+ },
+ "EncryptionAtRestOptions": {
+ "Enabled": true
+ },
+ "NodeToNodeEncryptionOptions": {
+ "Enabled": true
+ },
+ "DomainEndpointOptions": {
+ "EnforceHTTPS": true
+ },
+ "AdvancedSecurityOptions": {
+ "Enabled": true,
+ "InternalUserDatabaseEnabled": true,
+ "MasterUserOptions": {
+ "MasterUserName": "admin",
+ "MasterUserPassword": "really-secure-passwordAa!1"
+ }
+ }
+}
+```
+
+To provision it, use the following `awslocal` CLI command, assuming the aforementioned CLI input has been stored in a file named `opensearch_domain.json`:
+
+{{< command >}}
+$ awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json
+{{< /command >}}
+
+Once the domain setup is complete (`Processing: false`), the cluster can only be accessed with the given master user credentials, via HTTP basic authentication:
+
+{{< command >}}
+$ curl -u 'admin:really-secure-passwordAa!1' http://secure-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health
+{{< /command >}}
+
+The following output will be visible on your terminal:
+
+```json
+{"cluster_name":"opensearch","status":"green",...}
+```
+
+It's important to note that any unauthorized requests will yield an HTTP response with a status code of 401 (`Unauthorized`).
+
+## OpenSearch Dashboards
+
+[OpenSearch Dashboards](https://opensearch.org/docs/latest/dashboards/) is a great tool to analyze and visualize the data in your OpenSearch domain.
+And you can directly use the official OpenSearch Dashboards Docker image to analyze data in your OpenSearch domain within LocalStack!
+
+When using OpenSearch Dashboards with LocalStack, you need to make sure to:
+- Enable the [advanced security options]({{< ref "#advanced-security-options" >}}) and set a username and a password.
+ This is required by OpenSearch Dashboards.
+- Ensure that the OpenSearch Dashboards Docker container uses the LocalStack DNS.
+ You can find more information on how to connect your Docker container to LocalStack in our [Network Troubleshooting guide]({{< ref "references/network-troubleshooting/endpoint-url/#from-your-container" >}}).
+
+First, you need to make sure to start LocalStack in a specific Docker network:
+{{< command >}}
+$ localstack start --network ls
+{{< /command >}}
+
+Now you can provision a new OpenSearch domain.
+Make sure to enable the [advanced security options]({{< ref "#advanced-security-options" >}}):
+
+{{< command >}}
+$ awslocal opensearch create-domain --cli-input-json file://./opensearch_domain.json
+{{< /command >}}
+
+Now you can start another container for OpenSearch Dashboards, which is configured such that:
+- The port for OpenSearch Dashboards is mapped (`5601`).
+- The container is in the same network as LocalStack.
+- The container uses the LocalStack DNS.
+- The OpenSearch Domain is set.
+- The OpenSearch credentials are set.
+- The version of OpenSearch Dashboards is the same as the OpenSearch domain.
+
+{{< command >}}
+$ docker inspect localstack-main | \
+ jq -r '.[0].NetworkSettings.Networks | to_entries | .[].value.IPAddress'
+# prints 172.22.0.2
+
+$ docker run --rm -p 5601:5601 \
+ --network ls \
+ --dns 172.22.0.2 \
+ -e "OPENSEARCH_HOSTS=http://secure-domain.us-east-1.opensearch.localhost.localstack.cloud:4566" \
+ -e "OPENSEARCH_USERNAME=admin" -e 'OPENSEARCH_PASSWORD=really-secure-passwordAa!1' \
+ opensearchproject/opensearch-dashboards:2.11.0
+{{< /command >}}
+
+Once the container is running, you can reach OpenSearch Dashboards at `http://localhost:5601` and you can log in with your OpenSearch domain credentials.
+
+## Custom OpenSearch backends
+
+LocalStack employs an asynchronous approach to download OpenSearch the first time you create an OpenSearch cluster.
+Consequently, you'll receive a prompt response from LocalStack initially, followed by the setup of your local OpenSearch cluster once the download and installation are completed.
+
+However, there might be scenarios where this behavior is not desirable.
+For instance, you may prefer to use an existing OpenSearch cluster that is already up and running.
+This approach can also prove beneficial when you require a cluster with a customized configuration that isn't supported by LocalStack.
+
+To tailor the OpenSearch backend according to your needs, you can initiate your own local OpenSearch cluster and then direct LocalStack to utilize it through the `OPENSEARCH_CUSTOM_BACKEND` environment variable.
+It's important to bear in mind that only a single backend configuration is possible, resulting in behavior akin to the approach of [re-using a single cluster instance](#re-using-a-single-cluster-instance).
+
+Here is a sample `docker-compose.yaml` file that contains a single-node OpenSearch cluster and a basic LocalStack setup.
+
+```yaml
+services:
+ opensearch:
+ container_name: opensearch
+ image: opensearchproject/opensearch:1.1.0
+ environment:
+ - node.name=opensearch
+ - cluster.name=opensearch-docker-cluster
+ - discovery.type=single-node
+ - bootstrap.memory_lock=true
+ - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
+ - "DISABLE_SECURITY_PLUGIN=true"
+ ports:
+ - "9200:9200"
+ ulimits:
+ memlock:
+ soft: -1
+ hard: -1
+ volumes:
+ - data01:/usr/share/opensearch/data
+
+ localstack:
+ container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
+ image: localstack/localstack
+ ports:
+ - "127.0.0.1:4566:4566" # LocalStack Gateway
+ - "127.0.0.1:4510-4559:4510-4559" # external services port range
+ depends_on:
+ - opensearch
+ environment:
+ - OPENSEARCH_CUSTOM_BACKEND=http://opensearch:9200
+ - DEBUG=${DEBUG:-0}
+ volumes:
+ - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
+ - "/var/run/docker.sock:/var/run/docker.sock"
+
+volumes:
+ data01:
+ driver: local
+```
+
+You can start the Docker Compose environment using the following command:
+
+{{< command >}}
+$ docker-compose up -d
+{{< /command >}}
+
+You can now create an OpenSearch cluster using the `awslocal` CLI:
+
+{{< command >}}
+$ awslocal opensearch create-domain --domain-name my-domain
+{{< /command >}}
+
+If the `Processing` status shows as `true`, the cluster isn't fully operational yet.
+You can use the `describe-domain` command to retrieve the current status:
+
+{{< command >}}
+$ awslocal opensearch describe-domain --domain-name my-domain
+{{< /command >}}
+
+You can now verify cluster health and set up indices:
+
+{{< command >}}
+$ curl my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/_cluster/health | jq
+{{< /command >}}
+
+The output will provide insights into the cluster's health and version information.
+
+Finally, create an example index using the following command:
+
+{{< command >}}
+$ curl -X PUT my-domain.us-east-1.opensearch.localhost.localstack.cloud:4566/my-index
+{{< /command >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing OpenSearch domains.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **OpenSearch Service** under the **Analytics** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Domain**: Create a new OpenSearch domain by clicking on the **Create Domain** button and providing the required details.
+- **View Domain Details**: Click on a domain to view its details, such as the domain name, status, endpoint, and configuration.
+- **Edit Domain**: Edit the configuration of a domain by clicking on the domain name and then clicking on the **Edit Domain** button.
+- **Delete Domain**: Delete a domain by selecting the domain name and clicking on the **Actions** dropdown menu, then selecting **Remove Selected**.
+
+## Current Limitations
+
+Internally, LocalStack makes use of the [OpenSearch Python client 2.x](https://github.com/opensearch-project/opensearch-py).
+The functionalities marked as deprecated in OpenSearch 1.x and subsequently removed in OpenSearch 2.x may not operate reliably when interacting with OpenSearch 1.x clusters through LocalStack.
+You can refer to the [compatibility documentation](https://github.com/opensearch-project/opensearch-py/blob/main/COMPATIBILITY.md) provided by the [OpenSearch Python client repository](https://github.com/opensearch-project/opensearch-py).
+
+AWS typically populates the `Endpoint` attribute of the cluster status only after the cluster is fully operational.
+In contrast, LocalStack provides the endpoint information immediately but retains `Processing = "true"` until the cluster initialization is complete.
+
+The `CustomEndpointOptions` in LocalStack offers the flexibility to utilize arbitrary endpoint URLs, a feature that diverges from the constraints imposed by AWS.
+
+## Troubleshooting
+
+If you encounter difficulties resolving subdomains while employing the `OPENSEARCH_ENDPOINT_STRATEGY=domain` (the default setting), it's advisable to investigate whether your DNS configuration might be obstructing rebind queries.
+For further insights on addressing this issue, refer to the section on [DNS rebind protection]({{< ref "dns-server#dns-rebind-protection" >}}).
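+
+As a quick check, you can resolve the domain name and verify that it returns a local address (a sketch; the domain name is an example):
+
+{{< command >}}
+$ dig +short my-domain.us-east-1.opensearch.localhost.localstack.cloud
+127.0.0.1
+{{< /command >}}
+
+If the query returns no answer, your resolver is likely stripping responses that point to private IP ranges.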
diff --git a/src/content/docs/aws/services/organizations.md b/src/content/docs/aws/services/organizations.md
new file mode 100644
index 00000000..73ab5ee2
--- /dev/null
+++ b/src/content/docs/aws/services/organizations.md
@@ -0,0 +1,86 @@
+---
+title: "Organizations"
+linkTitle: "Organizations"
+tags: ["Ultimate"]
+description: Get started with AWS Organizations on LocalStack
+---
+
+Amazon Web Services Organizations is an account management service that allows you to consolidate multiple AWS accounts into a single organization.
+It lets you manage these accounts centrally and consolidate billing.
+With Organizations, you can also attach different policies to your organizational units (OUs) or individual accounts in your organization.
+
+Organizations is available in LocalStack Pro, and the supported APIs are listed on our [configuration page]({{< ref "configuration" >}}).
+
+## Getting started
+
+In this getting started guide, you'll learn how to create your local AWS Organization and configure it with member accounts.
+This guide is intended for users who wish to get more acquainted with Organizations, and assumes you have basic knowledge of the AWS CLI (and our `awslocal` wrapper script).
+To get started, start your LocalStack instance using your preferred method:
+
+1. Create a new local AWS Organization with the feature set flag set to `ALL`:
+ {{< command >}}
+ $ awslocal organizations create-organization --feature-set ALL
+ {{< /command >}}
+
+2. You can now run the `describe-organization` command to see the details of your organization:
+ {{< command >}}
+ $ awslocal organizations describe-organization
+ {{< /command >}}
+
+3. You can now create an AWS account that would be a member of your organization:
+ {{< command >}}
+ $ awslocal organizations create-account \
+ --email example@example.com \
+ --account-name "Test Account"
+ {{< /command >}}
+ Since LocalStack essentially mocks AWS, the account creation is instantaneous.
+ You can now run the `list-accounts` command to see the details of your organization:
+ {{< command >}}
+ $ awslocal organizations list-accounts
+ {{< /command >}}
+
+4. You can also remove a member account from your organization:
+ {{< command >}}
+ $ awslocal organizations remove-account-from-organization --account-id <account-id>
+ {{< /command >}}
+
+5. To close an account in your organization, you can run the `close-account` command:
+ {{< command >}}
+ $ awslocal organizations close-account --account-id 000000000000
+ {{< /command >}}
+
+6. You can use organizational units (OUs) to group accounts together to administer as a single unit.
+ To create an OU, you can run:
+ {{< command >}}
+ $ awslocal organizations list-roots
+ $ awslocal organizations list-children \
+ --parent-id <root-id> \
+ --child-type ORGANIZATIONAL_UNIT
+ $ awslocal organizations create-organizational-unit \
+ --parent-id <root-id> \
+ --name New-Child-OU
+ {{< /command >}}
+
+7. Before you can create and attach a policy to your organization, you must enable a policy type.
+ To enable a policy type, you can run:
+ {{< command >}}
+ $ awslocal organizations enable-policy-type \
+ --root-id <root-id> \
+ --policy-type BACKUP_POLICY
+ {{< /command >}}
+ To disable a policy type, you can run:
+ {{< command >}}
+ $ awslocal organizations disable-policy-type \
+ --root-id <root-id> \
+ --policy-type BACKUP_POLICY
+ {{< /command >}}
+
+8. To view the policies that are attached to your organization, you can run:
+ {{< command >}}
+ $ awslocal organizations list-policies --filter SERVICE_CONTROL_POLICY
+ {{< /command >}}
+
+9. To delete an organization, you can run:
+ {{< command >}}
+ $ awslocal organizations delete-organization
+ {{< /command >}}
diff --git a/src/content/docs/aws/services/pca.md b/src/content/docs/aws/services/pca.md
new file mode 100644
index 00000000..ec778e85
--- /dev/null
+++ b/src/content/docs/aws/services/pca.md
@@ -0,0 +1,247 @@
+---
+title: "Private Certificate Authority (ACM PCA)"
+linkTitle: "Private Certificate Authority (ACM PCA)"
+description: Get started with Private Certificate Authority (ACM PCA) on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+AWS Private Certificate Authority (ACM PCA) is a managed private Certificate Authority (CA) service that manages the lifecycle of your private certificates.
+ACM PCA extends ACM's certificate management capabilities to private certificates, enabling you to manage public and private certificates centrally.
+
+LocalStack allows you to use the ACM PCA APIs to create, list, and delete private certificates.
+You can create, describe, tag, and list tags for a CA using ACM PCA.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_acm-pca" >}}), which provides information on the extent of ACM PCA's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users who are new to ACM PCA and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+We will follow the procedure to create and install a certificate for a single-level hierarchy CA hosted by ACM PCA.
+
+### Create a CA
+
+Start by creating a new Certificate Authority with ACM PCA using the [`CreateCertificateAuthority`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_CreateCertificateAuthority.html) API.
+This command sets up a new CA with specified configurations for key algorithm, signing algorithm, and subject information.
+
+{{< command >}}
+$ awslocal acm-pca create-certificate-authority \
+ --certificate-authority-configuration '{
+ "KeyAlgorithm":"RSA_2048",
+ "SigningAlgorithm":"SHA256WITHRSA",
+ "Subject":{
+ "Country":"CH",
+ "Organization":"LocalStack",
+ "OrganizationalUnit":"Engineering",
+ "CommonName":"test.localstack.cloud"
+ }
+ }' \
+ --certificate-authority-type "ROOT"
+
+{
+ "CertificateAuthorityArn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff"
+}
+
+{{< /command >}}
+
+Note the `CertificateAuthorityArn` from the output as it will be needed for subsequent commands.
+
+To retrieve the detailed information about the created Certificate Authority, use the [`DescribeCertificateAuthority`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_DescribeCertificateAuthority.html) API.
+This command returns the detailed information about the CA, including the CA's ARN, status, and configuration.
+
+{{< command >}}
+$ awslocal acm-pca describe-certificate-authority \
+ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff
+
+{
+ "CertificateAuthority": {
+ "Arn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff",
+ "OwnerAccount": "000000000000",
+ "CreatedAt": "2024-08-08T10:45:58.065504+05:30",
+ "Type": "ROOT",
+ "Status": "PENDING_CERTIFICATE",
+ "CertificateAuthorityConfiguration": {
+ "KeyAlgorithm": "RSA_2048",
+ "SigningAlgorithm": "SHA256WITHRSA",
+ "Subject": {
+ "Country": "CH",
+ "Organization": "LocalStack",
+ "OrganizationalUnit": "Engineering",
+ "CommonName": "test.localstack.cloud"
+ }
+ },
+ "RevocationConfiguration": {
+ "CrlConfiguration": {
+ "Enabled": false
+ }
+ },
+ "KeyStorageSecurityStandard": "FIPS_140_2_LEVEL_3_OR_HIGHER",
+ "UsageMode": "SHORT_LIVED_CERTIFICATE"
+ }
+}
+
+{{< /command >}}
+
+Note the `PENDING_CERTIFICATE` status.
+In the following steps, we will create and attach a certificate for this CA.
+
+### Issue CA Certificate
+
+Use the [`GetCertificateAuthorityCsr`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificateAuthorityCsr.html) operation to obtain the Certificate Signing Request (CSR) for the CA.
+
+{{< command >}}
+$ awslocal acm-pca get-certificate-authority-csr \
+ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
+ --output text | tee ca.csr
+{{< /command >}}
+
+Next, issue the certificate for the CA using this CSR.
+
+{{< command >}}
+$ awslocal acm-pca issue-certificate \
+ --csr fileb://ca.csr \
+ --signing-algorithm SHA256WITHRSA \
+ --template-arn arn:aws:acm-pca:::template/RootCACertificate/V1 \
+ --validity Value=10,Type=YEARS \
+ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff
+
+{
+ "CertificateArn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/17ef7bbf3cc6471ba3ef0707119b8392"
+}
+
+{{< /command >}}
+
+The CA certificate is now created and its ARN is indicated by the `CertificateArn` parameter.
+
+### Import CA Certificate
+
+Finally, we retrieve the signed certificate with [`GetCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificate.html) and import it using [`ImportCertificateAuthorityCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_ImportCertificateAuthorityCertificate.html).
+
+{{< command >}}
+$ awslocal acm-pca get-certificate \
+ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
+ --certificate-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/17ef7bbf3cc6471ba3ef0707119b8392 \
+ --output text | tee cert.pem
+{{< /command >}}
+
+{{< command >}}
+$ awslocal acm-pca import-certificate-authority-certificate \
+ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
+ --certificate fileb://cert.pem
+{{< /command >}}
+
+The CA is now ready for use.
+You can verify this by checking its status:
+
+{{< command >}}
+$ awslocal acm-pca describe-certificate-authority \
+ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
+ --query CertificateAuthority.Status \
+ --output text
+
+ACTIVE
+
+{{< /command >}}
+
+The CA certificate can be retrieved at a later point using [`GetCertificateAuthorityCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_GetCertificateAuthorityCertificate.html).
+In general, this operation returns both the certificate and the certificate chain.
+In this case, however, only the certificate is returned, because we used a single-level CA hierarchy and the certificate chain is null.
+For production setups, you should use a [multi-level CA hierarchy](https://docs.aws.amazon.com/privateca/latest/userguide/ca-hierarchy.html) for best security.
+
+### Issue End-entity Certificates
+
+With the private CA set up, you can now issue end-entity certificates.
+
+Using [OpenSSL](https://openssl-library.org/), create a CSR and the private key:
+
+{{< command >}}
+$ openssl req -out local-csr.pem -new -newkey rsa:2048 -nodes -keyout local-pkey.pem
+{{< /command >}}
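+
+The command above prompts for the subject fields interactively.
+To script it, you can optionally pass the subject directly with `-subj` (the values here match the sample output below):
+
+{{< command >}}
+$ openssl req -out local-csr.pem -new -newkey rsa:2048 -nodes -keyout local-pkey.pem \
+    -subj "/C=IN/ST=GA/O=EvilCorp/OU=Engineering/CN=evilcorp.com"
+{{< /command >}}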
+
+You may inspect the CSR using the following command.
+It should resemble the illustrated output.
+
+{{< command >}}
+$ openssl req -in local-csr.pem -text -noout
+
+Certificate Request:
+ Data:
+ Version: 1 (0x0)
+ Subject: C = IN, ST = GA, O = EvilCorp, OU = Engineering, CN = evilcorp.com
+ Subject Public Key Info:
+ Public Key Algorithm: rsaEncryption
+ Public-Key: (2048 bit)
+ Modulus:
+ 00:a3:1d:5d:50:00:5c:4e:5d:79:a8:9a:d4:10:f4:
+ ...
+ Exponent: 65537 (0x10001)
+ Attributes:
+ (none)
+ Requested Extensions:
+ Signature Algorithm: sha256WithRSAEncryption
+ Signature Value:
+ 3e:23:12:26:45:af:39:35:5d:d7:b4:40:fb:1a:08:c7:16:c3:
+ ...
+
+{{< /command >}}
+
+Next, using [`IssueCertificate`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_IssueCertificate.html) you can generate the end-entity certificate.
+Note that no [certificate template](https://docs.aws.amazon.com/privateca/latest/userguide/UsingTemplates.html) is specified, which causes an end-entity certificate to be issued by default.
+
+{{< command >}}
+$ awslocal acm-pca issue-certificate \
+ --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
+ --csr fileb://local-csr.pem \
+ --signing-algorithm "SHA256WITHRSA" \
+ --validity Value=365,Type="DAYS"
+
+{
+ "CertificateArn": "arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/079d0a13daf943f6802d365dd83658c7"
+}
+
+{{< /command >}}
+
+### Verify Certificates
+
+Using OpenSSL, you can verify that the end-entity certificate was indeed signed by the CA.
+In the following command, `local-cert.pem` refers to the end-entity certificate and `cert.pem` refers to the CA certificate.
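+
+If you haven't saved the end-entity certificate locally yet, you can fetch it first with the same `GetCertificate` operation used earlier, substituting the certificate ARN returned by `IssueCertificate` (the `--query` filter extracts just the leaf certificate):
+
+{{< command >}}
+$ awslocal acm-pca get-certificate \
+    --certificate-authority-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff \
+    --certificate-arn arn:aws:acm-pca:eu-central-1:000000000000:certificate-authority/0b20353f-ce7a-4de4-9b82-e06903a893ff/certificate/079d0a13daf943f6802d365dd83658c7 \
+    --query Certificate --output text | tee local-cert.pem
+{{< /command >}}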
+
+{{< command >}}
+$ openssl verify -CAfile cert.pem local-cert.pem
+local-cert.pem: OK
+{{< /command >}}
+
+### Tag the Certificate Authority
+
+Tagging resources in AWS helps in managing and identifying them.
+Use the [`TagCertificateAuthority`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_TagCertificateAuthority.html) API to tag the created Certificate Authority.
+This command adds the specified tags to the specified CA.
+
+{{< command >}}
+$ awslocal acm-pca tag-certificate-authority \
+ --certificate-authority-arn arn:aws:acm-pca:us-east-1:000000000000:certificate-authority/f38ee966-bc23-40f8-8143-e981aee73600 \
+ --tags Key=Admin,Value=Alice
+{{< /command >}}
+
+After tagging your Certificate Authority, you may want to view these tags.
+You can use the [`ListTags`](https://docs.aws.amazon.com/privateca/latest/APIReference/API_ListTags.html) API to list all the tags associated with the specified CA.
+
+{{< command >}}
+$ awslocal acm-pca list-tags \
+ --certificate-authority-arn arn:aws:acm-pca:us-east-1:000000000000:certificate-authority/f38ee966-bc23-40f8-8143-e981aee73600 \
+ --max-results 10
+
+{
+ "Tags": [
+ {
+ "Key": "Name",
+ "Value": "MyPCA"
+ },
+ {
+ "Key": "Admin",
+ "Value": "Alice"
+ }
+ ]
+}
+
+{{< /command >}}
diff --git a/src/content/docs/aws/services/pinpoint.md b/src/content/docs/aws/services/pinpoint.md
new file mode 100644
index 00000000..d11d67f1
--- /dev/null
+++ b/src/content/docs/aws/services/pinpoint.md
@@ -0,0 +1,181 @@
+---
+title: "Pinpoint"
+linkTitle: "Pinpoint"
+description: Get started with Pinpoint on LocalStack
+tags: ["Ultimate"]
+persistence: supported
+---
+
+{{< callout "warning" >}}
+Amazon Pinpoint will be [retired on 30 October 2026](https://docs.aws.amazon.com/pinpoint/latest/userguide/migrate.html).
+It will be removed from LocalStack soon after this date.
+{{< /callout >}}
+
+## Introduction
+
+Pinpoint is a customer engagement service to facilitate communication across multiple channels, including email, SMS, and push notifications.
+Pinpoint allows developers to create and manage customer segments based on various attributes, such as user behavior and demographics, while integrating with other AWS services to send targeted messages to customers.
+
+LocalStack allows you to mock the Pinpoint APIs in your local environment.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_pinpoint" >}}), which provides information on the extent of Pinpoint's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Pinpoint and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a Pinpoint application, retrieve all applications, and list tags for the resource.
+
+### Create an application
+
+Create a Pinpoint application using the [`CreateApp`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal pinpoint create-app \
+ --create-application-request Name=ExampleCorp,tags={"Stack"="Test"}
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "ApplicationResponse": {
+ "Arn": "arn:aws:mobiletargeting:us-east-1:000000000000:apps/4487a55ac6fb4a2699a1b90727c978e7",
+ "Id": "4487a55ac6fb4a2699a1b90727c978e7",
+ "Name": "ExampleCorp",
+ "CreationDate": 1706609789.906863
+ }
+}
+```
+
+### List applications
+
+You can list all applications using the [`GetApps`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal pinpoint get-apps
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "ApplicationsResponse": {
+ "Item": [
+ {
+ "Arn": "arn:aws:mobiletargeting:us-east-1:000000000000:apps/4487a55ac6fb4a2699a1b90727c978e7",
+ "Id": "4487a55ac6fb4a2699a1b90727c978e7",
+ "Name": "ExampleCorp",
+ "CreationDate": 1706609789.906863
+ }
+ ]
+ }
+}
+```
+
+### List tags for the application
+
+You can list all tags for the application using the [`GetApp`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal pinpoint list-tags-for-resource \
+ --resource-arn arn:aws:mobiletargeting:us-east-1:000000000000:apps/4487a55ac6fb4a2699a1b90727c978e7
+{{< /command >}}
+
+Replace the `resource-arn` with the ARN of the application you created earlier.
+The following output would be retrieved:
+
+```bash
+{
+ "TagsModel": {
+ "tags": {
+ "Stack": "Test"
+ }
+ }
+}
+```
+
+### OTP verification
+
+The operations [`SendOTPMessage`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id-otp.html#SendOTPMessage) and [`VerifyOTPMessage`](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id-verify-otp.html#VerifyOTPMessage) are used for one-time password (OTP) verification.
+
+On production AWS, `SendOTPMessage` sends an SMS text message with the OTP code.
+The OTP can then be verified against the reference ID using `VerifyOTPMessage`.
+
+LocalStack, however, cannot send real SMS text messages.
+Instead, it provides alternative ways to retrieve the actual OTP code, as illustrated below.
+
+Begin by making an OTP request:
+
+{{< command >}}
+$ awslocal pinpoint send-otp-message \
+ --application-id fff5a801e01643c18a13a763e22a8fbf \
+ --send-otp-message-request-parameters '{
+ "BrandName": "LocalStack Community",
+ "Channel": "SMS",
+ "DestinationIdentity": "+1224364860",
+ "ReferenceId": "liftoffcampaign",
+ "OriginationIdentity": "+1123581321",
+ "CodeLength": 6,
+ "AllowedAttempts": 3,
+ "ValidityPeriod": 2
+ }'
+
+{
+ "MessageResponse": {
+ "ApplicationId": "fff5a801e01643c18a13a763e22a8fbf"
+ }
+}
+
+{{< /command >}}
+
+You can use the debug endpoint `/_aws/pinpoint/<application-id>/<reference-id>` to retrieve the OTP message details:
+
+{{< command >}}
+$ curl http://localhost:4566/_aws/pinpoint/fff5a801e01643c18a13a763e22a8fbf/liftoffcampaign | jq .
+{
+ "AllowedAttempts": 3,
+ "BrandName": "LocalStack Community",
+ "CodeLength": 6,
+ "DestinationIdentity": "+1224364860",
+ "OriginationIdentity": "+1123581321",
+ "ReferenceId": "liftoffcampaign",
+ "ValidityPeriod": 2,
+ "Attempts": 0,
+ "ApplicationId": "fff5a801e01643c18a13a763e22a8fbf",
+ "CreatedTimestamp": "2024-10-17T05:38:24.070Z",
+ "Code": "655745"
+}
+{{< /command >}}
+
+The OTP code is also printed in an `INFO` level message in the LocalStack log output:
+
+```text
+2024-10-17T11:08:24.044 INFO : OTP for application ID fff5a801e01643c18a13a763e22a8fbf reference ID liftoffcampaign: 655745
+```
+
+Finally, the OTP code can be verified using:
+
+{{< command >}}
+$ awslocal pinpoint verify-otp-message \
+ --application-id fff5a801e01643c18a13a763e22a8fbf \
+ --verify-otp-message-request-parameters '{
+ "ReferenceId": "liftoffcampaign",
+ "DestinationIdentity": "+1224364860",
+ "Otp": "655745"
+ }'
+
+{
+ "VerificationResponse": {
+ "Valid": true
+ }
+}
+
+{{< /command >}}
+
+When validating OTP codes, LocalStack checks for the number of allowed attempts and the validity period.
+Unlike AWS, there is no lower limit on the validity period.
diff --git a/src/content/docs/aws/services/pipes.md b/src/content/docs/aws/services/pipes.md
new file mode 100644
index 00000000..100bb41a
--- /dev/null
+++ b/src/content/docs/aws/services/pipes.md
@@ -0,0 +1,210 @@
+---
+title: "EventBridge Pipes"
+linkTitle: "EventBridge Pipes"
+description: Get started with EventBridge Pipes on LocalStack
+tags: ["Free"]
+persistence: supported with limitations
+---
+
+## Introduction
+
+EventBridge Pipes allows users to create point-to-point integrations between event producers and consumers, with transform, filter, and enrichment steps.
+Pipes are particularly useful for scenarios involving real-time data processing, application integration, and automated workflows, while simplifying the process of routing events between AWS services.
+Pipes offer a point-to-point connection from one source to one target (one-to-one).
+In contrast, EventBridge Event Bus offers a one-to-many integration where an event router delivers one event to zero or more destinations.
+
+LocalStack allows you to use the Pipes APIs in your local environment to create Pipes with SQS queues and Kinesis streams as source and target.
+You can also filter events using EventBridge event patterns and enrich events using Lambda.
+
+The supported APIs are available on our [API coverage page]({{< ref "coverage_pipes" >}}), which provides information on the extent of Pipe's integration with LocalStack.
+
+{{< callout >}}
+The implementation of EventBridge Pipes is currently in **preview** and under active development.
+If you would like support for more APIs or want to report a bug, please create an issue on [GitHub](https://github.com/localstack/localstack/issues/new/choose).
+{{< /callout >}}
+
+## Getting started
+
+This guide is designed for users new to EventBridge Pipes and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a Pipe with SQS queues as source and target, and send events to the source queue which will be routed to the target queue.
+
+### Create an SQS queue
+
+Create two SQS queues that will be used as source and target for the Pipe.
+Run the following command to create a queue using the [`CreateQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html) API:
+
+{{< command >}}
+$ awslocal sqs create-queue --queue-name source-queue
+$ awslocal sqs create-queue --queue-name target-queue
+{{< /command >}}
+
+You can fetch their queue ARNs using the [`GetQueueAttributes`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html) API:
+
+{{< command >}}
+$ SOURCE_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/source-queue --attribute-names QueueArn --output text)
+$ TARGET_QUEUE_ARN=$(awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/target-queue --attribute-names QueueArn --output text)
+{{< /command >}}
+
+### Create a Pipe
+
+You can now create a Pipe, using the [`CreatePipe`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_CreatePipe.html) API.
+Run the following command, specifying the source and target queue ARNs created earlier:
+
+{{< command >}}
+$ awslocal pipes create-pipe --name sample-pipe \
+ --source $SOURCE_QUEUE_ARN \
+ --target $TARGET_QUEUE_ARN \
+ --role-arn arn:aws:iam::000000000000:role/pipes-role
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Arn": "arn:aws:pipes:us-east-1:000000000000:pipe/sample-pipe",
+ "CreationTime": "2024-01-26T11:55:27.069088+05:30",
+ "CurrentState": "CREATING",
+ "DesiredState": "RUNNING",
+ "LastModifiedTime": "2024-01-26T11:55:27.069088+05:30",
+ "Name": "sample-pipe"
+}
+```
+
+### Describe the Pipe
+
+You can use the [`DescribePipe`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_DescribePipe.html) API to get information about the Pipe:
+
+{{< command >}}
+$ awslocal pipes describe-pipe --name sample-pipe
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Arn": "arn:aws:pipes:us-east-1:000000000000:pipe/sample-pipe",
+ "CreationTime": "2024-01-26T11:55:27.069088+05:30",
+ "CurrentState": "RUNNING",
+ "DesiredState": "RUNNING",
+ "EnrichmentParameters": {},
+ "LastModifiedTime": "2024-01-26T11:55:27.069088+05:30",
+ "Name": "sample-pipe",
+ "RoleArn": "arn:aws:iam::000000000000:role/pipe-role",
+ "Source": "arn:aws:sqs:us-east-1:000000000000:source-queue",
+ "SourceParameters": {
+ "SqsQueueParameters": {
+ "BatchSize": 10
+ }
+ },
+ "StateReason": "USER_INITIATED",
+ "Tags": {},
+ "Target": "arn:aws:sqs:us-east-1:000000000000:target-queue",
+ "TargetParameters": {}
+}
+```
+
+### Send events to the source queue
+
+You can now send events to the source queue, which will be routed to the target queue.
+Run the following command to send an event to the source queue:
+
+{{< command >}}
+$ awslocal sqs send-message \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/source-queue \
+ --message-body "message-1"
+{{< /command >}}
+
+### Receive events from the target queue
+
+You can fetch the message from the target queue using the [`ReceiveMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API:
+
+{{< command >}}
+$ awslocal sqs receive-message \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/target-queue
+{{< /command >}}
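+
+Pipes can also filter events before they reach the target.
+As a sketch, the following (hypothetical) pipe only forwards SQS messages whose body matches an EventBridge event pattern:
+
+{{< command >}}
+$ awslocal pipes create-pipe --name filtered-pipe \
+    --source $SOURCE_QUEUE_ARN \
+    --target $TARGET_QUEUE_ARN \
+    --role-arn arn:aws:iam::000000000000:role/pipes-role \
+    --source-parameters '{"FilterCriteria": {"Filters": [{"Pattern": "{\"body\": [\"message-1\"]}"}]}}'
+{{< /command >}}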
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing EventBridge Pipes.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **EventBridge Pipes** under the **App Integration** section.
+
+
+
+
+
+The Resource Browser for EventBridge Pipes in LocalStack allows you to perform the following actions:
+
+1. **Create a Pipe**: Click on the **Create Pipe** button to set up a new pipe with a source and target service, filter criteria, and more.
+2. **View Pipe Details**: Click on the pipe name to view detailed information, including source, target, batch size, state, and more.
+3. **Delete a Pipe**: Select a pipe and click on the **Actions** dropdown menu, followed by **Remove Selected**, to delete the pipe.
+
+## Supported sources
+
+LocalStack supports the following [sources](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-source.html) for Pipes:
+
+* Amazon DynamoDB stream
+* Amazon Kinesis stream
+* Amazon SQS queue
+
+Please create a feature request on [GitHub](https://github.com/localstack/localstack/issues/new/choose) if you miss support for
+Amazon MQ broker,
+Amazon MSK stream,
+or Apache Kafka stream.
+
+## Supported enrichments
+
+LocalStack supports the following [enrichments](https://docs.aws.amazon.com/eventbridge/latest/userguide/pipes-enrichment.html) for Pipes:
+
+* Lambda function
+
+Please create a feature request on [GitHub](https://github.com/localstack/localstack/issues/new/choose) if you miss support for
+API destination,
+Amazon API Gateway,
+or Step Functions state machine.
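+
+As a sketch, an enrichment is attached when creating the pipe, for example with a (hypothetical) Lambda function:
+
+{{< command >}}
+$ awslocal pipes create-pipe --name enriched-pipe \
+    --source $SOURCE_QUEUE_ARN \
+    --target $TARGET_QUEUE_ARN \
+    --enrichment arn:aws:lambda:us-east-1:000000000000:function:enrich \
+    --role-arn arn:aws:iam::000000000000:role/pipes-role
+{{< /command >}}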
+
+## Supported targets
+
+LocalStack supports the following [targets](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-event-target.html) for Pipes:
+
+* EventBridge bus
+* Kinesis stream
+* Lambda function (SYNC or ASYNC)
+* Amazon SNS topic
+* Amazon SQS queue
+* Step Functions state machine
+ * Standard workflows (ASYNC)
+
+Please create a feature request on [GitHub](https://github.com/localstack/localstack/issues/new/choose) if you miss support for
+API destination,
+API Gateway,
+Batch job queue,
+CloudWatch log group,
+ECS task,
+Firehose delivery stream,
+Inspector assessment template,
+Redshift cluster data API queries,
+SageMaker Pipeline,
+Step Functions state machine: Express workflows (SYNC or ASYNC),
+or Timestream for LiveAnalytics table.
+
+## Supported log destinations
+
+LocalStack supports the following [log destinations](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-pipes-logs.html) for detailed Pipes logging:
+
+* CloudWatch Logs
+
+Please create a feature request on [GitHub](https://github.com/localstack/localstack/issues/new/choose) if you miss support for
+Firehose stream logs,
+or Amazon S3 logs.
+
+## Current Limitations
+
+The EventBridge Pipes implementation in LocalStack is currently in preview and has the following limitations:
+
+* Lack of input transformers.
+* Lack of concurrency support (i.e., ParallelizationFactor), resulting in slower processing in high-throughput scenarios.
+* Lack of lifecycle management for pipe states (i.e., missing tests for state transitions).
+* Lack of re-sharding support when polling from Kinesis and DynamoDB streams.
+* Batch handling behavior may have parity issues (e.g., batch flushing rules by size, length, time, etc. are not implemented).
diff --git a/src/content/docs/aws/services/qldb.md b/src/content/docs/aws/services/qldb.md
new file mode 100644
index 00000000..9401e79a
--- /dev/null
+++ b/src/content/docs/aws/services/qldb.md
@@ -0,0 +1,362 @@
+---
+title: "Quantum Ledger Database (QLDB)"
+linkTitle: "Quantum Ledger Database (QLDB)"
+tags: ["Ultimate"]
+description: Get started with Quantum Ledger Database (QLDB) on LocalStack
+---
+
+{{< callout "warning" >}}
+Amazon QLDB will be [retired on 31 July 2025](https://docs.aws.amazon.com/qldb/latest/developerguide/what-is.html).
+It will be removed from LocalStack soon after this date.
+{{< /callout >}}
+
+## Introduction
+
+Amazon Quantum Ledger Database is a fully managed ledger database service offered by Amazon Web Services.
+It is designed to provide transparent, immutable, and cryptographically verifiable transaction log functionality to applications.
+QLDB is particularly useful for applications that need a secure and scalable way to maintain a complete and verifiable history of data changes over time.
+
+LocalStack allows you to use the QLDB APIs in your local environment to create and manage ledgers.
+The supported APIs are available on the [API coverage page]({{< ref "/references/coverage/coverage_qldb/index.md" >}} "QLDB service coverage page"), which provides information on the extent of QLDB's integration with LocalStack.
+
+## Getting started
+
+These instructions follow the [getting started guide](https://docs.aws.amazon.com/qldb/latest/developerguide/getting-started.html) from the official documentation, but instead of using the console to perform all the operations, the LocalStack AWS CLI (management API only) and the QLDB shell (data API only) will be used.
+
+### Installing the QLDB shell
+
+QLDB supports PartiQL, a SQL-compatible query language, which allows you to query and manipulate data stored in QLDB.
+You can write PartiQL statements to perform complex queries, aggregations, and transformations on your data.
+Amazon QLDB provides a command line shell for interaction with the transactional data API.
+With the QLDB shell, you can run PartiQL statements on ledger data.
+
+For instructions on how to install and use the latest version of the QLDB shell, see the [README.md](https://github.com/awslabs/amazon-qldb-shell/blob/main/README.md#installation) file on GitHub.
+QLDB provides pre-built binary files for Linux, macOS, and Windows in the [Releases](https://github.com/awslabs/amazon-qldb-shell/releases) section of the repository.
+
+### Creating a new ledger
+
+QLDB provides ledger databases, which are centralized, immutable, and cryptographically verifiable journals of transactions.
+
+{{< command >}}
+$ awslocal qldb create-ledger --name vehicle-registration --permissions-mode ALLOW_ALL
+{{< / command >}}
+
+```bash
+{
+ "Name": "vehicle-registration",
+ "Arn": "arn:aws:qldb:us-east-1:000000000000:ledger/vehicle-registration",
+ "State": "ACTIVE",
+ "CreationDateTime": 1696782718.0,
+ "PermissionsMode": "ALLOW_ALL",
+ "DeletionProtection": true
+}
+```
+
+{{< callout >}}
+The following permissions modes are available in AWS:
+
+- **Allow all** – A legacy permissions mode that enables access control with API-level granularity for ledgers.
+  This mode disregards any table-level or command-level IAM permissions policies that you create for the ledger.
+- **Standard** (recommended) – A permissions mode that enables access control with finer granularity for ledgers, tables, and PartiQL commands.
+  Using this permissions mode is recommended to maximize the security of your ledger data.
+  By default, this mode denies all requests to run any PartiQL commands on any tables in this ledger.
+  To allow PartiQL commands, you must create IAM permissions policies for specific table resources and PartiQL actions, in addition to the `SendCommand` API permission for the ledger.
+{{< /callout >}}
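+
+For example, the ledger above could instead have been created with the recommended **Standard** mode:
+
+{{< command >}}
+$ awslocal qldb create-ledger --name vehicle-registration --permissions-mode STANDARD
+{{< / command >}}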
+
+The following command can be used directly to write PartiQL statements against a QLDB ledger:
+
+{{< command >}}
+$ qldb --qldb-session-endpoint http://localhost:4566 --ledger vehicle-registration
+{{< / command >}}
+
+From here, you can continue to create tables, then populate and query them.
+
+### Creating tables and sample data
+
+PartiQL is a query language designed for processing structured data, allowing you to perform various data manipulation tasks using familiar SQL-like syntax.
+
+{{< command >}}
+qldb> CREATE TABLE VehicleRegistration
+{{< / command >}}
+
+```bash
+{
+ information_schema: {
+ user_tables: [
+ {
+ name: "VehicleRegistration",
+ status: "ACTIVE",
+ indexes: [
+ ]
+ }
+ ]
+ },
+ Vehicle: [
+ ],
+ VehicleRegistration: [
+ ]
+}
+1 document in bag (read-ios: 0, server-time: 0ms, total-time: 31ms)
+```
+
+The `VehicleRegistration` table was created.
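+Optionally, following the AWS getting-started guide, you can also create an index on a field you plan to query:
+
+{{< command >}}
+qldb> CREATE INDEX ON VehicleRegistration (VIN)
+{{< / command >}}
+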
+Now it's time to add some items:
+
+{{< command >}}
+qldb> INSERT INTO VehicleRegistration VALUE
+{
+ 'VIN' : 'KM8SRDHF6EU074761',
+ 'RegNum' : 1722,
+ 'State' : 'WA',
+ 'City' : 'Kent',
+ 'PendingPenaltyTicketAmount' : 130.75,
+ 'Owners' : {
+ 'PrimaryOwner' : { 'PersonId': '294jJ3YUoH1IEEm8GSabOs' },
+ 'SecondaryOwners' : [
+ { 'PersonId' : '1nmeDdLo3AhGswBtyM1eYh' },
+ { 'PersonId': 'IN7MvYtUjkp1GMZu0F6CG9' }
+ ]
+ },
+ 'ValidFromDate' : `2017-09-14T`,
+ 'ValidToDate' : `2020-06-25T`
+}
+{{< / command >}}
+
+```bash
+{
+  documentId: "3TYR9BamzyqHWBjYOfHegE"
+}
+1 document in bag (read-ios: 0, server-time: 0ms, total-time: 894ms)
+```
+
+### Querying a table
+
+The table can be interrogated based on the inserted registration number:
+
+{{< command >}}
+qldb> SELECT * FROM VehicleRegistration WHERE RegNum=1722
+{{< / command >}}
+
+```bash
+{
+ 'VIN' : 'KM8SRDHF6EU074761',
+ 'RegNum' : 1722,
+ 'State' : 'WA',
+ 'City' : 'Kent',
+ 'PendingPenaltyTicketAmount' : 130.75,
+ 'Owners' : {
+ 'PrimaryOwner' : { 'PersonId': '294jJ3YUoH1IEEm8GSabOs' },
+ 'SecondaryOwners' : [
+ { 'PersonId' : '1nmeDdLo3AhGswBtyM1eYh' },
+ { 'PersonId': 'IN7MvYtUjkp1GMZu0F6CG9' }
+ ]
+ },
+ 'ValidFromDate' : `2017-09-14T`,
+ 'ValidToDate' : `2020-06-25T`
+}
+1 document in bag (read-ios: 0, server-time: 0ms, total-time: 477ms)
+```
+
+### Modifying documents in a ledger
+
+Additional changes can be made to documents in the `vehicle-registration` ledger with more complex
+queries.
+Supposed the vehicle is sold and changes owners, this information needs to be updated with a new
+person ID.
+
+{{< command >}}
+qldb> UPDATE VehicleRegistration AS r SET r.Owners.PrimaryOwner.PersonId = '112233445566NO' WHERE r.VIN = 'KM8SRDHF6EU074761'
+{{< / command >}}
+
+The command will return the updated document ID.
+
+```bash
+{
+ documentId: "3TYR9BamzyqHWBjYOfHegE"
+}
+1 document in bag (read-ios: 0, server-time: 0ms, total-time: 62ms)
+```
+
+The next step is to check the update made to the `PersonId` field of the `PrimaryOwner`:
+
+{{< command >}}
+qldb> SELECT r.Owners FROM VehicleRegistration AS r WHERE r.VIN = 'KM8SRDHF6EU074761'
+{{< / command >}}
+
+```bash
+{
+ Owners: {
+ PrimaryOwner: {
+ PersonId: "112233445566NO"
+ },
+ SecondaryOwners: [
+ {
+ PersonId: "1nmeDdLo3AhGswBtyM1eYh"
+ },
+ {
+ PersonId: "IN7MvYtUjkp1GMZu0F6CG9"
+ }
+ ]
+ }
+}
+1 document in bag (read-ios: 0, server-time: 0ms, total-time: 518ms)
+```
+
+### Viewing the revision history of a document
+
+After modifying the data in a document, you can query the revision history of the entity.
+You can see all revisions of a document that you inserted, updated, and deleted by querying the built-in history function.
+First, the unique `id` of the document must be found.
+
+{{< command >}}
+qldb> SELECT r_id FROM VehicleRegistration AS r BY r_id WHERE r.VIN = 'KM8SRDHF6EU074761'
+{{< / command >}}
+
+```bash
+{
+  r_id: "3TYR9BamzyqHWBjYOfHegE"
+}
+
+1 document in bag (read-ios: 0, server-time: 0ms, total-time: 541ms)
+```
+
+Then, the `id` is used to query the history function.
+
+{{< command >}}
+qldb> SELECT h.data.VIN, h.data.City, h.data.Owners FROM history(VehicleRegistration) AS h WHERE h.metadata.id = '3TYR9BamzyqHWBjYOfHegE'
+{{< / command >}}
+
+```bash
+{
+ VIN: "KM8SRDHF6EU074761",
+ City: "Kent",
+ Owners: {
+ PrimaryOwner: {
+ PersonId: "294jJ3YUoH1IEEm8GSabOs"
+ },
+ SecondaryOwners: [
+ {
+ PersonId: "1nmeDdLo3AhGswBtyM1eYh"
+ },
+ {
+ PersonId: "IN7MvYtUjkp1GMZu0F6CG9"
+ }
+ ]
+ }
+},
+{
+ VIN: "KM8SRDHF6EU074761",
+ City: "Kent",
+ Owners: {
+ PrimaryOwner: {
+ PersonId: "112233445566NO"
+ },
+ SecondaryOwners: [
+ {
+ PersonId: "1nmeDdLo3AhGswBtyM1eYh"
+ },
+ {
+ PersonId: "IN7MvYtUjkp1GMZu0F6CG9"
+ }
+ ]
+ }
+}
+2 documents in bag (read-ios: 0, server-time: 0ms, total-time: 544ms)
+```
+
+### Cleaning up resources
+
+Unused ledgers can be deleted.
+You'll notice that directly running the following command will lead to an error message.
+
+{{< command >}}
+$ awslocal qldb delete-ledger --name vehicle-registration
+{{< / command >}}
+
+```bash
+An error occurred (ResourcePreconditionNotMetException) when calling the DeleteLedger operation: Preventing deletion of ledger vehicle-registration with DeletionProtection enabled
+```
+
+This can be adjusted using the `update-ledger` command in the AWS CLI to remove the deletion protection of the ledger:
+
+{{< command >}}
+$ awslocal qldb update-ledger --name vehicle-registration --no-deletion-protection
+{{< / command >}}
+
+```bash
+{
+ "Name": "vehicle-registration",
+ "Arn": "arn:aws:qldb:us-east-1:000000000000:ledger/vehicle-registration",
+ "State": "ACTIVE",
+ "CreationDateTime": 1697038061.0,
+ "DeletionProtection": false
+}
+```
+
+Now the `delete-ledger` command can be repeated without errors.
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing QLDB ledgers.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **QLDB** under the **Database** section.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Ledger**: Create a new QLDB ledger by clicking on the **Create Ledger** button and providing the ledger name and permissions mode.
+- **View Ledger**: View the details of a specific ledger by clicking on the ledger name.
+- **Edit Ledger**: Edit the details of a specific ledger by clicking on the ledger name and then clicking on the **Edit Ledger** button.
+- **Delete Ledger**: Delete a specific ledger by selecting the ledger name and clicking on the **Actions** dropdown menu, then selecting **Remove Selected**.
+
+## Examples
+
+Interacting with Amazon QLDB (Quantum Ledger Database) is typically done using the language-specific software development kits (SDKs) provided by AWS.
+These SDKs make it easier for developers to interact with QLDB and perform operations such as managing ledgers, executing PartiQL queries, and processing the results.
+When interacting with QLDB, it's common to use a combination of SDKs and PartiQL queries to achieve specific data processing tasks, ensuring flexibility and ease of development.
+
+A simple QLDB example running on LocalStack is provided in [this GitHub repository](https://github.com/localstack/localstack-pro-samples/tree/master/qldb-ledger-queries).
+The sample consists of two simple scenarios:
+
+1. Create and list tables via the `pyqldb` Python library.
+2. Insert data into two tables and perform a `JOIN` query that combines data from the two tables.
diff --git a/src/content/docs/aws/services/ram.md b/src/content/docs/aws/services/ram.md
new file mode 100644
index 00000000..4d1db6dd
--- /dev/null
+++ b/src/content/docs/aws/services/ram.md
@@ -0,0 +1,44 @@
+---
+title: "Resource Access Manager (RAM)"
+linkTitle: "Resource Access Manager (RAM)"
+description: Get started with RAM on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Resource Access Manager (RAM) lets you share resources across AWS accounts, within or across organizations.
+On AWS, RAM is an abstraction on top of AWS Identity and Access Management (IAM) that manages resource-based policies for supported resource types.
+The API operations supported by LocalStack can be found on the [API coverage page]({{< ref "coverage_ram" >}}).
+
+## Getting started
+
+Start the LocalStack container using your preferred method.
+This section will illustrate how to create permissions and resource shares using the AWS CLI.
+
+### Create a permission
+
+{{< command >}}
+$ awslocal ram create-permission \
+ --name example \
+ --resource-type appsync:apis \
+ --policy-template '{"Effect": "Allow", "Action": "appsync:SourceGraphQL"}'
+{{< /command >}}
+
+### Create a resource share
+
+{{< command >}}
+$ awslocal ram create-resource-share \
+ --name example-resource-share \
+ --principals arn:aws:organizations::000000000000:organization/o-truopwybwi \
+ --resource-arns arn:aws:appsync:eu-central-1:000000000000:apis/wcgmjril5wuyvhmpildatuaat3
+{{< /command >}}
+
+## Current Limitations
+
+LocalStack RAM supports emulated sharing for EC2 Subnets only.
+Only specified account principals are granted access to the shared subnets, and associated VPC and route tables.
+Furthermore, only the sharing aspect is implemented at this time.
+No IAM policies are created or attached, and no permission enforcement takes place.
+
+For all other resource types, the functionality is limited to mocking.
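+
+For example, sharing a subnet with another account could look like this (a sketch; the account ID and subnet ARN are placeholders):
+
+{{< command >}}
+$ awslocal ram create-resource-share \
+    --name subnet-share \
+    --principals 111111111111 \
+    --resource-arns arn:aws:ec2:us-east-1:000000000000:subnet/subnet-0123456789abcdef0
+{{< /command >}}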
diff --git a/src/content/docs/aws/services/rds.md b/src/content/docs/aws/services/rds.md
new file mode 100644
index 00000000..75ee825d
--- /dev/null
+++ b/src/content/docs/aws/services/rds.md
@@ -0,0 +1,392 @@
+---
+title: "Relational Database Service (RDS)"
+linkTitle: "Relational Database Service (RDS)"
+description: Get started with Relational Database Service (RDS) on LocalStack
+tags: ["Base"]
+persistence: supported with limitations
+---
+
+## Introduction
+
+Relational Database Service (RDS) is a managed database service provided by Amazon Web Services (AWS) that allows users to set up, operate, and scale relational databases in the cloud.
+RDS allows you to deploy and manage various relational database engines like MySQL, PostgreSQL, MariaDB, and Microsoft SQL Server.
+RDS handles routine database tasks such as provisioning, patching, backup, recovery, and scaling.
+
+LocalStack allows you to use the RDS APIs in your local environment to create and manage RDS clusters and instances for testing & integration purposes.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_rds" >}}), which provides information on the extent of RDS's integration with LocalStack.
+
+{{< callout >}}
+We’ve introduced a new native RDS provider in LocalStack and made it the default.
+This replaces Moto-based CRUD operations with a more reliable setup.
+
+RDS state created in version 4.3 or earlier using Cloud Pods or standard persistence will not be compatible with the new provider introduced in version 4.4.
+Recreating the RDS state is recommended for compatibility.
+{{< /callout >}}
+
+## Getting started
+
+This guide is designed for users new to RDS and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate the following with the AWS CLI:
+
+1. Creating an RDS cluster.
+2. Generating a `SecretsManager` secret containing the database password.
+3. Executing a basic `SELECT 123` query through the RDS Data API.
+
+LocalStack's RDS implementation also supports the [RDS Data API](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html), which allows executing data queries against RDS clusters over a JSON/REST interface.
+
+### Create an RDS cluster
+
+To create an RDS cluster, you can use the [`CreateDBCluster`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) API.
+The following command creates a new cluster with the name `db1` and the engine `aurora-postgresql`.
+Instances for the cluster must be added manually.
+
+{{< command >}}
+$ awslocal rds create-db-cluster \
+ --db-cluster-identifier db1 \
+ --engine aurora-postgresql \
+ --database-name test \
+ --master-username myuser \
+ --master-user-password mypassword
+{{< / command >}}
+
+You should see the following output:
+
+```json
+{
+ "DBCluster": {
+ ...
+ "Endpoint": "localhost",
+ "Port": 4510, # may vary
+ "DBClusterArn": "arn:aws:rds:us-east-1:000000000000:cluster:db1",
+ ...
+ }
+}
+```
+
+To add an instance you can run the following command:
+
+{{< command >}}
+$ awslocal rds create-db-instance \
+ --db-instance-identifier db1-instance \
+ --db-cluster-identifier db1 \
+ --engine aurora-postgresql \
+ --db-instance-class db.t3.large
+{{< / command >}}
+
+### Create a SecretsManager secret
+
+To create a `SecretsManager` secret, you can use the [`CreateSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_CreateSecret.html) API.
+Before creating the secret, you need to create a JSON file containing the credentials for the database.
+The following command creates a file called `mycreds.json` with the credentials for the database.
+
+{{< command >}}
+$ cat << 'EOF' > mycreds.json
+{
+ "engine": "aurora-postgresql",
+ "username": "myuser",
+ "password": "mypassword",
+ "host": "localhost",
+ "dbname": "test",
+ "port": "4510"
+}
+EOF
+{{< / command >}}
+
+Run the following command to create the secret:
+
+{{< command >}}
+$ awslocal secretsmanager create-secret \
+ --name dbpass \
+ --secret-string file://mycreds.json
+{{< / command >}}
+
+You should see the following output:
+
+```json
+{
+ "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:dbpass-cfnAX",
+ "Name": "dbpass",
+ "VersionId": "fffa1f4a-2381-4a2b-a977-4869d59a16c0"
+}
+```
+
+### Execute a query
+
+To execute a query, you can use the [`ExecuteStatement`](https://docs.aws.amazon.com/rdsdataservice/latest/APIReference/API_ExecuteStatement.html) API.
+
+Make sure to replace the `secret-arn` with the ARN from the secret you just created in the previous step, and check that the `resource-arn` matches the `cluster-arn` that you have created before.
+
+The following command executes a query against the database.
+The query returns the value `123`.
+
+{{< command >}}
+$ awslocal rds-data execute-statement \
+ --database test \
+ --resource-arn arn:aws:rds:us-east-1:000000000000:cluster:db1 \
+ --secret-arn arn:aws:secretsmanager:us-east-1:000000000000:secret:dbpass-cfnAX \
+ --include-result-metadata --sql 'SELECT 123'
+{{< / command >}}
+
+You should see the following output:
+
+```json
+{
+ "columnMetadata": [
+ {
+ "arrayBaseColumnType": 0,
+ "isAutoIncrement": false,
+ "isCaseSensitive": false,
+ "isCurrency": false,
+ "isSigned": true,
+ "label": "?column?",
+ "name": "?column?",
+ "nullable": 0,
+ "precision": 10,
+ "scale": 0,
+ "schemaName": "",
+ "tableName": "",
+ "type": 4,
+ "typeName": "int4"
+ }
+ ],
+ "numberOfRecordsUpdated": 0,
+ "records": [
+ [
+ {
+ "longValue": 123
+ }
+ ]
+ ]
+}
+```
+
+Alternative clients, such as `psql`, can also be employed to interact with the database.
+You can retrieve the hostname and port of your created instance either from the preceding output or by using the [`DescribeDbInstances`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) API.
+
+{{< command >}}
+$ psql -d test -U myuser -p 4513 -h localhost -W
+{{< / command >}}
+
+## Supported DB engines
+
+Presently, you can spin up PostgreSQL, MariaDB, MySQL, and MSSQL (SQL Server) databases directly on your local machine, using LocalStack's RDS implementation.
+However, certain configurations of RDS clusters and instances currently offer only CRUD functionality.
+For instance, the `storage-encrypted` flag is returned as configured, but active support for actual storage encryption is not yet available.
+
+### PostgreSQL Engine
+
+When you establish an RDS DB cluster or instance using the `postgres`/`aurora-postgresql` DB engine along with a specified `EngineVersion`, LocalStack will dynamically install and configure the corresponding PostgreSQL version as required.
+Presently, you have the option to choose major versions ranging from 11 to 17.
+If you select a major version beyond this range, the system will automatically default to version 17.
+
+It's important to note that selecting a specific minor version is not possible.
+The latest release of the selected major version will be installed within the Docker environment.
+If you wish to prevent the installation of customized versions, setting the `RDS_PG_CUSTOM_VERSIONS` environment variable to `0` will enforce the use of the default PostgreSQL version 17.
+
+{{< callout >}}
+While the [`DescribeDbCluster`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) and [`DescribeDbInstances`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) APIs will still reflect the initially defined `engine-version`, the actual installed PostgreSQL engine might differ.
+This can have implications, particularly when employing a Terraform configuration, where unexpected changes should be avoided.
+{{< /callout >}}
+
+Instances and clusters with the PostgreSQL engine have the capability to both create and restore snapshots.
+
+### MariaDB Engine
+
+MariaDB will be set up as an operating system package within LocalStack.
+However, currently, the option to choose a particular version is not available.
+As of now, snapshots are not supported for MariaDB.
+
+### MySQL Engine
+
+A MySQL community server will be launched in a new Docker container upon requesting the MySQL engine.
+
+The `engine-version` will serve as the tag for the Docker image, allowing you to freely select the desired MySQL version from those available on the [official MySQL Docker Hub](https://hub.docker.com/_/mysql).
+If you have a specific image in mind, you can also use the environment variable `MYSQL_IMAGE=`.
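+
+For example, the following sketch pins the MySQL container to a specific image before starting LocalStack; the image tag is illustrative:
+
+{{< command >}}
+$ MYSQL_IMAGE=mysql:8.0 localstack start
+{{< / command >}}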
+
+{{< callout >}}
+The `arm64` MySQL images are limited to newer versions.
+For more information about availability, check the [MySQL Docker Hub repository](https://hub.docker.com/_/mysql).
+{{< /callout >}}
+
+It's essential to understand that the `MasterUserPassword` you define for the database cluster/instance will be used as the `MYSQL_ROOT_PASSWORD` environment variable for the `root` user within the MySQL container.
+The user specified in `MasterUserName` will use the same password and will have complete access to the database.
+As of now, snapshots are not supported for MySQL.
+
+### Microsoft SQL Server Engine
+
+To utilize MSSQL databases, it's necessary to expressly agree to the terms of the [Microsoft SQL Server End-User Licensing Agreement (EULA)](https://hub.docker.com/_/microsoft-mssql-server) by configuring `MSSQL_ACCEPT_EULA=Y` within the LocalStack container environment.
+The `arm64` architecture is not currently officially supported for MSSQL.
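+
+A minimal sketch of starting LocalStack with the EULA accepted and creating an MSSQL instance (the identifier and engine variant are illustrative):
+
+{{< command >}}
+$ MSSQL_ACCEPT_EULA=Y localstack start
+$ awslocal rds create-db-instance \
+    --db-instance-identifier my-mssql \
+    --engine sqlserver-se \
+    --db-instance-class db.t3.small
+{{< / command >}}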
+
+For the MSSQL engine, the database server is initiated in a fresh Docker container using the `latest` image.
+As of now, snapshots are not supported for MSSQL.
+
+## Default Usernames and Passwords
+
+The following details concern default usernames, passwords, and database names for local RDS clusters created by LocalStack:
+
+- The default values for `master-username` and `db-name` are both **test**.
+ For the `master-user-password`, the default is **test**, except for MSSQL databases, which employ **Test123!** as the default master password.
+- When setting up a new RDS instance, you have the flexibility to utilize any `master-username`, with the exception of **postgres**.
+ The system will automatically generate the user.
+- It's important to remember that the username **postgres** has special significance, preventing the creation of a new RDS instance under this particular name.
+- For clarity, please avoid using the `db-name` **postgres**, as it is already allocated for use by LocalStack.
+
+## IAM Authentication Support
+
+IAM authentication tokens can be employed to establish connections with RDS.
+As of now, this functionality is supported for PostgreSQL within LocalStack.
+However, the authentication token itself is not validated at this stage.
+Consequently, any database user assigned the `rds_iam` role will be able to connect to the database with any generated token.
+
+In this example, you will be able to verify the IAM authentication process for RDS Postgres:
+
+1. Establish a database instance and obtain the corresponding host and port information.
+2. Connect to the database using the master username and password.
+ Subsequently, generate a new user and assign the `rds_iam` role as follows:
+   - `CREATE USER <username> WITH LOGIN`
+   - `GRANT rds_iam TO <username>`
+3. Create a token for the `<username>` user using the `generate-db-auth-token` command.
+4. Connect to the database utilizing the user you generated and the token obtained in the previous step as the password.
+
+### Create a database instance
+
+The following command creates a new database instance with the name `mydb` and the engine `postgres`.
+The database will be created with a single instance, which will be used as the master instance.
+
+{{< command >}}
+$ MASTER_USER=hello
+$ MASTER_PW='MyPassw0rd!'
+$ DB_NAME=test
+$ awslocal rds create-db-instance \
+ --master-username $MASTER_USER \
+ --master-user-password $MASTER_PW \
+ --db-instance-identifier mydb \
+ --engine postgres \
+ --db-name $DB_NAME \
+ --enable-iam-database-authentication \
+ --db-instance-class db.t3.small
+{{< / command >}}
+
+### Connect to the database
+
+You can retrieve the hostname and port of your created instance either from the preceding output or by using the [`DescribeDbInstances`](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBInstances.html) API.
+Run the following command to retrieve the host and port of the instance:
+
+{{< command >}}
+$ PORT=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Port")
+$ HOST=$(awslocal rds describe-db-instances --db-instance-identifier mydb | jq -r ".DBInstances[0].Endpoint.Address")
+{{< / command >}}
+
+Next, you can connect to the database using the master username and password:
+
+{{< command >}}
+$ PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'CREATE USER myiam WITH LOGIN'
+$ PGPASSWORD=$MASTER_PW psql -d $DB_NAME -U $MASTER_USER -p $PORT -h $HOST -w -c 'GRANT rds_iam TO myiam'
+{{< / command >}}
+
+### Create a token
+
+You can create a token for the user you generated using the [`generate-db-auth-token`](https://docs.aws.amazon.com/cli/latest/reference/rds/generate-db-auth-token.html) command:
+
+{{< command >}}
+$ TOKEN=$(awslocal rds generate-db-auth-token --username myiam --hostname $HOST --port $PORT)
+{{< / command >}}
+
+You can now connect to the database utilizing the user you generated and the token obtained in the previous step as the password:
+
+{{< command >}}
+$ PGPASSWORD=$TOKEN psql -d $DB_NAME -U myiam -w -p $PORT -h $HOST
+{{< / command >}}
+
+## Global Database Support
+
+LocalStack extends support for [Aurora Global Database](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database.html) with certain limitations:
+
+- Creating a global database will result in the generation of a single local database.
+ All clusters and instances associated with the global database will share a common endpoint.
+- It's important to note that clusters removed from a global database lose their ability to function as standalone clusters, differing from their intended behavior on AWS.
+- At present, the capability for persistence within global databases is not available.
+
+## RDS PostgreSQL Extensions for AWS Service Integrations
+
+LocalStack supports certain extensions and functions that are provided in RDS to interact with other AWS services.
+At the moment, primarily extension functions for the PostgreSQL engine are supported.
+
+### `aws_lambda` extension
+
+The [`aws_lambda` extension](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL-Lambda.html) can be used in local RDS PostgreSQL databases to interact with the Lambda API.
+
+For example, in the SQL code snippet below, we load the `aws_lambda` extension, generate a full ARN from a function name, and finally invoke the Lambda function directly from a SQL query:
+
+```sql
+CREATE EXTENSION IF NOT EXISTS aws_lambda CASCADE;
+-- create a Lambda function ARN
+SELECT aws_commons.create_lambda_function_arn('my_function');
+-- invoke a Lambda function directly from a SQL query
+SELECT aws_lambda.invoke('my_function', '{"body": "Hello!"}'::json);
+```
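+
+For the invocation above to succeed, a Lambda function named `my_function` has to exist in your LocalStack instance; the following sketch creates one (the handler, runtime, and zip file are illustrative):
+
+{{< command >}}
+$ awslocal lambda create-function \
+    --function-name my_function \
+    --runtime python3.11 \
+    --handler index.handler \
+    --zip-file fileb://function.zip \
+    --role arn:aws:iam::000000000000:role/lambda-role
+{{< / command >}}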
+
+### `aws_s3` extension
+
+The [`aws_s3` extension](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/postgresql-s3-export.html) can be used in local RDS PostgreSQL databases to interact with the S3 API.
+
+In the SQL code snippet below, we load the `aws_s3` extension and then use the `table_import_from_s3(..)` function to populate the table `table1` with data from a CSV file `test.csv` stored in a local S3 bucket `mybucket1`:
+
+```sql
+CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
+SELECT aws_s3.table_import_from_s3(
+ 'table1', 'c1, c2, c3', '(format csv)',
+ aws_commons.create_s3_uri('mybucket1', 'test.csv', 'us-east-1')
+);
+```
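+
+For the import to succeed, the bucket and object referenced above have to exist locally, for example:
+
+{{< command >}}
+$ awslocal s3 mb s3://mybucket1
+$ awslocal s3 cp test.csv s3://mybucket1/test.csv
+{{< / command >}}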
+
+Analogously, we can use the `query_export_to_s3(..)` extension function to export data from a table `table2` into a CSV file `test.csv` in local S3 bucket `mybucket2`:
+
+```sql
+CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
+SELECT aws_s3.query_export_to_s3(
+ 'SELECT * FROM table2',
+ aws_commons.create_s3_uri('mybucket2', 'test.csv', 'us-east-1'),
+ options := 'FORMAT csv'
+);
+```
+
+### Additional extensions
+
+In addition to the `aws_*` extensions described in the sections above, LocalStack RDS supports the following PostgreSQL extensions (some of which are bundled with the [`PostGIS` extension](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.PostGIS.html)):
+
+- `address_standardizer_data_us`
+- `fuzzystrmatch`
+- `postgis`
+- `postgis_raster`
+- `postgis_tiger_geocoder`
+- `postgis_topology`
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing RDS instances and clusters.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **RDS** under the **Database** section.
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Instance**: Create a new RDS instance by specifying the instance name, engine, DBInstance Class & Identifier, and other parameters.
+- **Create Cluster**: Create a new RDS cluster by specifying the database name, engine, DBCluster Identifier, and other parameters.
+- **View Instance & Cluster**: View an existing RDS instance or cluster by clicking the instance/cluster name.
+- **Edit Instance & Cluster**: Edit an existing RDS instance or cluster by clicking the instance/cluster name and clicking the **EDIT INSTANCE** or **EDIT CLUSTER** button.
+- **Remove Instance & Cluster**: Remove an existing RDS instance or cluster by clicking the instance/cluster name, then clicking the **ACTIONS** button followed by **Remove Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use RDS in LocalStack for various use cases:
+
+- [AppSync GraphQL APIs for DynamoDB and RDS Aurora PostgreSQL](https://github.com/localstack/appsync-graphql-api-sample)
+- [Amazon RDS initialization using CDK, Lambda, ECR, and Secrets Manager](https://github.com/localstack/amazon-rds-init-cdk)
+- [Serverless RDS Proxy with API Gateway, Lambda, and Aurora RDS](https://github.com/localstack-samples/sample-serverless-rds-proxy-demo/)
+- [Running queries against an RDS database](https://github.com/localstack/localstack-pro-samples/tree/master/rds-db-queries)
+- [Running cloud integration tests against LocalStack's RDS with Testcontainers](https://github.com/localstack/localstack-pro-samples/tree/master/testcontainers-java-sample)
diff --git a/src/content/docs/aws/services/redshift.md b/src/content/docs/aws/services/redshift.md
new file mode 100644
index 00000000..507b9e8c
--- /dev/null
+++ b/src/content/docs/aws/services/redshift.md
@@ -0,0 +1,162 @@
+---
+title: "Redshift"
+linkTitle: "Redshift"
+description: Get started with Redshift on LocalStack
+tags: ["Free", "Ultimate"]
+---
+
+## Introduction
+
+Redshift is a cloud-based data warehouse solution which allows end users to aggregate huge volumes of data and process it in parallel.
+Redshift is fully managed by AWS and serves as a petabyte-scale service which allows users to create visualization reports and critically analyze collected data.
+The query results can be saved to an S3 Data Lake while additional analytics can be provided by Athena or SageMaker.
+
+LocalStack allows you to use the Redshift APIs in your local environment to analyze structured and semi-structured data across local data warehouses and data lakes.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_redshift" >}}), which provides information on the extent of Redshift's integration with LocalStack.
+
+{{< callout "Note" >}}
+Users on the Free plan can use Redshift APIs in LocalStack for basic mocking and testing.
+Advanced features like the Redshift Data API and other emulation capabilities require the Ultimate plan.
+{{< /callout >}}
+
+## Getting started
+
+This guide is designed for users new to Redshift and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a Redshift cluster and database, and how to use a Glue crawler to populate the Glue Data Catalog with the schema of the Redshift database tables, using the AWS CLI.
+
+### Define the variables
+
+First, we will define the variables we will use throughout this guide.
+Export the following variables in your shell:
+
+```bash
+REDSHIFT_CLUSTER_IDENTIFIER="redshiftcluster"
+REDSHIFT_SCHEMA_NAME="public"
+REDSHIFT_DATABASE_NAME="db1"
+REDSHIFT_TABLE_NAME="sales"
+REDSHIFT_USERNAME="crawlertestredshiftusername"
+REDSHIFT_PASSWORD="crawlertestredshiftpassword"
+GLUE_DATABASE_NAME="gluedb"
+GLUE_CONNECTION_NAME="glueconnection"
+GLUE_CRAWLER_NAME="gluecrawler"
+```
+
+The above variables will be used to create a Redshift cluster, database, table, and user.
+You will also create a Glue database, connection, and crawler to populate the Glue Data Catalog with the schema of the RedShift database tables.
+
+### Create a Redshift cluster and database
+
+You can create a Redshift cluster using the [`CreateCluster`](https://docs.aws.amazon.com/redshift/latest/APIReference/API_CreateCluster.html) API.
+The following command will create a Redshift cluster with the variables defined above:
+
+{{< command >}}
+$ awslocal redshift create-cluster \
+ --cluster-identifier $REDSHIFT_CLUSTER_IDENTIFIER \
+ --db-name $REDSHIFT_DATABASE_NAME \
+ --master-username $REDSHIFT_USERNAME \
+ --master-user-password $REDSHIFT_PASSWORD \
+ --node-type n1
+{{< / command >}}
+
+You can fetch the status of the cluster using the [`DescribeClusters`](https://docs.aws.amazon.com/redshift/latest/APIReference/API_DescribeClusters.html) API.
+Run the following command to extract the URL of the cluster:
+
+{{< command >}}
+$ REDSHIFT_URL=$(awslocal redshift describe-clusters \
+ --cluster-identifier $REDSHIFT_CLUSTER_IDENTIFIER | jq -r '(.Clusters[0].Endpoint.Address) + ":" + (.Clusters[0].Endpoint.Port|tostring)')
+{{< / command >}}
+
+### Create a Glue database, connection, and crawler
+
+You can create a Glue database using the [`CreateDatabase`](https://docs.aws.amazon.com/glue/latest/webapi/API_CreateDatabase.html) API.
+The following command will create a Glue database:
+
+{{< command >}}
+$ awslocal glue create-database \
+ --database-input "{\"Name\": \"$GLUE_DATABASE_NAME\"}"
+{{< / command >}}
+
+You can create a connection to the RedShift cluster using the [`CreateConnection`](https://docs.aws.amazon.com/glue/latest/webapi/API_CreateConnection.html) API.
+The following command will create a Glue connection with the RedShift cluster:
+
+{{< command >}}
+$ awslocal glue create-connection \
+ --connection-input "{\"Name\":\"$GLUE_CONNECTION_NAME\", \"ConnectionType\": \"JDBC\", \"ConnectionProperties\": {\"USERNAME\": \"$REDSHIFT_USERNAME\", \"PASSWORD\": \"$REDSHIFT_PASSWORD\", \"JDBC_CONNECTION_URL\": \"jdbc:redshift://$REDSHIFT_URL/$REDSHIFT_DATABASE_NAME\"}}"
+{{< / command >}}
+
+Finally, you can create a Glue crawler using the [`CreateCrawler`](https://docs.aws.amazon.com/glue/latest/webapi/API_CreateCrawler.html) API.
+The following command will create a Glue crawler:
+
+{{< command >}}
+$ awslocal glue create-crawler \
+ --name $GLUE_CRAWLER_NAME \
+ --database-name $GLUE_DATABASE_NAME \
+ --targets "{\"JdbcTargets\": [{\"ConnectionName\": \"$GLUE_CONNECTION_NAME\", \"Path\": \"$REDSHIFT_DATABASE_NAME/%/$REDSHIFT_TABLE_NAME\"}]}" \
+ --role r1
+{{< / command >}}
+
+### Create a table in Redshift
+
+You can create a table in Redshift by running a `CREATE TABLE` statement (see the [`CREATE TABLE` reference](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html)) through the Redshift Data API's [`ExecuteStatement`](https://docs.aws.amazon.com/redshift-data/latest/APIReference/API_ExecuteStatement.html) operation.
+The following command will create a table in Redshift:
+
+{{< command >}}
+$ REDSHIFT_STATEMENT_ID=$(awslocal redshift-data execute-statement \
+ --cluster-identifier $REDSHIFT_CLUSTER_IDENTIFIER \
+ --database $REDSHIFT_DATABASE_NAME \
+ --sql \
+ "create table $REDSHIFT_TABLE_NAME(salesid integer not null, listid integer not null, sellerid integer not null, buyerid integer not null, eventid integer not null, dateid smallint not null, qtysold smallint not null, pricepaid decimal(8,2), commission decimal(8,2), saletime timestamp)" | jq -r .Id)
+{{< / command >}}
+
+You can check the status of the statement using the [`DescribeStatement`](https://docs.aws.amazon.com/redshift-data/latest/APIReference/API_DescribeStatement.html) API.
+The following command polls the statement until its status is `FINISHED`:
+
+{{< command >}}
+$ wait "awslocal redshift-data describe-statement \
+ --id $REDSHIFT_STATEMENT_ID" ".Status" "FINISHED"
+{{< / command >}}
+
+### Run the crawler
+
+You can run the crawler using the [`StartCrawler`](https://docs.aws.amazon.com/glue/latest/webapi/API_StartCrawler.html) API.
+The following command will run the crawler:
+
+{{< command >}}
+$ awslocal glue start-crawler \
+ --name $GLUE_CRAWLER_NAME
+{{< / command >}}
+
+You can wait for the crawler to finish using the [`GetCrawler`](https://docs.aws.amazon.com/glue/latest/webapi/API_GetCrawler.html) API.
+The following command polls the crawler until its state is `READY`:
+
+{{< command >}}
+$ wait "awslocal glue get-crawler \
+ --name $GLUE_CRAWLER_NAME" ".Crawler.State" "READY"
+{{< / command >}}
+
+You can finally retrieve the schema of the table using the [`GetTable`](https://docs.aws.amazon.com/glue/latest/webapi/API_GetTable.html) API.
+The following command will retrieve the schema of the table:
+
+{{< command >}}
+$ awslocal glue get-table \
+ --database-name $GLUE_DATABASE_NAME \
+ --name "${REDSHIFT_DATABASE_NAME}_${REDSHIFT_SCHEMA_NAME}_${REDSHIFT_TABLE_NAME}"
+{{< / command >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing Redshift clusters.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Redshift** under the **Analytics** section.
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Cluster**: Create a new Redshift cluster by specifying the cluster identifier, database name, master username, master password, and node type.
+- **View Cluster**: View the details of a Redshift cluster, including the cluster identifier, database name, master username, master password, node type, and endpoint.
+- **Edit Cluster**: Edit an existing Redshift cluster by clicking the cluster name and clicking the **EDIT CLUSTER** button.
+- **Remove Cluster**: Remove an existing Redshift cluster by selecting it from the table, clicking the **ACTIONS** button, and then clicking **Remove Selected**.
diff --git a/src/content/docs/aws/services/resourcegroups.md b/src/content/docs/aws/services/resourcegroups.md
new file mode 100644
index 00000000..b3729229
--- /dev/null
+++ b/src/content/docs/aws/services/resourcegroups.md
@@ -0,0 +1,73 @@
+---
+title: "Resource Groups"
+linkTitle: "Resource Groups"
+tags: ["Free"]
+description: >
+ Get started with Resource Groups on LocalStack
+---
+
+## Introduction
+
+Resource Groups allow developers to organize and manage their AWS resources more efficiently.
+Resource Groups provide a unified view of your resources, allowing developers to perform specific actions, such as resource tagging, access control, and policy enforcement, across multiple resources simultaneously.
+Resource Groups in AWS provide two types of queries that developers can use to build groups: Tag-based queries and CloudFormation stack-based queries.
+With Tag-based queries, developers can organize resources based on common attributes or characteristics, while CloudFormation stack-based queries allow developers to group resources that are deployed together as part of a CloudFormation stack.
+
+LocalStack allows you to use the Resource Groups APIs in your local environment to group and categorize resources based on criteria such as tags, resource types, regions, or custom attributes.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_resource-groups" >}}), which provides information on the extent of Resource Group's integration with LocalStack.
+
+## Getting Started
+
+This guide is designed for users new to Resource Groups and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a Resource Group using the AWS CLI.
+We will use tag-based query to create a resource group.
+However, you can also use CloudFormation stack-based queries to create a resource group.
+
+### Create a Resource Group
+
+Resource Groups in AWS are built around the concept of queries, which serve as a fundamental component.
+Tag-based queries list the resource types in the format `AWS::<service>::<resource>` (e.g. `AWS::Lambda::Function`) along with the specified tags.
+A tag-based group is created based on a query of type `TAG_FILTERS_1_0`.
+
+Use the [`CreateGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_CreateGroup.html) API to create a Resource Group.
+Run the following command to create a Resource Group named `my-resource-group`:
+
+{{< command >}}
+$ awslocal resource-groups create-group \
+ --name my-resource-group \
+ --resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\"],\"TagFilters\":[{\"Key\":\"Stage\",\"Values\":[\"Test\"]}]}"}'
+{{< /command >}}
+
+You can also specify `AWS::AllSupported` as the `ResourceTypeFilters` value to include all supported resource types in the group.
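+
+For example, the following sketch groups every supported resource type carrying the `Stage=Test` tag; the group name is illustrative:
+
+{{< command >}}
+$ awslocal resource-groups create-group \
+    --name all-test-resources \
+    --resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::AllSupported\"],\"TagFilters\":[{\"Key\":\"Stage\",\"Values\":[\"Test\"]}]}"}'
+{{< /command >}}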
+
+### Update a Resource Group
+
+To update a Resource Group, use the [`UpdateGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_UpdateGroup.html) API.
+Execute the following command to update the Resource Group `my-resource-group`:
+
+{{< command >}}
+$ awslocal resource-groups update-group \
+    --group-name my-resource-group \
+    --description "EC2 instances, S3 buckets, and RDS DBs that we are using for the test stage"
+{{< /command >}}
+
+Furthermore, you can update the resource query associated with a Resource Group using the [`UpdateGroupQuery`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_UpdateGroupQuery.html) API.
+Run the following command to update the query of the Resource Group `my-resource-group`:
+
+{{< command >}}
+$ awslocal resource-groups update-group-query \
+    --group-name my-resource-group \
+    --resource-query '{"Type":"TAG_FILTERS_1_0","Query":"{\"ResourceTypeFilters\":[\"AWS::EC2::Instance\",\"AWS::S3::Bucket\",\"AWS::RDS::DBInstance\"],\"TagFilters\":[{\"Key\":\"Stage\",\"Values\":[\"Test\"]}]}"}'
+{{< /command >}}
+
+### Delete a Resource Group
+
+To delete a Resource Group, use the [`DeleteGroup`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_DeleteGroup.html) API.
+Run the following command to delete the Resource Group `my-resource-group`:
+
+{{< command >}}
+$ awslocal resource-groups delete-group \
+ --group-name my-resource-group
+{{< /command >}}
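+
+You can verify that the group is gone by listing the remaining groups with the [`ListGroups`](https://docs.aws.amazon.com/resource-groups/latest/APIReference/API_ListGroups.html) API:
+
+{{< command >}}
+$ awslocal resource-groups list-groups
+{{< /command >}}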
diff --git a/src/content/docs/aws/services/route53.md b/src/content/docs/aws/services/route53.md
new file mode 100644
index 00000000..8496ab94
--- /dev/null
+++ b/src/content/docs/aws/services/route53.md
@@ -0,0 +1,200 @@
+---
+title: "Route 53"
+linkTitle: "Route 53"
+description: Get started with Route 53 on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Route 53 is a highly scalable and reliable domain name system (DNS) web service provided by Amazon Web Services.
+Route 53 allows you to register domain names, and associate them with IP addresses or other resources.
+In addition to basic DNS functionality, Route 53 offers advanced features like health checks and DNS failover.
+Route 53 integrates seamlessly with other AWS services, allowing you to route traffic to CloudFront distributions, S3 buckets configured for static website hosting, EC2 instances, and more.
+
+LocalStack allows you to use the Route53 APIs in your local environment to create hosted zones and to manage DNS entries.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_route53" >}}), which provides information on the extent of Route53's integration with LocalStack.
+LocalStack also integrates with its DNS server to respond to DNS queries with these domains.
+
+{{< callout "note">}}
+LocalStack CLI does not publish port `53` anymore by default.
+Use the CLI flag `--host-dns` to expose the port on the host.
+This is required if you want to resolve Route53 domain names from your host machine using the LocalStack DNS server.
+{{< /callout >}}
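+
+For example, you can publish the DNS port on the host when starting LocalStack:
+
+{{< command >}}
+$ localstack start --host-dns
+{{< / command >}}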
+
+## Getting started
+
+This guide is designed for users new to Route53 and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a hosted zone and query the DNS record with the AWS CLI.
+
+### Create a hosted zone
+
+You can create a hosted zone for `example.com` using the [`CreateHostedZone`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_CreateHostedZone.html) API.
+Run the following command:
+
+{{< command >}}
+$ zone_id=$(awslocal route53 create-hosted-zone \
+ --name example.com \
+ --caller-reference r1 | jq -r '.HostedZone.Id')
+$ echo $zone_id
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+/hostedzone/WBCZ6F10CWV9J1G
+```
+
+### Change resource record sets
+
+You can now change the resource record sets for the hosted zone `example.com` using the [`ChangeResourceRecordSets`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal route53 change-resource-record-sets \
+ --hosted-zone-id $zone_id \
+ --change-batch 'Changes=[{Action=CREATE,ResourceRecordSet={Name=test.example.com,Type=A,ResourceRecords=[{Value=1.2.3.4}]}}]'
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+{
+ "ChangeInfo": {
+ "Id": "/change/C2682N5HXP0BZ4",
+ "Status": "INSYNC",
+ "SubmittedAt": "2010-09-10T01:36:41.958000Z"
+ }
+}
+```
+
+## DNS resolution
+
+LocalStack Pro supports the ability to respond to DNS queries for your Route53 domain names, with our [integrated DNS server]({{< ref "user-guide/tools/dns-server" >}}).
+
+{{< callout >}}
+To follow the example below you must [configure your system DNS to use the LocalStack DNS server]({{< ref "user-guide/tools/dns-server#system-dns-configuration" >}}).
+{{< /callout >}}
+
+### Query a DNS record
+
+You can query the DNS record using `dig` via the built-in DNS server by running the following command:
+
+{{< command >}}
+$ dig @localhost test.example.com
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+;; QUESTION SECTION:
+;test.example.com. IN A
+
+;; ANSWER SECTION:
+test.example.com. 300 IN A 1.2.3.4
+```
+
+### Customizing internal endpoint resolution
+
+The DNS name `localhost.localstack.cloud`, along with its subdomains like `mybucket.s3.localhost.localstack.cloud`, serves an internal routing purpose within LocalStack.
+It facilitates communication between a LocalStack compute environment (such as a Lambda function) and the LocalStack APIs, as well as between your containerized applications and the LocalStack APIs.
+For example configurations, see the [Network Troubleshooting guide]({{< ref "references/network-troubleshooting/endpoint-url/#from-your-container" >}}).
+
+For most use-cases, the default configuration of the internal LocalStack DNS name requires no modification.
+It functions seamlessly in typical scenarios.
+However, there are instances where adjusting the external resolution of this DNS name becomes necessary.
+For instance, this might be required when your LocalStack instance operates on a distinct Docker network compared to your application code or even on a separate machine.
+
+Suppose you want all subdomains in the format `*.localhost.localstack.cloud` to resolve to the IP address `5.6.7.8`, the address at which your LocalStack instance is reachable.
+This can be accomplished using Route53.
+
+Create a hosted zone for the domain `localhost.localstack.cloud` using the [`CreateHostedZone`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_CreateHostedZone.html) API.
+Run the following command:
+
+{{< command >}}
+$ zone_id=$(awslocal route53 create-hosted-zone \
+ --name localhost.localstack.cloud \
+ --caller-reference r1 | jq -r .HostedZone.Id)
+$ echo $zone_id
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+/hostedzone/3NF6SEGOB5EBHS1
+```
+
+You can now use the [`ChangeResourceRecordSets`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_ChangeResourceRecordSets.html) API to create a record set for the domain `localhost.localstack.cloud` using the `zone_id` retrieved in the previous step.
+Run the following command to accomplish this:
+
+{{< command >}}
+$ awslocal route53 change-resource-record-sets \
+ --hosted-zone-id $zone_id \
+ --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{"Name":"localhost.localstack.cloud","Type":"A","ResourceRecords":[{"Value":"5.6.7.8"}]}},{"Action":"CREATE","ResourceRecordSet":{"Name":"*.localhost.localstack.cloud","Type":"A","ResourceRecords":[{"Value":"5.6.7.8"}]}}]}'
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+{
+ "ChangeInfo": {
+ "Id": "/change/C2682N5HXP0BZ4",
+ "Status": "INSYNC",
+ "SubmittedAt": "2010-09-10T01:36:41.958000Z"
+ }
+}
+```
+
+You can now verify that the DNS name `localhost.localstack.cloud` and its subdomains resolve to the configured IP address:
+
+{{< command >}}
+$ dig @127.0.0.1 bucket1.s3.localhost.localstack.cloud
+$ dig @127.0.0.1 localhost.localstack.cloud
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+...
+;; ANSWER SECTION:
+bucket1.s3.localhost.localstack.cloud. 300 IN A 127.0.0.1
+bucket1.s3.localhost.localstack.cloud. 300 IN A 5.6.7.8
+...
+;; QUESTION SECTION:
+;localhost.localstack.cloud. IN A
+
+;; ANSWER SECTION:
+localhost.localstack.cloud. 300 IN A 5.6.7.8
+```
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for Route53, which you can use to create hosted zones and manage DNS entries.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Route53** under the **Analytics** section.
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Hosted Zone**: Create a hosted zone for a domain name by clicking on the **Create Hosted Zone** button.
+ This will open a modal where you can enter the name, VPC, and other parameters and click on the **Submit** button to create the hosted zone.
+- **View Hosted Zone**: View the details of a hosted zone by clicking on the specific hosted zone name.
+ This will open a modal where you can view the hosted zone details.
+- **Create Record**: Click on the **Records** button on the individual hosted zone page, followed by clicking **Create Record** to create a record for the hosted zone.
+ This will open a modal where you can enter the name, type, and other parameters and click on the **Submit** button to create the record.
+- **Edit Record**: Click on the **Records** button on the individual hosted zone page, followed by clicking **Edit** on the specific record to edit the record.
+  This will open a modal where you can edit the record details and click on the **Submit** button to save the changes.
+- **View Records**: Click on the **Records** button on the individual hosted zone page, followed by clicking on the specific record to view the record details.
+ This will open a modal where you can view the record details.
+- **Delete Hosted Zone**: Select the hosted zones you want to delete by clicking on the checkbox next to the hosted zone name, followed by clicking on the **Actions** button and then clicking on **Remove Selected**.
+- **Delete Record**: Click on the **Records** button on the individual hosted zone page, followed by clicking on the checkbox next to the specific record, and then clicking on the **Actions** button and **Remove Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Route53 in LocalStack for various use cases:
+
+- [DNS Failover with Route53 on LocalStack](https://github.com/localstack/localstack-pro-samples/tree/master/route53-dns-failover)
diff --git a/src/content/docs/aws/services/route53resolver.md b/src/content/docs/aws/services/route53resolver.md
new file mode 100644
index 00000000..dfebf69c
--- /dev/null
+++ b/src/content/docs/aws/services/route53resolver.md
@@ -0,0 +1,208 @@
+---
+title: "Route 53 Resolver"
+linkTitle: "Route 53 Resolver"
+description: Get started with Route 53 Resolver on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Route 53 Resolver allows you to route DNS queries between your virtual private cloud (VPC) and your network.
+Route 53 Resolver forwards DNS queries for domain names to the appropriate DNS service based on the configuration you set up.
+Route 53 Resolver can be used to resolve domain names between your VPC and your network, and to resolve domain names between your VPCs.
+
+LocalStack allows you to use the Route 53 Resolver endpoints in your local environment.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_route53resolver" >}}), which provides information on the extent of Route 53 Resolver's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Route53 Resolver and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a resolver endpoint, list the endpoints, and delete the endpoint with the AWS CLI.
+
+### Fetch the IP addresses & Security Group ID
+
+Fetch the default VPC ID using the following command:
+
+{{< command >}}
+$ VPC_ID=$(awslocal ec2 describe-vpcs --query 'Vpcs[?IsDefault==`true`].VpcId' --output text)
+{{< / command >}}
+
+Fetch the subnet IDs of the default VPC using the following command:
+
+{{< command >}}
+$ awslocal ec2 describe-subnets --filters Name=vpc-id,Values=$VPC_ID --query 'Subnets[].SubnetId'
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+[
+ "subnet-bdd58a47",
+ "subnet-957d6ba6",
+ "subnet-3f8669d3",
+ "subnet-ec2a41c6",
+ "subnet-3d583924",
+ "subnet-8c1b0af8"
+]
+```
+
+Choose two subnets from the list above and fetch each subnet's CIDR block, which tells you the range of IP addresses it contains:
+
+{{< command >}}
+$ awslocal ec2 describe-subnets --subnet-ids subnet-957d6ba6 --query 'Subnets[*].CidrBlock'
+
+[
+ "172.31.16.0/20"
+]
+
+$ awslocal ec2 describe-subnets --subnet-ids subnet-bdd58a47 --query 'Subnets[*].CidrBlock'
+
+[
+ "172.31.0.0/20"
+]
+
+{{< / command >}}
+
+Save the CIDR blocks of the subnets, as you will need them later.
+Lastly, fetch the security group ID of the default VPC:
+
+{{< command >}}
+$ awslocal ec2 describe-security-groups \
+ --filters Name=vpc-id,Values=$VPC_ID \
+ --query 'SecurityGroups[0].GroupId'
+
+sg-39936e572e797b360
+
+{{< / command >}}
+
+Save the security group ID as you will need it later.
+
+### Create a resolver endpoint
+
+Create a new file named `create-outbound-resolver-endpoint.json` and add the following content:
+
+```json
+{
+ "CreatorRequestId": "2020-01-01-18:47",
+ "Direction": "OUTBOUND",
+ "IpAddresses": [
+ {
+ "Ip": "172.31.0.0",
+ "SubnetId": "subnet-bdd58a47"
+ },
+ {
+ "Ip": "172.31.16.0",
+ "SubnetId": "subnet-957d6ba6"
+ }
+ ],
+ "Name": "my-outbound-endpoint",
+ "SecurityGroupIds": [ "sg-39936e572e797b360" ],
+ "Tags": [
+ {
+ "Key": "purpose",
+ "Value": "test"
+ }
+ ]
+}
+```
+
+Replace the `Ip` values with IP addresses from within your subnets' CIDR blocks, and the `SubnetId` values with the subnet IDs you fetched earlier.
+
+You can now use the [`CreateResolverEndpoint`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_CreateResolverEndpoint.html) API to create an outbound resolver endpoint.
+Run the following command:
+
+{{< command >}}
+$ awslocal route53resolver create-resolver-endpoint \
+ --cli-input-json file://create-outbound-resolver-endpoint.json
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+{
+ "ResolverEndpoint": {
+ "Id": "rslvr-out-5d61abaff9de06b99",
+ "CreatorRequestId": "2020-01-01-18:47",
+ "Arn": "arn:aws:route53resolver:us-east-1:000000000000:resolver-endpoint/rslvr-out-5d61abaff9de06b99",
+ "Name": "my-outbound-endpoint",
+ "SecurityGroupIds": [
+ "sg-39936e572e797b360"
+ ],
+ "Direction": "OUTBOUND",
+ "IpAddressCount": 2,
+ "HostVPCId": "vpc-d78cf7bb",
+ "Status": "CREATING",
+ "StatusMessage": "[Trace id: 1-bf9fe209-b90acae7cbcefe68a98b2882] Successfully created Resolver Endpoint",
+ "CreationTime": "2024-05-02T15:03:17.266471+00:00",
+ "ModificationTime": "2024-05-02T15:03:17.266491+00:00"
+ }
+}
+```
+
+### List the resolver endpoints
+
+You can list the resolver endpoints using the [`ListResolverEndpoints`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_ListResolverEndpoints.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal route53resolver list-resolver-endpoints
+{{< / command >}}
+
+You should see the following output:
+
+```bash
+{
+ "ResolverEndpoints": [
+ {
+ "Id": "rslvr-out-5d61abaff9de06b99",
+ "CreatorRequestId": "2020-01-01-18:47",
+ "Arn": "arn:aws:route53resolver:us-east-1:000000000000:resolver-endpoint/rslvr-out-5d61abaff9de06b99",
+ "Name": "my-outbound-endpoint",
+ "SecurityGroupIds": [
+ "sg-39936e572e797b360"
+ ],
+ "Direction": "OUTBOUND",
+ "IpAddressCount": 2,
+ "HostVPCId": "vpc-d78cf7bb",
+ "Status": "OPERATIONAL",
+ "StatusMessage": "[Trace id: 1-bf9fe209-b90acae7cbcefe68a98b2882] Successfully created Resolver Endpoint",
+ "CreationTime": "2024-05-02T15:03:17.266471+00:00",
+ "ModificationTime": "2024-05-02T15:03:17.266491+00:00"
+ }
+ ],
+ "MaxResults": 10
+}
+```
+
+### Delete the resolver endpoint
+
+You can delete the resolver endpoint using the [`DeleteResolverEndpoint`](https://docs.aws.amazon.com/Route53/latest/APIReference/API_route53resolver_DeleteResolverEndpoint.html) API.
+Run the following command:
+
+{{< command >}}
+$ awslocal route53resolver delete-resolver-endpoint \
+ --resolver-endpoint-id rslvr-out-5d61abaff9de06b99
+{{< / command >}}
+
+Replace `rslvr-out-5d61abaff9de06b99` with the ID of the resolver endpoint you want to delete.
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for Route53 Resolver, which you can use to create and manage resolver endpoints.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Route53** under the **Analytics** section.
+Navigate to the **Resolver Endpoints** tab to view the resolver endpoints.
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create resolver endpoint**: Create a resolver endpoint by clicking on the **Create Endpoint** button.
+ This will open a modal where you can enter the name, VPC, and other parameters and click on the **Submit** button to create the resolver endpoint.
+- **View resolver endpoint**: View the details of a resolver endpoint by clicking on the specific resolver endpoint name.
+ This will open a modal where you can view the resolver endpoint details.
+- **Edit resolver endpoint**: Edit the details of a resolver endpoint by clicking on the **Edit Endpoint** button in the specific resolver endpoint page.
+ This will open a modal where you can edit the resolver endpoint details.
+- **Delete resolver endpoint**: Select the resolver endpoints you want to delete by clicking on the checkbox next to the resolver endpoint name, followed by clicking on the **Actions** button and then clicking on **Remove Selected**.
diff --git a/src/content/docs/aws/services/s3.md b/src/content/docs/aws/services/s3.md
new file mode 100644
index 00000000..118d83d4
--- /dev/null
+++ b/src/content/docs/aws/services/s3.md
@@ -0,0 +1,327 @@
+---
+title: "Simple Storage Service (S3)"
+linkTitle: "Simple Storage Service (S3)"
+description: Get started with Amazon S3 on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Simple Storage Service (S3) is an object storage service that provides a highly scalable and durable solution for storing and retrieving data.
+In S3, a bucket represents a directory, while an object corresponds to a file.
+Each object or file within S3 encompasses essential attributes such as a unique key denoting its name, the actual content it holds, a version ID for versioning support, and accompanying metadata.
+S3 can store unlimited objects, allowing you to store, retrieve, and manage your data in a highly adaptable and reliable manner.
+
+LocalStack allows you to use the S3 APIs in your local environment to create new buckets, manage your S3 objects, and test your S3 configurations locally.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_s3" >}}), which provides information on the extent of S3's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to S3 and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your [preferred method]({{< ref "getting-started/installation" >}}).
+We will demonstrate how you can create an S3 bucket, manage S3 objects, and generate pre-signed URLs for S3 objects.
+
+### Create an S3 bucket
+
+You can create an S3 bucket using the [`CreateBucket`](https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html) API.
+Run the following command to create an S3 bucket named `sample-bucket`:
+
+{{< command >}}
+$ awslocal s3api create-bucket --bucket sample-bucket
+{{< / command >}}
+
+You can list your S3 buckets using the [`ListBuckets`](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-buckets.html) API.
+Run the following command to list your S3 buckets:
+
+{{< command >}}
+$ awslocal s3api list-buckets
+{{< / command >}}
+
+On successful creation of the S3 bucket, you will see the following output:
+
+```bash
+{
+ "Buckets": [
+ {
+ "Name": "sample-bucket",
+ "CreationDate": "2023-07-18T06:36:25+00:00"
+ }
+ ],
+ "Owner": {
+ "DisplayName": "webfile",
+ "ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a"
+ }
+}
+```
+
+### Managing S3 objects
+
+To upload a file to your S3 bucket, you can use the [`PutObject`](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html) API.
+Download a random image from the internet and save it as `image.jpg`.
+Run the following command to upload the file to your S3 bucket:
+
+{{< command >}}
+$ awslocal s3api put-object \
+ --bucket sample-bucket \
+ --key image.jpg \
+ --body image.jpg
+{{< / command >}}
+
+You can list the objects in your S3 bucket using the [`ListObjects`](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-objects.html) API.
+Run the following command to list the objects in your S3 bucket:
+
+{{< command >}}
+$ awslocal s3api list-objects \
+ --bucket sample-bucket
+{{< / command >}}
+
+If your image has been uploaded successfully, you will see the following output:
+
+```bash
+{
+ "Contents": [
+ {
+ "Key": "image.jpg",
+ "LastModified": "2023-07-18T06:40:07+00:00",
+ "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
+ "Size": 0,
+ "StorageClass": "STANDARD",
+ "Owner": {
+ "DisplayName": "webfile",
+ "ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a"
+ }
+ }
+ ]
+}
+```
+
+Run the following command to upload a file named `index.html` to your S3 bucket:
+
+{{< command >}}
+$ awslocal s3api put-object --bucket sample-bucket --key index.html --body index.html
+{
+    "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\""
+}
+{{< / command >}}
+
+### Generate a pre-signed URL for S3 object
+
+You can generate a pre-signed URL for your S3 object using the [`presign`](https://docs.aws.amazon.com/cli/latest/reference/s3/presign.html) command.
+A pre-signed URL allows anyone to retrieve the S3 object with an HTTP GET request.
+
+Run the following command to generate a pre-signed URL for your S3 object:
+
+{{< command >}}
+$ awslocal s3 presign s3://sample-bucket/image.jpg
+{{< / command >}}
+
+You will see a generated pre-signed URL for your S3 object.
+You can use [curl](https://curl.se/) or [`wget`](https://www.gnu.org/software/wget/) to retrieve the S3 object using the pre-signed URL.
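+
+For example, the following sketch downloads the object through its pre-signed URL:
+
+{{< command >}}
+$ curl -o downloaded-image.jpg "$(awslocal s3 presign s3://sample-bucket/image.jpg)"
+{{< / command >}}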
+
+## Path-Style and Virtual Hosted-Style Requests
+
+Similar to AWS, LocalStack categorizes requests as either [Path style or Virtual-Hosted style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html) based on the Host header of the request.
+The following example illustrates this distinction:
+
+```bash
+http://<bucket-name>.s3.<region>.localhost.localstack.cloud:4566/<key-name> # host-style request
+http://<bucket-name>.s3.localhost.localstack.cloud:4566/<key-name>          # host-style request, region is not mandatory in LocalStack
+http://s3.<region>.localhost.localstack.cloud:4566/<bucket-name>/<key-name> # path-style request
+http://localhost:4566/<bucket-name>/<key-name>                              # path-style request
+```
+
+A **Virtual-Hosted style** request will have the `bucket` as part of the `Host` header of your request.
+In order for LocalStack to be able to parse the bucket name from your request, your endpoint needs to be prefixed with `s3.`, like `s3.localhost.localstack.cloud`.
+
+If your endpoint cannot be prefixed with `s3.`, you should configure your SDK to use **Path style** request instead, and make the bucket part of the path.
+
+By default, most SDKs will try to use **Virtual-Hosted style** requests and prepend your endpoint with the bucket name.
+However, if the endpoint is not prefixed by `s3.`, LocalStack will not be able to understand the request and it will most likely result in an error.
+
+You can either change the endpoint to an S3-specific one, or configure your SDK to use **Path style** requests instead.
+Check out our [SDK documentation]({{< ref "sdks" >}}) to learn how you can configure AWS SDKs to access LocalStack and S3.
+
+{{< callout "tip" >}}
+While using [AWS SDKs](https://aws.amazon.com/developer/tools/#SDKs), you would need to configure the `ForcePathStyle` parameter to `true` in the S3 client configuration to use **Path style** requests.
+If you want to use virtual host addressing of buckets, you can remove `ForcePathStyle` from the configuration.
+The `ForcePathStyle` parameter name can vary between SDKs and languages; please check our [SDK documentation]({{< ref "sdks" >}}) for details.
+{{< /callout >}}
+
+If your endpoint is not prefixed with `s3.`, all requests are treated as **Path style** requests.
+Using the `s3.localhost.localstack.cloud` endpoint URL is recommended for all requests aimed at S3.
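+
+For example, with the AWS CLI you can switch the default profile to path-style addressing via the `addressing_style` setting (a sketch; adjust the profile name as needed):
+
+{{< command >}}
+$ aws configure set default.s3.addressing_style path
+{{< / command >}}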
+
+## Configuring Cross-Origin Resource Sharing on S3
+
+You can configure Cross-Origin Resource Sharing (CORS) on a LocalStack S3 bucket using the AWS Command Line Interface (CLI).
+This allows your local application to communicate directly with an S3 bucket in LocalStack.
+By default, LocalStack applies CORS rules that allow you to display and access your resources through the [LocalStack Web Application](https://app.localstack.cloud); these defaults are used whenever no CORS rules are configured on your bucket.
+
+To configure CORS rules for your S3 bucket, you can use the `awslocal` wrapper.
+The examples below assume a local web application running on [localhost:3000](http://localhost:3000); you can emulate the same behaviour with an AWS SDK or any integration you use.
+Follow this step-by-step guide to configure CORS rules on your S3 bucket.
+
+Run the following command on your terminal to create your S3 bucket:
+
+{{< command >}}
+$ awslocal s3api create-bucket --bucket cors-bucket
+{
+ "Location": "/cors-bucket"
+}
+{{< / command >}}
+
+Next, create a JSON file with the CORS configuration.
+The file should have the following format:
+
+```json
+{
+ "CORSRules": [
+ {
+ "AllowedHeaders": ["*"],
+ "AllowedMethods": ["GET", "POST", "PUT"],
+ "AllowedOrigins": ["http://localhost:3000"],
+ "ExposeHeaders": ["ETag"]
+ }
+ ]
+}
+```
+
+{{< callout >}}
+Note that this configuration is a sample, and you can tailor it to fit your needs better, for example, restricting the **AllowedHeaders** to specific ones.
+{{< /callout >}}
+
+Save the file locally with a name of your choice, for example, `cors-config.json`.
+Run the following command to apply the CORS configuration to your S3 bucket:
+
+{{< command >}}
+$ awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json
+{{< / command >}}
+
+You can further verify that the CORS configuration was applied successfully by running the following command:
+
+{{< command >}}
+$ awslocal s3api get-bucket-cors --bucket cors-bucket
+{{< / command >}}
+
+On applying the configuration successfully, you should see the same JSON configuration file you created earlier.
+Your S3 bucket is configured to allow cross-origin resource sharing, and if you try to send requests from your local application running on [localhost:3000](http://localhost:3000), they should be successful.
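+
+You can simulate a browser's preflight request with `curl` to inspect the CORS headers returned by LocalStack (the object key is illustrative):
+
+{{< command >}}
+$ curl -i -X OPTIONS \
+    -H "Origin: http://localhost:3000" \
+    -H "Access-Control-Request-Method: PUT" \
+    http://s3.localhost.localstack.cloud:4566/cors-bucket/image.jpg
+{{< / command >}}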
+
+However, if you try to access your bucket from [LocalStack Web Application](https://app.localstack.cloud), you'll see errors, and your bucket won't be accessible anymore.
+We can edit the JSON file `cors-config.json` you created earlier with the following configuration and save it:
+
+```json
+{
+ "CORSRules": [
+ {
+ "AllowedHeaders": ["*"],
+ "AllowedMethods": ["GET", "POST", "PUT", "HEAD", "DELETE"],
+ "AllowedOrigins": [
+ "http://localhost:3000",
+ "https://app.localstack.cloud",
+ "http://app.localstack.cloud"
+ ],
+ "ExposeHeaders": ["ETag"]
+ }
+ ]
+}
+```
+
+You can now run the same steps as before to update the CORS configuration and verify if it is applied correctly:
+
+{{< command >}}
+$ awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json
+$ awslocal s3api get-bucket-cors --bucket cors-bucket
+{{< / command >}}
+
+You can try again to upload files in your bucket from the [LocalStack Web Application](https://app.localstack.cloud) and it should work.
+
+## S3 Docker image
+
+LocalStack provides a Docker image for S3, which you can use to run S3 in a Docker container.
+The image is available on [Docker Hub](https://hub.docker.com/r/localstack/localstack) and can be pulled using the following command:
+
+{{< command >}}
+$ docker pull localstack/localstack:s3-latest
+{{< / command >}}
+
+The S3 Docker image only supports the S3 APIs and does not include other services like Lambda, DynamoDB, etc.
+You can run the S3 Docker image using any of the following commands:
+
+{{< tabpane lang="shell" >}}
+{{< tab header="LocalStack CLI" lang="shell" >}}
+IMAGE_NAME=localstack/localstack:s3-latest localstack start
+{{< /tab >}}
+{{< tab header="Docker Compose" lang="yml" >}}
+services:
+ localstack:
+ container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
+ image: localstack/localstack:s3-latest
+ ports:
+ - "127.0.0.1:4566:4566" # LocalStack Gateway
+ environment:
+ - DEBUG=${DEBUG:-0}
+ volumes:
+ - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
+ - "/var/run/docker.sock:/var/run/docker.sock"
+{{< /tab >}}
+{{< tab header="Docker" lang="shell" >}}
+docker run \
+ --rm \
+ -p 4566:4566 \
+ localstack/localstack:s3-latest
+{{< /tab >}}
+{{< /tabpane >}}
+
+The S3 Docker image supports the same S3 APIs as the regular LocalStack Docker image.
+You can use similar [configuration options]({{< ref "configuration/#s3" >}}) to alter the behaviour of the S3 Docker image, such as `DEBUG` or `S3_SKIP_SIGNATURE_VALIDATION`.
+
+{{< callout >}}
+The S3 Docker image does not support persistence, and all data is lost when the container is stopped.
+To use persistence or save the container state as a Cloud Pod, you need to use the [`localstack/localstack-pro`](https://hub.docker.com/r/localstack/localstack-pro) image.
+{{< /callout >}}
+
+## SSE-C Encryption
+
+SSE-C (Server-Side Encryption with Customer-Provided Keys) is an Amazon S3 encryption method where customers provide their own encryption keys for securing objects.
+AWS handles the encryption and decryption, but the keys are managed entirely by the customer.
+
+LocalStack supports SSE-C parameter validation for the following S3 APIs:
+
+- [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
+- [`GetObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
+- [`HeadObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)
+- [`GetObjectAttributes`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)
+- [`CopyObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)
+- [`CreateMultipartUpload`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+- [`UploadPart`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)
+
+However, LocalStack does not support the actual encryption and decryption of objects using SSE-C.
+
+## Resource Browser
+
+The LocalStack Web Application provides a [Resource Browser]({{< ref "resource-browser" >}}) for managing S3 buckets & configurations.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **S3** under the **Storage** section.
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Bucket**: Create a new S3 bucket by specifying a **Bucket Name**, **Bucket Configuration**, **ACL**, **Object Ownership**, and more.
+- **Objects & Permissions**: View, upload, download, and delete objects in your S3 buckets.
+ You can also view and edit the permissions, like the CORS Configuration for the bucket.
+- **Create Folder**: Create a new folder in your S3 bucket by clicking on the **Create Folder** button and specifying a **Folder Name**.
+- **Delete Bucket**: Delete an S3 bucket by selecting the S3 bucket and clicking on **Actions** button and clicking on **Remove Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use S3 in LocalStack for various use cases:
+
+- [Full-Stack application with Lambda, DynamoDB & S3 for shipment validation](https://github.com/localstack-samples/sample-shipment-list-demo-lambda-dynamodb-s3).
+- [Serverless Transcription application using Transcribe, S3, Lambda, SQS, and SES](https://github.com/localstack/sample-transcribe-app)
+- [Query data in S3 Bucket with Amazon Athena, Glue Catalog & CloudFormation](https://github.com/localstack/query-data-s3-athena-glue-sample)
+- [Serverless Image Resizer with Lambda, S3, SNS, and SES](https://github.com/localstack/serverless-image-resizer)
+- [Host a static website locally using Simple Storage Service (S3) and Terraform with LocalStack]({{< ref "s3-static-website-terraform" >}})
diff --git a/src/content/docs/aws/services/sagemaker.md b/src/content/docs/aws/services/sagemaker.md
new file mode 100644
index 00000000..6ff02246
--- /dev/null
+++ b/src/content/docs/aws/services/sagemaker.md
@@ -0,0 +1,120 @@
+---
+title: "SageMaker"
+linkTitle: "SageMaker"
+description: Get started with SageMaker on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Amazon SageMaker is a fully managed service provided by Amazon Web Services (AWS) that provides the tools to build, train, and deploy machine-learning models in the cloud for predictive analytics applications.
+It streamlines the machine learning development process, reduces the time and effort required to build and deploy models, and offers the scalability and flexibility needed for large-scale machine learning projects in the AWS cloud.
+
+LocalStack provides a local version of the SageMaker API, which allows running jobs to create machine learning models (e.g., using PyTorch) and to deploy them.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_sagemaker" >}}), which provides information on the extent of SageMaker's integration with LocalStack.
+
+{{< callout >}}
+LocalStack supports custom-built models in SageMaker.
+You can push your Docker image to LocalStack's Elastic Container Registry (ECR) and use it in SageMaker.
+LocalStack will use the local ECR image to create a SageMaker model.
+{{< /callout >}}
+
+## Getting started
+
+This guide is designed for users new to SageMaker and assumes basic knowledge of Python3 and [AWS SDK for Python (Boto3)](https://aws.amazon.com/sdk-for-python/).
+
+We will demonstrate a sample application that runs a machine learning job locally using the SageMaker API and performs the following steps:
+
+- Sets up an MNIST model in SageMaker using LocalStack.
+- Creates a SageMaker Endpoint for accessing the model.
+- Invokes the endpoint directly on the container via Boto3.
+
+{{< callout >}}
+SageMaker is a fairly comprehensive service.
+Currently, a subset of its functionality is available locally, but new features are added on a regular basis.
+{{< /callout >}}
+
+### Download the sample application
+
+You can download the sample application from [GitHub](https://github.com/localstack/localstack-pro-samples/tree/master/sagemaker-inference) or by running the following commands:
+
+{{< command >}}
+$ mkdir localstack-samples && cd localstack-samples
+$ git init
+$ git remote add origin -f git@github.com:localstack/localstack-pro-samples.git
+$ git config core.sparseCheckout true
+$ echo sagemaker-inference >> .git/info/sparse-checkout
+$ git pull origin master
+{{< /command >}}
+
+### Set up the environment
+
+After downloading the sample application, you can set up your Docker Client to pull the AWS Deep Learning images by running the following command:
+
+{{< command >}}
+$ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 763104351884.dkr.ecr.us-east-1.amazonaws.com
+{{< /command >}}
+
+Since the images are quite large (several gigabytes), it's a good idea to pull the images using Docker in advance.
+
+{{< command >}}
+$ docker pull 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:1.5.0-cpu-py3
+{{< /command >}}
+
+### Run the sample application
+
+Start your LocalStack container using your preferred method.
+Run the sample application by executing the following command:
+
+{{< command >}}
+$ python3 main.py
+{{< /command >}}
+
+You should see the following output:
+
+```bash
+Creating bucket...
+Uploading model data to bucket...
+Creating model in SageMaker...
+Adding endpoint configuration...
+Creating endpoint...
+Checking endpoint status...
+Endpoint not ready - waiting...
+Checking endpoint status...
+Endpoint ready!
+Invoking via boto...
+Predicted digits: [7, 3]
+Invoking endpoint directly...
+Predicted digits: [2, 6]
+```
+
+You can also invoke a serverless endpoint by navigating to `main.py` and uncommenting the [`run_serverless`](https://github.com/localstack/localstack-pro-samples/blob/cca7a59e0b2b46a18a3db226c31d44401b68447e/sagemaker-inference/main.py#L134) function call.
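+
+For reference, invoking an endpoint from your own code uses the standard SageMaker runtime client; a minimal boto3 sketch, assuming LocalStack on the default edge port with test credentials (the endpoint name and payload shape below are placeholders that depend on your model container):
+
+```python
+import json
+
+import boto3
+
+runtime = boto3.client(
+    "sagemaker-runtime",
+    endpoint_url="http://localhost:4566",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+    region_name="us-east-1",
+)
+
+response = runtime.invoke_endpoint(
+    EndpointName="my-endpoint",  # placeholder: use your endpoint's name
+    ContentType="application/json",
+    Body=json.dumps({"inputs": [[0.0] * 784]}),  # placeholder MNIST-shaped input
+)
+print(response["Body"].read())
+```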
+
+## Resource Browser
+
+The LocalStack Web Application provides a [Resource Browser]({{< ref "resource-browser" >}}) for managing SageMaker resources.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **SageMaker** under the **Compute** section.
+
+The Resource Browser displays Models, Endpoint Configurations, and Endpoints.
+You can click on individual resources to view their details.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create and Remove Models**: You can remove existing models and create a new model with the required configuration.
+
+
+
+- **Endpoint Configurations & Endpoints**: You can create endpoints from the Resource Browser that host your deployed machine learning models.
+  You can also create an endpoint configuration that specifies the type and number of instances used to serve your model on an endpoint.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use SageMaker in LocalStack for various use cases:
+
+- [MNIST handwritten digit recognition model](https://github.com/localstack-samples/sample-mnist-digit-recognition-sagemaker) demonstrates a web application that allows users to draw a digit and submit it to a locally running SageMaker endpoint.
+
+## Limitations
+
+Currently, GPU models are not supported by the LocalStack SageMaker implementation.
diff --git a/src/content/docs/aws/services/scheduler.md b/src/content/docs/aws/services/scheduler.md
new file mode 100644
index 00000000..09ebf360
--- /dev/null
+++ b/src/content/docs/aws/services/scheduler.md
@@ -0,0 +1,129 @@
+---
+title: "EventBridge Scheduler"
+linkTitle: "EventBridge Scheduler"
+description: Get started with EventBridge Scheduler on LocalStack
+tags: ["Free"]
+---
+
+## Introduction
+
+EventBridge Scheduler is a service that enables you to schedule the execution of your AWS Lambda functions, Amazon ECS tasks, and Amazon Batch jobs.
+You can use EventBridge Scheduler to create schedules that run at a specific time, at regular intervals, or within a flexible time window.
+
+LocalStack allows you to use the Scheduler APIs in your local environment to create and run schedules.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_scheduler" >}}), which provides information on the extent of EventBridge Scheduler's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to EventBridge Scheduler and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can create a new schedule, list all schedules, and tag a schedule using the EventBridge Scheduler APIs.
+
+### Create a new SQS queue
+
+You can create a new SQS queue using the [`CreateQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html) API.
+Run the following command to create a new SQS queue:
+
+{{< command >}}
+$ awslocal sqs create-queue --queue-name local-notifications
+{{< /command >}}
+
+You can fetch the Queue ARN using the [`GetQueueAttributes`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html) API.
+Run the following command to fetch the Queue ARN by specifying the Queue URL:
+
+{{< command >}}
+$ awslocal sqs get-queue-attributes \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/local-notifications \
+ --attribute-names All
+{{< /command >}}
+
+Save the Queue ARN for later use.
+
+### Create a new schedule
+
+You can create a new schedule using the [`CreateSchedule`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_CreateSchedule.html) API.
+Run the following command to create a new schedule:
+
+{{< command >}}
+$ awslocal scheduler create-schedule \
+ --name sqs-templated-schedule \
+ --schedule-expression 'rate(5 minutes)' \
+ --target '{"RoleArn": "arn:aws:iam::000000000000:role/schedule-role", "Arn":"arn:aws:sqs:us-east-1:000000000000:local-notifications", "Input": "test" }' \
+ --flexible-time-window '{ "Mode": "OFF"}'
+{{< /command >}}
+
+The following output is displayed:
+
+```bash
+{
+ "ScheduleArn": "arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule"
+}
+```
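+
+The equivalent call in Python is a thin wrapper over the same API; a minimal boto3 sketch, assuming LocalStack on the default edge port with test credentials:
+
+```python
+import boto3
+
+scheduler = boto3.client(
+    "scheduler",
+    endpoint_url="http://localhost:4566",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+    region_name="us-east-1",
+)
+
+response = scheduler.create_schedule(
+    Name="sqs-templated-schedule-py",
+    ScheduleExpression="rate(5 minutes)",
+    Target={
+        "RoleArn": "arn:aws:iam::000000000000:role/schedule-role",
+        "Arn": "arn:aws:sqs:us-east-1:000000000000:local-notifications",
+        "Input": "test",
+    },
+    FlexibleTimeWindow={"Mode": "OFF"},
+)
+print(response["ScheduleArn"])
+```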
+
+### List all schedules
+
+You can list all schedules using the [`ListSchedules`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_ListSchedules.html) API.
+Run the following command to list all schedules:
+
+{{< command >}}
+$ awslocal scheduler list-schedules
+{{< /command >}}
+
+The following output is displayed:
+
+```bash
+{
+ "Schedules": [
+ {
+ "Arn": "arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule",
+ "CreationDate": "2024-07-11T23:13:15.296906+05:30",
+ "GroupName": "default",
+ "LastModificationDate": "2024-07-11T23:13:15.296906+05:30",
+ "Name": "sqs-templated-schedule",
+ "State": "ENABLED",
+ "Target": {
+ "Arn": "arn:aws:sqs:us-east-1:000000000000:local-notifications"
+ }
+ }
+ ]
+}
+```
+
+### Tag a schedule
+
+You can tag a schedule using the [`TagResource`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_TagResource.html) API.
+Run the following command to tag a schedule:
+
+{{< command >}}
+$ awslocal scheduler tag-resource \
+ --resource-arn arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule \
+ --tags Key=Name,Value=Test
+{{< /command >}}
+
+You can view the tags associated with a schedule using the [`ListTagsForResource`](https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_ListTagsForResource.html) API.
+Run the following command to list the tags associated with a schedule:
+
+{{< command >}}
+$ awslocal scheduler list-tags-for-resource \
+ --resource-arn arn:aws:scheduler:us-east-1:000000000000:schedule/default/sqs-templated-schedule
+{{< /command >}}
+
+The following output is displayed:
+
+```bash
+{
+ "Tags": [
+ {
+ "Key": "Name",
+ "Value": "Test"
+ }
+ ]
+}
+```
+
+## Current Limitations
+
+EventBridge Scheduler in LocalStack only provides mocked functionality.
+It does not emulate actual features such as schedule execution or target triggering for Lambda functions or SQS queues.
diff --git a/src/content/docs/aws/services/secretsmanager.md b/src/content/docs/aws/services/secretsmanager.md
new file mode 100644
index 00000000..bc05e433
--- /dev/null
+++ b/src/content/docs/aws/services/secretsmanager.md
@@ -0,0 +1,193 @@
+---
+title: "Secrets Manager"
+linkTitle: "Secrets Manager"
+description: Get started with Secrets Manager on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Secrets Manager is a service provided by Amazon Web Services (AWS) that enables you to securely store, manage, and retrieve sensitive information such as passwords, API keys, and other credentials.
+Secrets Manager integrates seamlessly with AWS services, making it easier to manage secrets used by various applications and services.
+Secrets Manager supports automatic secret rotation, replacing long-term secrets with short-term ones to mitigate the risk of compromise without requiring application updates.
+
+LocalStack allows you to use the Secrets Manager APIs in your local environment to manage, retrieve, and rotate secrets.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_secretsmanager" >}}), which provides information on the extent of Secrets Manager's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Secrets Manager and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a secret, get the secret value, and rotate the secret using the AWS CLI.
+
+### Create a secret
+
+Before you create a secret, create a file named `secrets.json` and add the following content:
+
+{{< command >}}
+$ cat > secrets.json << EOF
+{
+  "username": "admin",
+  "password": "password"
+}
+EOF
+{{< /command >}}
+
+You can now create a secret using the [`CreateSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_CreateSecret.html) API.
+Execute the following command to create a secret named `test-secret`:
+
+{{< command >}}
+$ awslocal secretsmanager create-secret \
+    --name test-secret \
+    --description "LocalStack Secret" \
+    --secret-string file://secrets.json
+{{< /command >}}
+
+Upon successful execution, the output will provide you with the ARN of the newly created secret.
+This identifier will be useful for further operations or integrations.
+
+The following output would be retrieved:
+
+```bash
+{
+    "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:test-secret-pyfjVP",
+    "Name": "test-secret",
+    "VersionId": "a50c6752-3343-4eb0-acf3-35c74f00f707"
+}
+```
+
+### Describe the secret
+
+To retrieve the details of the secret you created earlier, you can use the [`DescribeSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_DescribeSecret.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal secretsmanager describe-secret \
+    --secret-id test-secret
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+    "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:test-secret-pyfjVP",
+    "Name": "test-secret",
+    "Description": "LocalStack Secret",
+    "LastChangedDate": 1692882479.857329,
+    "VersionIdsToStages": {
+        "a50c6752-3343-4eb0-acf3-35c74f00f707": [
+            "AWSCURRENT"
+        ]
+    },
+    "CreatedDate": 1692882479.857329
+}
+```
+
+You can also get a list of the secrets available in your local environment that have **Secret** in the name using the [`ListSecrets`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_ListSecrets.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal secretsmanager list-secrets \
+    --filters Key=name,Values=Secret
+{{< /command >}}
+
+### Get the secret value
+
+To retrieve the value of the secret you created earlier, you can use the [`GetSecretValue`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal secretsmanager get-secret-value \
+    --secret-id test-secret
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+    "ARN": "arn:aws:secretsmanager:us-east-1:000000000000:secret:test-secret-pyfjVP",
+    "Name": "test-secret",
+    "VersionId": "a50c6752-3343-4eb0-acf3-35c74f00f707",
+    "SecretString": "{\n  \"username\": \"admin\",\n  \"password\": \"password\"\n}\n",
+    "VersionStages": [
+        "AWSCURRENT"
+    ],
+    "CreatedDate": 1692882479.857329
+}
+```
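+
+Applications typically parse the JSON secret string after fetching it; a minimal boto3 sketch, assuming LocalStack on the default edge port and the `test-secret` created above:
+
+```python
+import json
+
+import boto3
+
+sm = boto3.client(
+    "secretsmanager",
+    endpoint_url="http://localhost:4566",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+    region_name="us-east-1",
+)
+
+secret = sm.get_secret_value(SecretId="test-secret")
+credentials = json.loads(secret["SecretString"])
+print(credentials["username"])
+```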
+
+You can tag your secret using the [`TagResource`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_TagResource.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal secretsmanager tag-resource \
+    --secret-id test-secret \
+    --tags Key=Environment,Value=Development
+{{< /command >}}
+
+### Rotate the secret
+
+To rotate a secret, you need a Lambda function that can rotate the secret.
+You can copy the code from a [Secrets Manager template](https://docs.aws.amazon.com/secretsmanager/latest/userguide/reference_available-rotation-templates.html) or you can use a [generic Lambda function](https://github.com/aws-samples/aws-secrets-manager-rotation-lambdas/blob/master/SecretsManagerRotationTemplate/lambda_function.py) that rotates the secret.
+
+Zip the Lambda function and create a Lambda function using the [`CreateFunction`](https://docs.aws.amazon.com/lambda/latest/dg/API_CreateFunction.html) API.
+Execute the following command:
+
+{{< command >}}
+$ zip my-function.zip lambda_function.py
+$ awslocal lambda create-function \
+    --function-name my-rotation-function \
+    --runtime python3.9 \
+    --zip-file fileb://my-function.zip \
+    --handler lambda_function.lambda_handler \
+    --role arn:aws:iam::000000000000:role/service-role/rotation-lambda-role
+{{< /command >}}
+
+You can now set a resource policy on the Lambda function to allow Secrets Manager to invoke it using [`AddPermission`](https://docs.aws.amazon.com/lambda/latest/dg/API_AddPermission.html) API.
+
+Please note that this is not required with the default LocalStack settings, since IAM permission enforcement is disabled by default.
+
+Execute the following command:
+
+{{< command >}}
+$ awslocal lambda add-permission \
+    --function-name my-rotation-function \
+    --action lambda:InvokeFunction \
+    --statement-id SecretsManager \
+    --principal secretsmanager.amazonaws.com
+{{< /command >}}
+
+You can now create a rotation schedule for the secret using the [`RotateSecret`](https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_RotateSecret.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal secretsmanager rotate-secret \
+    --secret-id test-secret \
+    --rotation-lambda-arn arn:aws:lambda:us-east-1:000000000000:function:my-rotation-function \
+    --rotation-rules "{\"ScheduleExpression\": \"cron(0 16 1,15 * ? *)\", \"Duration\": \"2h\"}"
+{{< /command >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing secrets in your local environment.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Secrets Manager** under the **Security Identity Compliance** section.
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Secret**: Create a new secret by clicking **Add a Secret** and providing the required details, such as Name, Tags, Kms Key Id, Secret String, and more.
+- **View Secrets**: View the details of a secret by clicking on the secret name.
+ You can also see the secret value by clicking on **Display Secret**.
+- **Edit Secret**: Edit the details of a secret by clicking on the secret name and then clicking **Edit Secret** and adding the new secret value.
+- **Delete Secret**: Delete a secret by clicking on the secret name and then clicking **Actions** and then **Remove Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Secrets Manager in LocalStack for various use cases:
+
+- [Amazon RDS initialization using CDK, Lambda, ECR, and Secrets Manager](https://github.com/localstack/amazon-rds-init-cdk)
diff --git a/src/content/docs/aws/services/serverlessrepo.md b/src/content/docs/aws/services/serverlessrepo.md
new file mode 100644
index 00000000..c7f3f860
--- /dev/null
+++ b/src/content/docs/aws/services/serverlessrepo.md
@@ -0,0 +1,109 @@
+---
+title: "Serverless Application Repository"
+linkTitle: "Serverless Application Repository"
+description: >
+ Get started with Serverless Application Repository on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+[Serverless Application Repository](https://aws.amazon.com/serverless/serverlessrepo/) allows developers to discover, deploy, and share serverless applications and components.
+Using Serverless Application Repository, developers can build & publish applications and components once and share them across the community and organizations, making them accessible to others.
+Serverless Application Repository provides a user-friendly interface to search, filter, and browse through a diverse catalog of serverless applications.
+
+LocalStack allows you to use the Serverless Application Repository APIs in your local environment to create, update, delete, and list serverless applications and components.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_serverlessrepo" >}}), which provides information on the extent of Serverless Application Repository's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Serverless Application Repository and assumes basic knowledge of the SAM CLI and our [`samlocal`](https://github.com/localstack/aws-sam-cli-local) wrapper script.
+
+Start your LocalStack container using your preferred method, such as via docker-compose.
+We will demonstrate how to create a Hello World SAM application with a simple API backend using the SAM CLI, and then publish it to the Serverless Application Repository by defining it in a SAM template.
+
+### Set up the SAM application
+
+To create a sample SAM application using the `samlocal` CLI, execute the following command:
+
+{{< command >}}
+$ samlocal init --runtime python3.9
+{{< /command >}}
+
+This command downloads a sample SAM application template and generates a `template.yaml` file in the current directory.
+The template includes a Lambda function and an API Gateway endpoint that supports a `GET` operation.
+
+### Package the SAM application
+
+Next, we can use the `samlocal` CLI to create a deployment package and a packaged SAM template.
+Add a `Metadata` section to your SAM template file (`template.yaml`) and specify the following properties:
+
+```yaml
+Metadata:
+ AWS::ServerlessRepo::Application:
+ Name: helloworld
+ Description: hello world
+ Author: author
+ SpdxLicenseId: Apache-2.0
+ Labels: ['tests']
+ SemanticVersion: 0.0.1
+```
+
+Once the Metadata section is added, run the following command to create the Lambda function deployment package and the packaged SAM template:
+
+{{< command >}}
+$ samlocal package \
+    --template-file template.yaml \
+    --output-template-file packaged.yaml
+{{< /command >}}
+
+This command generates a `packaged.yaml` file in the current directory containing the packaged SAM template.
+The packaged template will be similar to the original template file, but it will now include a `CodeUri` property for the Lambda function, as shown in the example below:
+
+```yaml
+Resources:
+ HelloWorldFunction:
+ Type: AWS::Serverless::Function
+ Properties:
+ CodeUri: s3://aws-sam-cli-managed-default-samclisourcebucket-b6325dc3/c6ce8fa8b5a97dd022ecd006536eb5a4
+```
+
+### Retrieve the Application ID
+
+To retrieve the Application ID for your SAM application, you can utilize the [`awslocal`](https://github.com/localstack/awscli-local) CLI by running the following command:
+
+{{< command >}}
+$ awslocal serverlessrepo list-applications
+{{< /command >}}
+
+The output includes the `ApplicationId` property, which is the Application ID of your SAM application, along with other properties such as `Author`, `Description`, `Name`, `SpdxLicenseId`, and `Version` that provide further details about your application.
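+
+If you prefer to script this step, a minimal boto3 sketch, assuming LocalStack on the default edge port with test credentials:
+
+```python
+import boto3
+
+repo = boto3.client(
+    "serverlessrepo",
+    endpoint_url="http://localhost:4566",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+    region_name="us-east-1",
+)
+
+# Print the Application ID and name of every registered application.
+for app in repo.list_applications()["Applications"]:
+    print(app["ApplicationId"], app["Name"])
+```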
+
+### Publish the SAM application
+
+To publish your application to the Serverless Application Repository, execute the following command:
+
+{{< command >}}
+$ samlocal publish \
+    --template packaged.yaml \
+    --region us-east-1
+{{< /command >}}
+
+### Delete the SAM application
+
+To remove a SAM application from the Serverless Application Repository, you can use the following command:
+
+{{< command >}}
+$ awslocal serverlessrepo delete-application \
+    --application-id <application-id>
+{{< /command >}}
+
+Replace `<application-id>` with the Application ID of your SAM application that you retrieved in the previous step.
+
+You can also create a CloudFormation changeset using the [`CreateCloudFormationChangeSet`](https://docs.aws.amazon.com/serverlessrepo/latest/devguide/serverlessrepo-how-to-publish.html) API, and then execute the changeset to deploy the SAM application using the [`ExecuteChangeSet`](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_ExecuteChangeSet.html) API.
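+
+A minimal boto3 sketch of that change-set flow, assuming LocalStack on the default edge port; the application ID below is a placeholder for the one returned by `list-applications`, and the stack name is arbitrary:
+
+```python
+import boto3
+
+endpoint = "http://localhost:4566"
+session = boto3.Session(
+    aws_access_key_id="test", aws_secret_access_key="test", region_name="us-east-1"
+)
+repo = session.client("serverlessrepo", endpoint_url=endpoint)
+cfn = session.client("cloudformation", endpoint_url=endpoint)
+
+# Create a change set for the published application, then execute it.
+change_set = repo.create_cloud_formation_change_set(
+    ApplicationId="<application-id>",  # placeholder: use your application's ID
+    StackName="helloworld-stack",      # arbitrary stack name
+)
+cfn.execute_change_set(ChangeSetName=change_set["ChangeSetId"])
+```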
+
+## Current Limitations
+
+- Since the application is only registered in your individual LocalStack instance, you cannot share it with other developers.
+- Currently, LocalStack only supports one AWS-hosted application (`arn:aws:serverlessrepo:us-east-1:297356227824:applications/SecretsManagerRDSPostgreSQLRotationMultiUser`).
diff --git a/src/content/docs/aws/services/servicediscovery.md b/src/content/docs/aws/services/servicediscovery.md
new file mode 100644
index 00000000..acc5c8f0
--- /dev/null
+++ b/src/content/docs/aws/services/servicediscovery.md
@@ -0,0 +1,242 @@
+---
+title: "Service Discovery"
+linkTitle: "Service Discovery"
+description: >
+ Get started with Service Discovery on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Service Discovery simplifies how applications locate and connect to the components and resources they depend on.
+It provides a centralized mechanism for dynamically registering, tracking, and resolving service instances, enabling seamless communication between services.
+Service Discovery uses Cloud Map API actions to manage HTTP and DNS namespaces for services, allowing automatic registration and discovery of services running in a cluster.
+
+LocalStack allows you to use the Service Discovery APIs in your local environment to monitor and manage your services across various environments and network topologies.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_servicediscovery" >}}), which provides information on the extent of Service Discovery's integration with LocalStack.
+
+## Getting Started
+
+This guide is designed for users new to Service Discovery and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an ECS service containing a Fargate task that uses Service Discovery with the AWS CLI.
+
+### Create a Cloud Map service discovery namespace
+
+To set up a private Cloud Map service discovery namespace, you can utilize the [`CreatePrivateDnsNamespace`](https://docs.aws.amazon.com/cloud-map/latest/api/API_CreatePrivateDnsNamespace.html) API.
+This API allows you to define a custom name for your namespace and specify the VPC ID where your services will be located.
+Before proceeding, make sure to create the required VPC.
+
+To create the private Cloud Map service discovery namespace, execute the following command:
+
+{{< command >}}
+$ awslocal servicediscovery create-private-dns-namespace \
+ --name tutorial \
+    --vpc <vpc-id>
+{{< /command >}}
+
+Ensure that you replace `<vpc-id>` with the actual ID of the VPC you intend to use for the namespace.
+Upon running this command, you will receive an output containing an `OperationId`.
+This identifier can be used to check the status of the operation.
+
+To verify the status of the operation, execute the following command:
+
+{{< command >}}
+$ awslocal servicediscovery get-operation \
+    --operation-id <operation-id>
+{{< /command >}}
+
+The output will include a `NAMESPACE` ID, which you will need to create a service within the namespace.
+
+### Create a Cloud Map service
+
+After creating the private Cloud Map service discovery namespace, you can proceed to create a service within that namespace using the [`CreateService`](https://docs.aws.amazon.com/cloud-map/latest/api/API_CreateService.html) API.
+This service represents a specific component or resource in your application.
+
+To create a service within the namespace, execute the following command:
+
+{{< command >}}
+$ awslocal servicediscovery create-service \
+ --name myapplication \
+    --dns-config "NamespaceId="<namespace-id>",DnsRecords=[{Type="A",TTL="300"}]" \
+ --health-check-custom-config FailureThreshold=1
+{{< /command >}}
+
+Upon successful execution, the output will provide you with the Service ID and the Amazon Resource Name (ARN) of the newly created service.
+These identifiers will be useful for further operations or integrations.
+
+### Create an ECS cluster
+
+To integrate the service you created earlier with an ECS (Elastic Container Service) service, you can follow the steps below.
+
+Start by creating an ECS cluster using the [`CreateCluster`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateCluster.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal ecs create-cluster \
+ --cluster-name tutorial
+{{< /command >}}
+
+### Register a task definition
+
+Next, you will register a task definition that's compatible with Fargate.
+Create a file named `fargate-task.json` and add the following content:
+
+```json
+{
+ "family": "tutorial-task-def",
+ "networkMode": "awsvpc",
+ "containerDefinitions": [
+ {
+ "name": "sample-app",
+ "image": "httpd:2.4",
+ "portMappings": [
+ {
+ "containerPort": 80,
+ "hostPort": 80,
+ "protocol": "tcp"
+ }
+ ],
+ "essential": true,
+ "entryPoint": [
+ "sh",
+ "-c"
+ ],
+ "command": [
+ "/bin/sh",
+ "-c",
+ "echo ' Amazon ECS Sample App Amazon ECS Sample App
Congratulations!
Your application is now running on a container in Amazon ECS.
' > /usr/local/apache2/htdocs/index.html && httpd-foreground"
+ ]
+ }
+ ],
+ "requiresCompatibilities": [
+ "FARGATE"
+ ],
+ "cpu": "256",
+ "memory": "512"
+}
+```
+
+Register the task definition using the [`RegisterTaskDefinition`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RegisterTaskDefinition.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal ecs register-task-definition \
+ --cli-input-json file://fargate-task.json
+{{< /command >}}
+
+### Create an ECS service
+
+To create an ECS service, you will need to retrieve the `securityGroups` and `subnets` associated with the VPC used to create the Cloud Map namespace.
+You can obtain this information by using the [`DescribeVpcs`](https://docs.aws.amazon.com/vpc/latest/APIReference/API_DescribeVpcs.html) API.
+Execute the following command to retrieve the details of all VPCs:
+
+{{< command >}}
+$ awslocal ec2 describe-vpcs
+{{< /command >}}
+
+The output will include a list of VPCs.
+Locate the VPC that was used to create the Cloud Map namespace and make a note of its `VpcId` value.
+
+Next, execute the following commands to retrieve the `securityGroups` and `subnets` associated with the VPC:
+
+{{< command >}}
+$ awslocal ec2 describe-security-groups --filters Name=vpc-id,Values=<vpc-id> --query 'SecurityGroups[*].[GroupId, GroupName]' --output text
+
+$ awslocal ec2 describe-subnets --filters Name=vpc-id,Values=<vpc-id> --query 'Subnets[*].[SubnetId, CidrBlock]' --output text
+{{< /command >}}
+
+Replace `<vpc-id>` with the actual `VpcId` value of the VPC you identified earlier.
+Make a note of the `GroupId` and `SubnetId` values.
+
+Create a new file named `ecs-service-discovery.json` and add the following content to it:
+
+```json
+{
+ "cluster": "tutorial",
+ "serviceName": "ecs-service-discovery",
+ "taskDefinition": "tutorial-task-def",
+ "serviceRegistries": [
+ {
+ "registryArn":
+ }
+ ],
+ "launchType": "FARGATE",
+ "platformVersion": "LATEST",
+ "networkConfiguration": {
+ "awsvpcConfiguration": {
+ "assignPublicIp": "ENABLED",
+ "securityGroups": [ "sg-*" ], // Add the security group IDs here
+ "subnets": [ "subnet-*" ] // Add the subnet IDs here
+ }
+ },
+ "desiredCount": 1
+}
+```
+
+Create your ECS service using the [`CreateService`](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateService.html) API.
+Execute the following command:
+
+{{< command >}}
+$ awslocal ecs create-service \
+ --cli-input-json file://ecs-service-discovery.json
+{{< /command >}}
+
+### Verify the service
+
+You can use the Service Discovery service ID to verify that the service was created successfully.
+Execute the following command:
+
+{{< command >}}
+$ awslocal servicediscovery list-instances \
+    --service-id <service-id>
+{{< /command >}}
+
+The output will include the IDs of the registered instances, and you can further query them using the [`DiscoverInstances`](https://docs.aws.amazon.com/cloud-map/latest/api/API_DiscoverInstances.html) API.
+This API allows you to query the DNS records associated with the service and perform various operations.
+
+To explore the DNS records of your service and perform other operations, refer to the [AWS CLI documentation](https://docs.aws.amazon.com/cli/latest/reference/servicediscovery/index.html) for comprehensive instructions and examples.
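+
+A minimal boto3 sketch of that query, assuming LocalStack on the default edge port and the `tutorial` namespace and `myapplication` service created above:
+
+```python
+import boto3
+
+sd = boto3.client(
+    "servicediscovery",
+    endpoint_url="http://localhost:4566",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+    region_name="us-east-1",
+)
+
+# Discover all registered instances of the service within the namespace.
+response = sd.discover_instances(
+    NamespaceName="tutorial",
+    ServiceName="myapplication",
+)
+for instance in response["Instances"]:
+    print(instance["InstanceId"], instance.get("Attributes", {}))
+```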
+
+### Using filters
+
+Filters can be used to narrow down the results of a list operation.
+Filters are supported for the following operations:
+
+- [`list-namespaces`](https://docs.aws.amazon.com/cli/latest/reference/servicediscovery/list-namespaces.html)
+- [`list-services`](https://docs.aws.amazon.com/cli/latest/reference/servicediscovery/list-services.html)
+- [`discover-instances`](https://docs.aws.amazon.com/cli/latest/reference/servicediscovery/discover-instances.html)
+
+Using `list-namespaces`, you can filter on the parameters `TYPE`, `NAME`, and `HTTP_NAME`.
+Using `list-services`, it is only possible to filter on `NAMESPACE_ID`.
+Both `list-services` and `list-namespaces` support `EQ` (the default condition if none is specified) and `BEGINS_WITH` as conditions.
+Both conditions only support a single value to match by.
+The following examples demonstrate how to use filters with these operations:
+
+{{< command >}}
+$ awslocal servicediscovery list-namespaces \
+ --filters "Name=HTTP_NAME,Values=['example-namespace'],Condition=EQ"
+{{< /command >}}
+
+{{< command >}}
+$ awslocal servicediscovery list-services \
+ --filters "Name=NAMESPACE_ID,Values=['id_to_match']"
+{{< /command >}}
+
+The `discover-instances` command supports query parameters and optional parameters as filter criteria.
+Conditions in query parameters must match for an instance to be returned, while optional parameters behave differently: if one or more optional conditions match, only the matching subset is returned; if none match, all unfiltered results are returned.
+
+This command will only return instances where the parameter `env` is equal to `fuu`:
+{{< command >}}
+$ awslocal servicediscovery discover-instances \
+ --namespace-name example-namespace \
+ --service-name example-service \
+ --query-parameters "env"="fuu"
+{{< /command >}}
+
+This command instead returns the instances where the optional parameter `env` equals `bar`; if no instances match, all instances are returned:
+{{< command >}}
+$ awslocal servicediscovery discover-instances \
+ --namespace-name example-namespace \
+ --service-name example-service \
+ --optional-parameters "env"="bar"
+{{< /command >}}
diff --git a/src/content/docs/aws/services/ses.md b/src/content/docs/aws/services/ses.md
new file mode 100644
index 00000000..f4509be4
--- /dev/null
+++ b/src/content/docs/aws/services/ses.md
@@ -0,0 +1,133 @@
+---
+title: "Simple Email Service (SES)"
+linkTitle: "Simple Email Service (SES)"
+description: Get started with Amazon Simple Email Service (SES) on LocalStack
+tags: ["Free", "Base"]
+persistence: supported
+---
+
+## Introduction
+
+Simple Email Service (SES) is an emailing service that can be integrated with other cloud-based services.
+It provides APIs to facilitate email templating, sending bulk emails, and more.
+
+The supported APIs are available on the API coverage page for [SESv1]({{< ref "coverage_ses" >}}) and [SESv2]({{< ref "coverage_sesv2" >}}).
+
+{{< callout "Note" >}}
+Users on the Free plan can use SES v1 APIs in LocalStack for basic mocking and testing.
+Advanced features like SMTP integration and other emulation capabilities require the Ultimate plan.
+{{< /callout >}}
+
+## Getting Started
+
+This is an introductory guide to get started with SES.
+Basic knowledge of the AWS CLI and LocalStack [`awslocal`](https://github.com/localstack/awscli-local) command is assumed.
+
+Start LocalStack using your preferred method.
+
+To be able to send emails, we need to create a verified identity.
+A verified identity appears as part of the 'From' field in the sent email.
+
+A single email identity can be added using the `VerifyEmailIdentity` operation.
+
+{{< command >}}
+$ awslocal ses verify-email-identity --email-address hello@example.com
+
+$ awslocal ses list-identities
+{
+ "Identities": [
+ "hello@example.com"
+ ]
+}
+{{< /command >}}
+
+{{< callout >}}
+On AWS, verifying email identities or domain identities requires additional steps, such as changing DNS configuration or clicking verification links, respectively.
+In LocalStack, identities are automatically verified.
+{{< /callout >}}
+
+Next, emails can be sent using the `SendEmail` operation.
+
+{{< command >}}
+$ awslocal ses send-email \
+ --from "hello@example.com" \
+ --message 'Body={Text={Data="This is the email body"}},Subject={Data="This is the email subject"}' \
+ --destination 'ToAddresses=jeff@aws.com'
+{
+ "MessageId": "labpqxukegeaftfh-ymaouvvy-ribr-qeoy-izfp-kxaxbfcfsgbh-wpewvd"
+}
+{{< /command >}}
+
+{{< callout >}}
+In LocalStack Community, all operations are mocked and no real emails are sent.
+In LocalStack Pro, it is possible to send real emails via an SMTP server.
+{{< /callout >}}
+
+## Retrieve Sent Emails
+
+LocalStack keeps track of all sent emails for retrospection.
+Sent messages can be retrieved in the following ways:
+- **API endpoint:** LocalStack provides a service endpoint (`/_aws/ses`) which can be used to return in-memory saved messages.
+ A `GET` call returns all messages.
+ Query parameters `id` and `email` can be used to filter by message ID and message source respectively.
+ {{< command >}}
+$ curl --silent localhost.localstack.cloud:4566/_aws/ses?email=hello@example.com | jq .
+{
+ "messages": [
+ {
+ "Id": "dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc",
+ "Region": "eu-central-1",
+ "Destination": {
+ "ToAddresses": [
+ "jeff@aws.com"
+ ]
+ },
+ "Source": "hello@example.com",
+ "Subject": "This is the email subject",
+ "Body": {
+ "text_part": "This is the email body",
+ "html_part": null
+ },
+ "Timestamp": "2023-09-11T08:37:13"
+ }
+ ]
+}
+ {{< /command >}}
+ A `DELETE` call clears all messages from the memory.
+ The query parameter `id` can be used to delete only a specific message.
+ {{< command >}}
+ $ curl -X DELETE localhost.localstack.cloud:4566/_aws/ses?id=dqxhhgoutkmylpbc-ffuqlkjs-ljld-fckp-hcph-wcsrkmxhhldk-pvadjc
+ {{< /command >}}
+- **Filesystem:** All messages are saved to the state directory (see [filesystem layout]({{< ref "filesystem" >}})).
+ The files are saved as JSON in the `ses/` subdirectory and named by the message ID.
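+
+This makes it straightforward to assert on outgoing mail in tests; a minimal sketch using boto3 and [requests](https://requests.readthedocs.io/), assuming LocalStack on the default edge port and the verified identity from above:
+
+```python
+import boto3
+import requests
+
+ses = boto3.client(
+    "ses",
+    endpoint_url="http://localhost:4566",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+    region_name="us-east-1",
+)
+
+ses.send_email(
+    Source="hello@example.com",
+    Destination={"ToAddresses": ["jeff@aws.com"]},
+    Message={
+        "Subject": {"Data": "This is the email subject"},
+        "Body": {"Text": {"Data": "This is the email body"}},
+    },
+)
+
+# Retrieve the stored message via the internal endpoint and assert on it.
+messages = requests.get(
+    "http://localhost:4566/_aws/ses",
+    params={"email": "hello@example.com"},
+).json()["messages"]
+assert messages[-1]["Subject"] == "This is the email subject"
+```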
+
+## SMTP Integration
+
+LocalStack Pro supports sending emails via an SMTP server.
+To enable this, set the connections parameters and access credentials for the server in the configuration.
+Refer to the [Configuration]({{< ref "configuration#emails" >}}) guide for details.
+
+{{< callout "tip" >}}
+If you do not have access to a live SMTP server, you can use tools like [MailDev](https://github.com/maildev/maildev) or [smtp4dev](https://github.com/rnwood/smtp4dev).
+These run as Docker containers on your local machine.
+Make sure they run in the same Docker network as the LocalStack container.
+{{< /callout >}}
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing email identities and inspecting sent emails.
+
+
+
+
+
+The Resource Browser allows you to perform the following actions:
+- **Create Email Identity**: Create an email identity by clicking **Create Identity** and specifying the email address.
+- **View Sent Emails**: View all sent emails from an email identity by clicking the email address.
+  You can view the details of a sent email by selecting it from the list.
+- **Send Emails**: After selecting an email identity, click **Send Message** and specify the destination fields (To, CC, and BCC addresses) and the body (plaintext, HTML) to send an email.
+
+## Current Limitations
+
+- It is currently not possible to [receive emails via SES](https://docs.aws.amazon.com/ses/latest/dg/receiving-email.html) in LocalStack.
+- All operations related to Receipt Rules are mocked.
diff --git a/src/content/docs/aws/services/shield.md b/src/content/docs/aws/services/shield.md
new file mode 100644
index 00000000..3b32f837
--- /dev/null
+++ b/src/content/docs/aws/services/shield.md
@@ -0,0 +1,105 @@
+---
+title: "Shield"
+linkTitle: "Shield"
+description: Get started with Shield on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Shield is a managed Distributed Denial of Service (DDoS) protection service that safeguards applications running on AWS.
+Shield provides always-on detection and inline mitigations that minimize application downtime and latency, protecting against the most common network and transport layer (L3/L4) DDoS attacks as well as application layer (L7) attacks.
+Shield detection and mitigation is designed to protect against threats, including ones that are not known to the service at the time of detection.
+
+LocalStack allows you to use the Shield APIs in your local environment, and provides a simple way to mock and test the Shield service locally.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_shield" >}}), which provides information on the extent of Shield's integration with LocalStack.
+
+## Getting Started
+
+This guide is designed for users new to Shield and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a Shield protection, list all protections, and delete a protection with the AWS CLI.
+
+### Create a Shield Protection
+
+To create a Shield protection, use the [`CreateProtection`](https://docs.aws.amazon.com/cli/latest/reference/shield/create-protection.html) API.
+The following command creates a Shield protection for a resource:
+
+{{< command >}}
+$ awslocal shield create-protection \
+ --name "my-protection" \
+ --resource-arn "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/app/my-alb/1234567890"
+{{< /command >}}
+
+The output should look similar to the following:
+
+```bash
+{
+ "ProtectionId": "67908d33-16c0-443d-820a-31c02c4d5976"
+}
+```
+
+### List all Protections
+
+To list all Shield protections, use the [`ListProtections`](https://docs.aws.amazon.com/cli/latest/reference/shield/list-protections.html) API.
+The following command lists all Shield protections:
+
+{{< command >}}
+$ awslocal shield list-protections
+{{< /command >}}
+
+The output should look similar to the following:
+
+```bash
+{
+ "Protections": [
+ {
+ "Id": "67908d33-16c0-443d-820a-31c02c4d5976",
+ "Name": "my-protection",
+ "ResourceArn": "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/app/my-alb/1234567890",
+ "ProtectionArn": "arn:aws:shield::000000000000:protection/67908d33-16c0-443d-820a-31c02c4d5976"
+ }
+ ]
+}
+```
+
+### Describe a Protection
+
+To describe a Shield protection, use the [`DescribeProtection`](https://docs.aws.amazon.com/cli/latest/reference/shield/describe-protection.html) API.
+The following command describes a Shield protection:
+
+{{< command >}}
+$ awslocal shield describe-protection \
+ --protection-id "67908d33-16c0-443d-820a-31c02c4d5976"
+{{< /command >}}
+
+Replace the protection ID with the ID of the protection you want to describe.
+The output should look similar to the following:
+
+```bash
+{
+ "Protection": {
+ "Id": "67908d33-16c0-443d-820a-31c02c4d5976",
+ "Name": "my-protection",
+ "ResourceArn": "arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/app/my-alb/1234567890",
+ "ProtectionArn": "arn:aws:shield::000000000000:protection/67908d33-16c0-443d-820a-31c02c4d5976"
+ }
+}
+```
+
+### Delete a Protection
+
+To delete a Shield protection, use the [`DeleteProtection`](https://docs.aws.amazon.com/cli/latest/reference/shield/delete-protection.html) API.
+The following command deletes a Shield protection:
+
+{{< command >}}
+$ awslocal shield delete-protection \
+ --protection-id "67908d33-16c0-443d-820a-31c02c4d5976"
+{{< /command >}}
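+
+The same lifecycle in Python; a minimal boto3 sketch, assuming LocalStack on the default edge port with test credentials:
+
+```python
+import boto3
+
+shield = boto3.client(
+    "shield",
+    endpoint_url="http://localhost:4566",
+    aws_access_key_id="test",
+    aws_secret_access_key="test",
+    region_name="us-east-1",
+)
+
+# Create a protection, describe it, then delete it again.
+protection_id = shield.create_protection(
+    Name="my-protection",
+    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:000000000000:loadbalancer/app/my-alb/1234567890",
+)["ProtectionId"]
+
+print(shield.describe_protection(ProtectionId=protection_id)["Protection"])
+shield.delete_protection(ProtectionId=protection_id)
+```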
+
+## Current Limitations
+
+Shield is currently mocked in LocalStack.
+You can create, read, update, and delete Shield protections & subscriptions, but no actual protection or subscription is applied to any resource.
+If you need this feature, please consider opening a [feature request on GitHub](https://github.com/localstack/localstack/issues/new).
diff --git a/src/content/docs/aws/services/sns.md b/src/content/docs/aws/services/sns.md
new file mode 100644
index 00000000..7b091368
--- /dev/null
+++ b/src/content/docs/aws/services/sns.md
@@ -0,0 +1,506 @@
+---
+title: "Simple Notification Service (SNS)"
+linkTitle: "Simple Notification Service (SNS)"
+description: Get started with Simple Notification Service (SNS) on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Simple Notification Service (SNS) is a serverless messaging service that can distribute a massive number of messages to multiple subscribers and can be used to send messages to mobile devices, email addresses, and HTTP(s) endpoints.
+SNS employs the publish/subscribe (pub/sub) messaging pattern, an asynchronous pattern that decouples services that produce events from services that process events.
+
+LocalStack allows you to use the SNS APIs in your local environment to coordinate the delivery of messages to subscribing endpoints or clients.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_sns" >}}), which provides information on the extent of SNS's integration with LocalStack.
+
+## Getting started
+
+This guide is intended for users who wish to get more acquainted with SNS over LocalStack.
+It assumes you have basic knowledge of the AWS CLI (and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script).
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an SNS topic, publish messages, and subscribe to the topic.
+
+### Create an SNS topic
+
+To create an SNS topic, use the [`CreateTopic`](https://docs.aws.amazon.com/sns/latest/api/API_CreateTopic.html) API.
+Run the following command to create a topic named `localstack-topic`:
+
+{{< command >}}
+$ awslocal sns create-topic --name localstack-topic
+{{< /command >}}
+
+You can set attributes on the SNS topic you created using the [`SetTopicAttributes`](https://docs.aws.amazon.com/sns/latest/api/API_SetTopicAttributes.html) API.
+Run the following command to set the `DisplayName` attribute for the topic:
+
+{{< command >}}
+$ awslocal sns set-topic-attributes \
+ --topic-arn arn:aws:sns:us-east-1:000000000000:localstack-topic \
+ --attribute-name DisplayName \
+ --attribute-value MyTopicDisplayName
+{{< /command >}}
+
+You can list all the SNS topics using the [`ListTopics`](https://docs.aws.amazon.com/sns/latest/api/API_ListTopics.html) API.
+Run the following command to list all the SNS topics:
+
+{{< command >}}
+$ awslocal sns list-topics
+{{< /command >}}
+
+### Get attributes and publish messages to SNS topic
+
+You can get attributes for a single SNS topic using the [`GetTopicAttributes`](https://docs.aws.amazon.com/sns/latest/api/API_GetTopicAttributes.html) API.
+Run the following command to get the attributes for the SNS topic:
+
+{{< command >}}
+$ awslocal sns get-topic-attributes \
+ --topic-arn arn:aws:sns:us-east-1:000000000000:localstack-topic
+{{< /command >}}
+
+You can change the `topic-arn` to the ARN of the SNS topic you created previously.
+
+To publish messages to the SNS topic, create a new file named `message.txt` in your current directory and add some content.
+Run the following command to publish messages to the SNS topic using the [`Publish`](https://docs.aws.amazon.com/sns/latest/api/API_Publish.html) API:
+
+{{< command >}}
+$ awslocal sns publish \
+ --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" \
+ --message file://message.txt
+{{< /command >}}
+
+### Subscribing to SNS topics and setting subscription attributes
+
+You can subscribe to the SNS topic using the [`Subscribe`](https://docs.aws.amazon.com/sns/latest/api/API_Subscribe.html) API.
+Run the following command to subscribe to the SNS topic:
+
+{{< command >}}
+$ awslocal sns subscribe \
+ --topic-arn arn:aws:sns:us-east-1:000000000000:localstack-topic \
+ --protocol email \
+ --notification-endpoint test@gmail.com
+{{< /command >}}
+
+You can configure the SNS subscription attributes using the `SubscriptionArn` returned by the previous step.
+For example, run the following command to set the `RawMessageDelivery` attribute for the subscription:
+
+{{< command >}}
+$ awslocal sns set-subscription-attributes \
+    --subscription-arn arn:aws:sns:us-east-1:000000000000:localstack-topic:b6f5e924-dbb3-41c9-aa3b-589dbae0cfff \
+ --attribute-name RawMessageDelivery --attribute-value true
+{{< /command >}}
+
+### Working with SQS subscriptions for SNS
+
+The getting started section above covers email subscriptions, but SNS integrates with many other AWS services, as shown in the [aws-cli docs](https://docs.aws.amazon.com/cli/latest/reference/sns/subscribe.html).
+A common service to integrate with is SQS.
+
+First, create an SQS queue named `my-queue`:
+{{< command >}}
+$ awslocal sqs create-queue --queue-name my-queue
+{
+ "QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
+}
+{{< /command >}}
+
+Subscribe the SQS queue to the topic we created previously:
+{{< command >}}
+$ awslocal sns subscribe --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" --protocol sqs --notification-endpoint "arn:aws:sqs:us-east-1:000000000000:my-queue"
+{
+ "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8"
+}
+{{< /command >}}
+
+Send a message to the queue via the topic:
+{{< command >}}
+$ awslocal sns publish --topic-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic" --message "hello"
+{
+ "MessageId": "5a1593ce-411b-44dc-861d-907daa05353b"
+}
+{{< /command >}}
+
+Check that our message has arrived:
+{{< command >}}
+$ awslocal sqs receive-message --queue-url "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
+{
+ "Messages": [
+ {
+ "MessageId": "72a15a17-5652-45ab-b4db-937f60f0c6d8",
+ "ReceiptHandle": "YjQ0YjgzMjAtNTk2NC00ZDk0LWE4ZGYtNjljMTViOTkwOTFmIGFybjphd3M6c3FzOnVzLWVhc3QtMTowMDAwMDAwMDAwMDA6bXktcXVldWUgNzJhMTVhMTctNTY1Mi00NWFiLWI0ZGItOTM3ZjYwZjBjNmQ4IDE3MDM3MDQxMTEuNTI2MzEwNA==",
+ "MD5OfBody": "2664b540fb6ce6fd7467cd8fb071c30f",
+ "Body": "{\"Type\": \"Notification\", \"MessageId\": \"5a1593ce-411b-44dc-861d-907daa05353b\", \"TopicArn\": \"arn:aws:sns:us-east-1:000000000000:localstack-topic\", \"Message\": \"hello\", \"Timestamp\": \"2023-12-27T19:07:55.341Z\", \"SignatureVersion\": \"1\", \"Signature\": \"EXAMPLEpH+..\", \"SigningCertURL\": \"...\", \"UnsubscribeURL\": \"http://localhost.localstack.cloud:4566/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8\"}"
+ }
+ ]
+}
+{{< /command >}}
+
+To remove the subscription, you need its ARN, which you can find by listing the subscriptions.
+You can list all the SNS subscriptions using the [`ListSubscriptions`](https://docs.aws.amazon.com/sns/latest/api/API_ListSubscriptions.html) API.
+Run the following command to list all the SNS subscriptions:
+
+{{< command >}}
+$ awslocal sns list-subscriptions
+{
+ "Subscriptions": [
+ {
+ "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8",
+ "Owner": "000000000000",
+ "Protocol": "sqs",
+ "Endpoint": "arn:aws:sqs:us-east-1:000000000000:my-queue",
+ "TopicArn": "arn:aws:sns:us-east-1:000000000000:localstack-topic"
+ }
+ ]
+}
+{{< /command >}}
+
+Then, use the ARN to unsubscribe:
+{{< command >}}
+$ awslocal sns unsubscribe --subscription-arn "arn:aws:sns:us-east-1:000000000000:localstack-topic:636e2a73-0dda-4e09-9fdf-77f113d0edd8"
+{{< /command >}}
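+
+The full topic-to-queue fanout above also translates directly to a test script; a minimal boto3 sketch, assuming LocalStack on the default edge port with test credentials (the resource names are arbitrary):
+
+```python
+import json
+
+import boto3
+
+endpoint = "http://localhost:4566"
+session = boto3.Session(
+    aws_access_key_id="test", aws_secret_access_key="test", region_name="us-east-1"
+)
+sns = session.client("sns", endpoint_url=endpoint)
+sqs = session.client("sqs", endpoint_url=endpoint)
+
+# Create the topic and queue, and look up the queue's ARN.
+topic_arn = sns.create_topic(Name="fanout-demo")["TopicArn"]
+queue_url = sqs.create_queue(QueueName="fanout-demo-queue")["QueueUrl"]
+queue_arn = sqs.get_queue_attributes(
+    QueueUrl=queue_url, AttributeNames=["QueueArn"]
+)["Attributes"]["QueueArn"]
+
+# Subscribe the queue to the topic and publish through the topic.
+sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)
+sns.publish(TopicArn=topic_arn, Message="hello")
+
+# The queue receives the SNS envelope; the payload is under "Message".
+received = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=5)
+body = json.loads(received["Messages"][0]["Body"])
+assert body["Message"] == "hello"
+```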
+
+## Developer endpoints
+
+LocalStack’s SNS implementation offers additional endpoints for developers located at `/_aws/sns`.
+These endpoints provide access to various SNS internals, such as platform endpoint messages that are not actually delivered to those platforms, or subscription tokens that you might not be able to retrieve otherwise.
+
+### Platform Endpoint messages
+
+For testing purposes, LocalStack retains all messages published to a platform endpoint in memory, making it easy to retrieve them.
+To learn more about SNS mobile push notifications, refer to the [AWS documentation on SNS mobile push notifications](https://docs.aws.amazon.com/sns/latest/dg/sns-mobile-application-as-subscriber.html).
+
+You can access these messages in JSON format through `GET /_aws/sns/platform-endpoint-messages`.
+To retrieve specific messages, you can use query parameters to filter by `accountId`, `region`, and `endpointArn`.
+You can also call `DELETE /_aws/sns/platform-endpoint-messages` to clear the messages.
+
+#### Query parameters
+
+| Parameter | Required | Description |
+| - | - | - |
+| `accountId` | No | The AWS Account ID from which the messages have been published. If not specified, it will use the default `000000000000` |
+| `region` | No | The AWS region from which the messages have been published. If not specified, it will use the default `us-east-1` |
+| `endpointArn` | No | The target `EndpointArn` to which the messages have been published. If specified, the response will contain only messages sent to this target. Otherwise, it will return all endpoints with their messages. |
+
+#### Response format and attributes
+
+| Attribute | Description |
+| - | - |
+| `platform_endpoint_messages` | Contains endpoints ARN as field names. Each endpoint will have its messages in an Array. |
+| `region` | The region of the endpoints and messages. |
+
+
+
+In this example, we will create a platform endpoint in SNS and publish a message to it.
+Run the following commands to create a platform endpoint:
+
+{{< command >}}
+$ awslocal sns create-platform-application --name app-test --platform APNS --attributes {}
+{{< /command >}}
+An example response is shown below:
+
+```json
+{
+ "PlatformApplicationArn": "arn:aws:sns:us-east-1:000000000000:app/APNS/app-test"
+}
+```
+
+Using the `PlatformApplicationArn` from the previous call:
+{{< command >}}
+$ awslocal sns create-platform-endpoint --platform-application-arn "arn:aws:sns:us-east-1:000000000000:app/APNS/app-test" --token my-fake-token
+{{< /command >}}
+
+```json
+{
+ "EndpointArn": "arn:aws:sns:us-east-1:000000000000:endpoint/APNS/app-test/c25f353e-856b-4b02-a725-6bde35e6e944"
+}
+```
+
+Publish a message to the platform endpoint:
+
+{{< command >}}
+$ awslocal sns publish --target-arn "arn:aws:sns:us-east-1:000000000000:endpoint/APNS/app-test/c25f353e-856b-4b02-a725-6bde35e6e944" --message '{"APNS_PLATFORM": "{\"aps\": {\"content-available\": 1}}"}' --message-structure json
+{{< /command >}}
+
+```json
+{
+ "MessageId": "ed501a7a-caab-45aa-a941-2fcc64b5c227"
+}
+```
+
+Retrieve the messages published to the platform endpoint using [curl](https://curl.se/):
+
+{{< command >}}
+$ curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq .
+{{< /command >}}
+
+```json
+{
+ "platform_endpoint_messages": {
+ "arn:aws:sns:us-east-1:000000000000:endpoint/APNS/app-test/c25f353e-856b-4b02-a725-6bde35e6e944": [
+ {
+ "TargetArn": "arn:aws:sns:us-east-1:000000000000:endpoint/APNS/app-test/c25f353e-856b-4b02-a725-6bde35e6e944",
+ "Message": "{\"APNS_PLATFORM\": \"{\\\"aps\\\": {\\\"content-available\\\": 1}}\"}",
+ "MessageAttributes": null,
+ "MessageStructure": "json",
+ "Subject": null
+ }
+ ]
+ },
+ "region": "us-east-1"
+}
+```
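+
+The same retrieval can be scripted; a minimal sketch using [requests](https://requests.readthedocs.io/), filtering on the `endpointArn` query parameter documented above:
+
+```python
+import requests
+
+# Fetch messages for a single platform endpoint by filtering on `endpointArn`.
+response = requests.get(
+    "http://localhost:4566/_aws/sns/platform-endpoint-messages",
+    params={
+        "endpointArn": "arn:aws:sns:us-east-1:000000000000:endpoint/APNS/app-test/c25f353e-856b-4b02-a725-6bde35e6e944"
+    },
+)
+payload = response.json()
+for arn, messages in payload["platform_endpoint_messages"].items():
+    print(arn, len(messages))
+```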
+
+With those same filters, you can reset the saved messages at `DELETE /_aws/sns/platform-endpoint-messages`.
+Run the following command to reset the saved messages:
+
+{{< command >}}
+$ curl -X "DELETE" "http://localhost:4566/_aws/sns/platform-endpoint-messages"
+{{< /command >}}
+We can now check that the messages have been properly deleted:
+{{< command >}}
+$ curl "http://localhost:4566/_aws/sns/platform-endpoint-messages" | jq .
+{{< /command >}}
+
+```json
+{
+ "platform_endpoint_messages": {},
+ "region": "us-east-1"
+}
+```
+
+### SMS messages
+
+For testing purposes, LocalStack also retains all SMS messages published to a phone number in memory, making it easy to retrieve them.
+To learn more about SNS SMS notifications, refer to the [AWS documentation on SNS mobile text messaging (SMS)](https://docs.aws.amazon.com/sns/latest/dg/sns-mobile-phone-number-as-subscriber.html).
+
+You can access these messages in JSON format through `GET /_aws/sns/sms-messages`.
+To retrieve specific messages, you can use query parameters to filter by `accountId`, `region`, and `phoneNumber`.
+You can also call `DELETE /_aws/sns/sms-messages` to clear the messages.
+
+#### Query parameters
+
+| Parameter | Required | Description |
+| - | - | - |
+| `accountId` | No | The AWS Account ID from which the messages have been published. If not specified, it will use the default `000000000000` |
+| `region` | No | The AWS region from which the messages have been published. If not specified, it will use the default `us-east-1` |
+| `phoneNumber` | No | The `phoneNumber` to which the messages have been published. If specified, the response will contain only messages sent to this number. Otherwise, it will return all phone numbers with their messages. |
+
+#### Response format and attributes
+
+| Attribute | Description |
+| - | - |
+| `sms_messages` | Contains phone numbers as field names. Each phone number will have its messages in an Array. |
+| `region` | The region from where the messages were sent. |
+
+
+
+In this example, we will publish a message to a phone number and retrieve it:
+
+Publish a message to a phone number:
+
+{{< command >}}
+$ awslocal sns publish --phone-number "+123123123" --message "Hello World!"
+{{< /command >}}
+An example response is shown below:
+
+```json
+{
+ "MessageId": "9ce56934-dcc4-45f5-ba40-13691329fc67"
+}
+```
+
+Retrieve the message published using [curl](https://curl.se/) and [jq](https://jqlang.github.io/jq/):
+
+{{< command >}}
+$ curl "http://localhost:4566/_aws/sns/sms-messages" | jq .
+{{< /command >}}
+
+```json
+{
+ "sms_messages": {
+ "+123123123": [
+ {
+ "PhoneNumber": "+123123123",
+ "TopicArn": null,
+ "SubscriptionArn": null,
+ "MessageId": "9ce56934-dcc4-45f5-ba40-13691329fc67",
+ "Message": "Hello World",
+ "MessageAttributes": {},
+ "MessageStructure": null,
+ "Subject": null
+ }
+ ]
+ },
+ "region": "us-east-1"
+}
+```
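+
+Using the query parameters described above, you can also fetch only the messages sent to a specific phone number (note that `+` must be URL-encoded as `%2B`):
+
+{{< command >}}
+$ curl "http://localhost:4566/_aws/sns/sms-messages?phoneNumber=%2B123123123" | jq .
+{{< /command >}}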
+
+You can reset the saved messages at `DELETE /_aws/sns/sms-messages`.
+Using the query parameters, you can also selectively reset messages only in one region or from one phone number.
+Run the following command to reset the saved messages:
+
+{{< command >}}
+$ curl -X "DELETE" "http://localhost:4566/_aws/sns/sms-messages"
+{{< /command >}}
+We can now check that the messages have been properly deleted:
+{{< command >}}
+$ curl "http://localhost:4566/_aws/sns/sms-messages" | jq .
+{{< /command >}}
+
+```json
+{
+ "sms_messages": {},
+ "region": "us-east-1"
+}
+```
+
+### Subscription Tokens
+
+For email and HTTP(S) subscriptions, SNS sends the subscriber a special message containing a link that must be followed to confirm the subscription before it can receive messages.
+SNS does not send messages to endpoints pending confirmation.
+
+However, when working with external integrations, the confirmation link will most likely point to your local environment, which the external integration cannot reach to confirm the subscription.
+
+To still be able to test your external integrations, we expose the subscription tokens so that you can manually confirm the subscription.
+The subscription tokens are never deleted from memory, because they can be re-used.
+To manually confirm the subscription, you will use [`ConfirmSubscription`](https://docs.aws.amazon.com/sns/latest/api/API_ConfirmSubscription.html).
+
+To learn more about confirming subscriptions, refer to the [AWS documentation](https://docs.aws.amazon.com/sns/latest/dg/SendMessageToHttp.confirm.html).
+
+You can access the subscription tokens in JSON format through `GET /_aws/sns/subscription-tokens/<subscription-arn>`.
+
+#### Path parameters
+
+| Parameter | Required | Description |
+| - | - | - |
+| `subscription-arn` | Yes | The SNS Subscription ARN for which you would like to fetch the tokens |
+
+#### Response format and attributes
+
+| Attribute | Description |
+| - | - |
+| `subscription_token` | The Subscription token to be used with `ConfirmSubscription`. |
+| `subscription_arn` | The Subscription ARN provided. |
+
+
+
+In this example, we will subscribe to an external SNS integration that does not confirm the subscription, retrieve the subscription token, and manually confirm it:
+
+Create an SNS topic and a subscription to an external HTTP SNS integration:
+
+{{< command >}}
+awslocal sns create-topic --name "test-external-integration"
+{{< /command >}}
+
+```json
+{
+ "TopicArn": "arn:aws:sns:us-east-1:000000000000:test-external-integration"
+}
+```
+
+We now create an HTTP SNS subscription to an external endpoint:
+{{< command >}}
+awslocal sns subscribe --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" --protocol https --notification-endpoint "https://api.opsgenie.com/v1/json/amazonsns?apiKey=b13fd59a-9" --return-subscription-arn
+{{< /command >}}
+
+```json
+{
+ "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8"
+}
+```
+
+Now, we can check the `PendingConfirmation` status of our subscription, showing our endpoint did not confirm the subscription.
+You will need to use the `SubscriptionArn` from the response of your subscribe call:
+{{< command >}}
+awslocal sns get-subscription-attributes --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8"
+{{< /command >}}
+
+```json
+{
+ "Attributes": {
+ "TopicArn": "arn:aws:sns:us-east-1:000000000000:test-external-integration",
+ "Endpoint": "https://api.opsgenie.com/v1/json/amazonsns?apiKey=b13fd59a-9",
+ "Protocol": "https",
+ "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8",
+ "PendingConfirmation": "true",
+ "Owner": "000000000000",
+ "RawMessageDelivery": "false",
+ "SubscriptionPrincipal": "arn:aws:iam::000000000000:user/DummySNSPrincipal"
+ }
+}
+```
+
+To manually confirm the subscription, we will fetch its token with our developer endpoint:
+{{< command >}}
+curl "http://localhost:4566/_aws/sns/subscription-tokens/arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8" | jq .
+{{< /command >}}
+
+```json
+{
+ "subscription_token": "75732d656173742d312f3b875fb03b875fb03b875fb03b875fb03b875fb03b87",
+ "subscription_arn": "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8"
+}
+```
+
+We can now use this token to manually confirm the subscription:
+{{< command >}}
+awslocal sns confirm-subscription --topic-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration" --token 75732d656173742d312f3b875fb03b875fb03b875fb03b875fb03b875fb03b87
+{{< /command >}}
+
+```json
+{
+ "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8"
+}
+```
+
+We can now finally verify the subscription has been confirmed:
+{{< command >}}
+awslocal sns get-subscription-attributes --subscription-arn "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8"
+{{< /command >}}
+
+```json
+{
+ "Attributes": {
+ "TopicArn": "arn:aws:sns:us-east-1:000000000000:test-external-integration",
+ "Endpoint": "https://api.opsgenie.com/v1/json/amazonsns?apiKey=b13fd59a-9",
+ "Protocol": "https",
+ "SubscriptionArn": "arn:aws:sns:us-east-1:000000000000:test-external-integration:c3ab47f3-b964-461d-84eb-903d8765b0c8",
+ "PendingConfirmation": "false",
+ "Owner": "000000000000",
+ "RawMessageDelivery": "false",
+ "SubscriptionPrincipal": "arn:aws:iam::000000000000:user/DummySNSPrincipal",
+ "ConfirmationWasAuthenticated": "true"
+ }
+}
+```
+
+SNS will now publish messages to your HTTP endpoint, even though the endpoint did not confirm the subscription itself.
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing SNS topics.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **SNS** under the **App Integration** section.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Topic**: Create a new SNS topic by specifying a topic name, attributes, and tags.
+- **View Details and Subscription**: View details and subscription of an SNS topic by selecting the topic name and navigating to the **Details** and **Subscriptions** tabs.
+- **Create Subscription**: Create a new subscription for an SNS topic by selecting the topic name, navigating to the **Subscriptions** tab, and clicking the **Create Subscription** button.
+ Fill in the required details such as protocol, endpoint, and attributes, delivery policy, return subscription ARN, and click **Create**.
+- **Delete Topic**: Delete an SNS topic by selecting the topic name and clicking the **Action** button, followed by **Delete Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use SNS in LocalStack for various use cases:
+
+- [Full-Stack application with AWS Lambda, DynamoDB & S3 for shipment validation](https://github.com/localstack/shipment-list-demo)
+- [Event-driven architecture with Amazon SNS FIFO, DynamoDB, Lambda, and S3](https://github.com/localstack/event-driven-architecture-with-amazon-sns-fifo)
+- [Loan Broker application with AWS Step Functions, DynamoDB, Lambda, SQS, and SNS](https://github.com/localstack/loan-broker-stepfunctions-lambda-app)
+- [Serverless Image Resizer with AWS Lambda, S3, SNS, and SES](https://github.com/localstack/serverless-image-resizer)
+
+## Current Limitations
+
+- LocalStack does not support the `cidr` operator for filter policies.
+ However, [other policies](https://docs.aws.amazon.com/sns/latest/dg/sns-subscription-filter-policies.html) are supported.
diff --git a/src/content/docs/aws/services/sqs.md b/src/content/docs/aws/services/sqs.md
new file mode 100644
index 00000000..0b46990b
--- /dev/null
+++ b/src/content/docs/aws/services/sqs.md
@@ -0,0 +1,650 @@
+---
+title: "Simple Queue Service (SQS)"
+description: Get started with Simple Queue Service (SQS) on LocalStack
+aliases:
+- /aws/sqs/
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Simple Queue Service (SQS) is a managed messaging service offered by AWS.
+It allows you to decouple different components of your applications by enabling asynchronous communication through message queues.
+SQS allows you to reliably send, store, and receive messages with support for standard and FIFO queues.
+
+LocalStack allows you to use the SQS APIs in your local environment to integrate and decouple distributed systems via hosted queues.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_sqs" >}}), which provides information on the extent of SQS's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to SQS and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an SQS queue, retrieve queue attributes and URLs, and receive and delete messages from the queue.
+
+### Create a queue
+
+To create an SQS queue, use the [`CreateQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_CreateQueue.html) API.
+Run the following command to create a queue named `localstack-queue`:
+
+{{< command >}}
+$ awslocal sqs create-queue --queue-name localstack-queue
+{{< / command >}}
+
+You can list all queues in your account using the [`ListQueues`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ListQueues.html) API.
+Run the following command to list all queues in your account:
+
+{{< command >}}
+$ awslocal sqs list-queues
+{{< / command >}}
+
+You will see the following output:
+
+```json
+{
+ "QueueUrls": [
+ "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue"
+ ]
+}
+```
+
+You can query queue attributes with the [`GetQueueAttributes`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_GetQueueAttributes.html) API.
+You need to pass the `queue-url` and `attribute-names` parameters.
+
+Run the following command to retrieve the queue attributes:
+
+{{< command >}}
+$ awslocal sqs get-queue-attributes --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue --attribute-names All
+{{< / command >}}
+
+### Sending and receiving messages from the queue
+
+You can send a message to the SQS queue, where it is stored until a consumer picks it up.
+To send a message to an SQS queue, you can use the [`SendMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html) API.
+
+Run the following command to send a message to the queue:
+
+{{< command >}}
+$ awslocal sqs send-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue --message-body "Hello World"
+{{< / command >}}
+
+It will return the MD5 hash of the Message Body and a Message ID.
+You will see output similar to the following:
+
+```json
+{
+ "MD5OfMessageBody": "b10a8db164e0754105b7a99be72e3fe5",
+ "MessageId": "92612c02-4879-47db-92f6-40bf2b341c07"
+}
+```
+
+You can receive messages from the queue using the [`ReceiveMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_ReceiveMessage.html) API.
+Run the following command to receive messages from the queue:
+
+{{< command >}}
+$ awslocal sqs receive-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue
+{{< / command >}}
+
+You will see the Message ID, MD5 hash of the Message Body, Receipt Handle, and the Message Body in the output.
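+The response has the following shape (the identifiers and receipt handle shown here are illustrative; your values will differ):
+
+```json
+{
+ "Messages": [
+ {
+ "MessageId": "92612c02-4879-47db-92f6-40bf2b341c07",
+ "ReceiptHandle": "MbZj6wDWli5vwwJaBV3dcjk2YW2vA3STFFljT...",
+ "MD5OfBody": "b10a8db164e0754105b7a99be72e3fe5",
+ "Body": "Hello World"
+ }
+ ]
+}
+```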
+
+### Delete a message from the queue
+
+To delete a message from the queue, you can use the [`DeleteMessage`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_DeleteMessage.html) API.
+You need to pass the `queue-url` and `receipt-handle` parameters.
+
+Run the following command to delete a message from the queue:
+
+{{< command >}}
+$ awslocal sqs delete-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue --receipt-handle <receipt-handle>
+{{< / command >}}
+
+Replace `<receipt-handle>` with the receipt handle you received in the previous step.
+If you have sent multiple messages to the queue, you can purge the queue using the [`PurgeQueue`](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_PurgeQueue.html) API.
+
+Run the following command to purge the queue:
+
+{{< command >}}
+$ awslocal sqs purge-queue --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue
+{{< / command >}}
+
+## Dead-letter queue testing
+
+LocalStack's SQS implementation supports both regular [dead-letter queues (DLQ)](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html) and [DLQ redrive](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue-redrive.html) via move message tasks.
+Here's an end-to-end example of how to use message move tasks to test DLQ redrive.
+
+First, create three queues.
+One will serve as the original input queue, one as the DLQ, and the third as the target for the DLQ redrive.
+{{< command >}}
+$ awslocal sqs create-queue --queue-name input-queue
+$ awslocal sqs create-queue --queue-name dead-letter-queue
+$ awslocal sqs create-queue --queue-name recovery-queue
+{
+ "QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue"
+}
+{
+ "QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/dead-letter-queue"
+}
+{
+ "QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/recovery-queue"
+}
+{{< /command >}}
+
+Configure `dead-letter-queue` to be a DLQ for `input-queue`:
+{{< command >}}
+$ awslocal sqs set-queue-attributes \
+--queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue \
+--attributes '{
+ "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:000000000000:dead-letter-queue\",\"maxReceiveCount\":\"1\"}"
+}'
+{{< /command >}}
+
+Send a message to the input queue:
+{{< command >}}
+$ awslocal sqs send-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue --message-body '{"hello": "world"}'
+{{< /command >}}
+
+Receive the message twice to provoke a move into the dead-letter queue:
+{{< command >}}
+$ awslocal sqs receive-message --visibility-timeout 0 --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
+$ awslocal sqs receive-message --visibility-timeout 0 --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/input-queue
+{{< /command >}}
+
+In the LocalStack logs you should see something like the following line, indicating the message was moved to the DLQ:
+
+```bash
+2024-01-24T13:51:16.824 DEBUG --- [ asgi_gw_1] l.services.sqs.models : message SqsMessage(id=5be95a04-93f0-4b9d-8bd5-6695f34758cf,group=None) has been received 2 times, marking it for DLQ
+```
+
+Now, start a message move task to asynchronously move the messages from the DLQ into the recovery queue:
+{{< command >}}
+$ awslocal sqs start-message-move-task \
+ --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue \
+ --destination-arn arn:aws:sqs:us-east-1:000000000000:recovery-queue
+{{< /command >}}
+
+Listing the message move tasks should yield something like the following:
+{{< command >}}
+$ awslocal sqs list-message-move-tasks --source-arn arn:aws:sqs:us-east-1:000000000000:dead-letter-queue
+{
+ "Results": [
+ {
+ "Status": "COMPLETED",
+ "SourceArn": "arn:aws:sqs:us-east-1:000000000000:dead-letter-queue",
+ "DestinationArn": "arn:aws:sqs:us-east-1:000000000000:recovery-queue",
+ "ApproximateNumberOfMessagesMoved": 1,
+ "ApproximateNumberOfMessagesToMove": 1,
+ "StartedTimestamp": 1706097183866
+ }
+ ]
+}
+{{< /command >}}
+
+Receiving messages from the recovery queue should now show us the original message:
+{{< command >}}
+$ awslocal sqs receive-message --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/recovery-queue
+{
+ "Messages": [
+ {
+ "MessageId": "5be95a04-93f0-4b9d-8bd5-6695f34758cf",
+ "ReceiptHandle": "NzkwMWJiZDYtMzgyNy00Nzc3LTlkODMtMmEzYTNjYjlhZWQwIGFybjphd3M6c3FzOnV...",
+ "MD5OfBody": "49dfdd54b01cbcd2d2ab5e9e5ee6b9b9",
+ "Body": "{\"hello\": \"world\"}"
+ }
+ ]
+}
+{{< /command >}}
+
+## SQS Query API
+
+The [SQS Query API](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-making-api-requests.html) provides SQS Queue URLs as endpoints, enabling direct HTTP requests to the queues.
+LocalStack extends support for the Query API.
+
+With LocalStack, you can conveniently test SQS Query API calls without the need to sign or include `AUTHPARAMS` in your HTTP requests.
+
+For instance, you can use a basic [curl](https://curl.se/) command to send a `SendMessage` command along with a MessageBody attribute:
+
+{{< command >}}
+$ curl "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
+{{< / command >}}
+
+You will see the following output:
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<SendMessageResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
+    <SendMessageResult>
+        <MD5OfMessageBody>c6be4e95a26409675447367b3e79f663</MD5OfMessageBody>
+        <MessageId>466144ab-1d03-4ec5-8d70-97535b2957fb</MessageId>
+    </SendMessageResult>
+    <ResponseMetadata>
+        <RequestId>JU40AF5GORK0WSR75MOY3VNQ1KZ3TAI7S5KAJYGK9C5P4W4XKMGF</RequestId>
+    </ResponseMetadata>
+</SendMessageResponse>
+```
+
+To receive JSON responses from the server, include the `Accept: application/json` header in your request.
+Here's an example using the [curl](https://curl.se/) command:
+
+{{< command >}}
+$ curl -H "Accept: application/json" "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue?Action=SendMessage&MessageBody=hello%2Fworld"
+{{< / command >}}
+
+The response will be in JSON format:
+
+```json
+{
+ "SendMessageResponse": {
+ "SendMessageResult": {
+ "MD5OfMessageBody": "c6be4e95a26409675447367b3e79f663",
+ "MessageId": "748297f2-4abd-4ec2-afc0-4d1a497fe604"
+ },
+ "ResponseMetadata": {
+ "RequestId": "XEA5L5AX16RTPET25U3TIRIASN6KNIT820WIT3EY7RCH7164W68T"
+ }
+ }
+}
+```
+
+## Configuration
+
+### Queue URLs
+
+You can control the format of the generated Queue URLs by setting the environment variable `SQS_ENDPOINT_STRATEGY` when starting LocalStack to one of the following values.
+
+| Value | URL format | Description |
+|------------|----------------------------------------------------------------------------|-------------|
+| `standard` | `sqs.<region>.localhost.localstack.cloud:4566/<account_id>/<queue_name>` | Default. This strategy resembles AWS the closest (see [Identifiers for Amazon SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-queue-message-identifiers.html#sqs-general-identifiers)) and comes with full multi-account and multi-region support. |
+| `domain` | `<region>.queue.localhost.localstack.cloud:4566/<account_id>/<queue_name>` | This strategy behaves like the [SQS legacy service endpoints](https://docs.aws.amazon.com/general/latest/gr/sqs-service.html#sqs_region), and uses `localhost.localstack.cloud` to resolve to localhost. For the `us-east-1` region, the `<region>.` prefix is omitted. |
+| `path` | `localhost:4566/queue/<region>/<account_id>/<queue_name>` | An alternative that can be useful if you cannot resolve LocalStack's `localhost` domain. |
+| `dynamic` | Any of the above, depending on the hostname used in the request | The URL is constructed based on the hostname the client used to call LocalStack, which ensures the client will be able to reach the returned URL. |
+| `off` | `localhost:4566/<account_id>/<queue_name>` | A legacy format kept for backward compatibility. This format does not encode the region, so you will encounter limitations when querying queues with the same name that exist in different regions. |
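+
+For example, to generate domain-style queue URLs, you could start LocalStack as follows (a minimal sketch using the LocalStack CLI):
+
+{{< command >}}
+$ SQS_ENDPOINT_STRATEGY=domain localstack start
+{{< /command >}}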
+
+### Enabling `PurgeQueue` errors
+
+In AWS, there is a restriction that allows only one call to the `PurgeQueue` operation every 60 seconds.
+You can refer to the [`PurgeQueue` API Reference](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_PurgeQueue.html) for more details.
+
+By default, LocalStack disables this behavior.
+However, if you want to enable the retry delay for `PurgeQueue` in LocalStack, you can start it with the `SQS_DELAY_PURGE_RETRY=1` environment variable.
+
+### Enabling `QueueDeletedRecently` errors
+
+In AWS, there is a restriction that prevents the creation of a queue with the same name within 60 seconds after it has been deleted.
+You can find more information about this behavior in the [`DeleteQueue` API Reference](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_DeleteQueue.html).
+
+By default, LocalStack disables this behavior.
+However, if you want to enable the delay for creating a recently deleted queue in LocalStack, you can start it with the `SQS_DELAY_RECENTLY_DELETED=1` environment variable.
+
+### Enabling `MessageRetentionPeriod`
+
+In AWS, you can set the `MessageRetentionPeriod` to control the length of time, in seconds, for which Amazon SQS retains a message.
+You can find more details in the [`SetQueueAttributes` API reference](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SetQueueAttributes.html#API_SetQueueAttributes_RequestParameters).
+
+You can enable this behavior in LocalStack by setting the `SQS_ENABLE_MESSAGE_RETENTION_PERIOD=1` environment variable.
+In AWS, valid values for message retention range from 60 (1 minute) to 1,209,600 (14 days).
+In LocalStack, we do not put constraints on the value, which can be helpful for test scenarios.
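+
+For example, assuming LocalStack was started with `SQS_ENABLE_MESSAGE_RETENTION_PERIOD=1` and the `localstack-queue` from the getting started section exists, the following sets a retention period of 60 seconds:
+
+{{< command >}}
+$ awslocal sqs set-queue-attributes \
+ --queue-url http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue \
+ --attributes '{"MessageRetentionPeriod": "60"}'
+{{< /command >}}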
+
+{{< callout >}}
+Note that, if you enable this option, [persistence]({{< ref "user-guide/state-management/persistence" >}}) or [cloud pods]({{< ref "user-guide/state-management/cloud-pods" >}}) for SQS may not work as expected.
+The reason is that LocalStack does not adjust timestamps when restoring a state, so time appears to pass between LocalStack runs.
+Consequently, when you restart LocalStack after a period longer than the message retention period, LocalStack will remove all those messages when SQS starts.
+{{< /callout >}}
+
+### Disable CloudWatch Metrics Reporting
+
+When working with SQS messages, actions like sending, receiving, and deleting them will automatically trigger CloudWatch metrics.
+This feature, known as [CloudWatch metrics for Amazon SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-available-cloudwatch-metrics.html), is enabled by default but can be deactivated if needed.
+
+Disabling CloudWatch metrics can enhance the performance of SQS message operations.
+However, it's important to note that deactivation will also disable any integration with CloudWatch, including the triggering of alarms based on metrics.
+
+By default, the `Approximate*` queue metrics are sent to CloudWatch once every minute.
+You can customize the reporting interval (in seconds) by setting the `SQS_CLOUDWATCH_METRICS_REPORT_INTERVAL` variable to the desired value, such as `SQS_CLOUDWATCH_METRICS_REPORT_INTERVAL=120`.
+
+If you wish to disable all CloudWatch metrics for SQS, including the `Approximate*` metrics, you can set the `SQS_DISABLE_CLOUDWATCH_METRICS` variable to `1`.
+
+## Accessing queues from Lambdas or other containers
+
+With the SQS Query API, Queue URLs act as HTTP-accessible endpoints.
+Several SDKs, such as the Java SDK, leverage the SQS Query API for SQS interaction.
+
+By default, Queue URLs are configured to point to `http://localhost:4566`.
+This configuration can pose problems when Lambdas or other containers attempt to make direct calls to these queue URLs.
+These issues arise due to the fact that a Lambda function operates within a separate Docker container, and LocalStack is not accessible at the `localhost` address within that container.
+
+For instance, users of the Java SDK often encounter the following error when trying to access an SQS queue from their Lambda functions:
+
+```bash
+2023-07-28 15:04:00 Unable to execute HTTP request: Connect to localhost:4566 [localhost/127.0.0.1] failed: Connection refused (Connection refused): com.amazonaws.SdkClientException
+2023-07-28 15:04:00 com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to localhost:4566 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
+...
+```
+
+To address this issue, you can consider the steps documented below.
+
+### Lambda
+
+When utilizing the SQS Query API in Lambdas, we suggest configuring `SQS_ENDPOINT_STRATEGY=domain`.
+This configuration results in queue URLs using `*.queue.localhost.localstack.cloud` as their domain names.
+Our Lambda implementation automatically resolves these URLs to the LocalStack container, ensuring smooth interaction between your code and the SQS service.
+
+### Other containers
+
+When your code runs in other containers, such as ECS tasks or your own custom containers, it's advisable to set up your own Docker network.
+You can follow these steps:
+
+1. Override the `LOCALSTACK_HOST` variable as outlined in our [network troubleshooting guide]({{< ref "endpoint-url" >}}).
+2. Ensure that your containers can resolve `LOCALSTACK_HOST` to the LocalStack container within the Docker network.
+3. We recommend employing `SQS_ENDPOINT_STRATEGY=path`, which generates queue URLs in the format `http://<LOCALSTACK_HOST>/queue/...`, as shown in the sketch below.
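+
+A minimal sketch of such a setup (the network and container names are illustrative, not prescribed):
+
+{{< command >}}
+$ docker network create ls-network
+$ docker run -d --name localstack --network ls-network \
+ -e LOCALSTACK_HOST=localstack -e SQS_ENDPOINT_STRATEGY=path \
+ -p 4566:4566 localstack/localstack
+{{< /command >}}
+
+Any other container attached to `ls-network` can then resolve the hostname `localstack` and reach the queue URLs returned by SQS.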
+
+## Developer endpoints
+
+LocalStack's SQS implementation offers additional endpoints for developers located at `/_aws/sqs`.
+These endpoints provide the ability to inspect queues without causing any side effects.
+This can be particularly useful when you need to examine the content of queues without executing a `ReceiveMessage` operation, which would normally remove messages from the queue.
+
+### Peeking into queues
+
+The `/_aws/sqs/messages` endpoint provides access to all messages within a queue without triggering the visibility timeout or modifying access metrics.
+This endpoint is particularly useful in scenarios such as tests, where you need to wait until a specific message arrives in the queue.
+
+The `/_aws/sqs/messages` endpoint is fully compatible with the `ReceiveMessage` operation from the SQS API.
+By default, it returns all messages in the queue along with their attributes and system attributes.
+The endpoint ignores any additional parameters from the `ReceiveMessage` operation, except for the `QueueUrl`.
+
+You can call the `/_aws/sqs/messages` endpoint in two different ways:
+
+1. Using the query argument `QueueUrl`, like this:
+ {{< command >}}
+ $ http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue
+ {{< / command >}}
+
+2. Utilizing the path-based endpoint, as shown in this example:
+ {{< command >}}
+ $ http://localhost.localstack.cloud:4566/_aws/sqs/messages/us-east-1/000000000000/my-queue
+ {{< / command >}}
+
+#### XML response
+
+You can directly call the endpoint to obtain the raw AWS XML response.
+
+{{< tabpane >}}
+{{< tab header="curl" lang="bash" >}}
+curl "http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
+{{< /tab >}}
+{{< tab header="Python Requests" lang="python" >}}
+import requests
+
+response = requests.get(
+ url="http://localhost.localstack.cloud:4566/_aws/sqs/messages",
+ params={"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"},
+)
+print(response.text) # outputs the response XML
+{{< /tab >}}
+{{< / tabpane >}}
+
+An example response is shown below:
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<ReceiveMessageResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
+    <ReceiveMessageResult>
+        <Message>
+            <MessageId>6a736e5d-4997-4895-8c96-b65a2d7dd600</MessageId>
+            <MD5OfBody>5d41402abc4b2a76b9719d911017c592</MD5OfBody>
+            <Body>hello</Body>
+            <Attribute>
+                <Name>SenderId</Name>
+                <Value>000000000000</Value>
+            </Attribute>
+            <Attribute>
+                <Name>SentTimestamp</Name>
+                <Value>1672853965675</Value>
+            </Attribute>
+            <Attribute>
+                <Name>ApproximateReceiveCount</Name>
+                <Value>0</Value>
+            </Attribute>
+            <Attribute>
+                <Name>ApproximateFirstReceiveTimestamp</Name>
+                <Value>1672855121076</Value>
+            </Attribute>
+            <ReceiptHandle>SQS/BACKDOOR/ACCESS</ReceiptHandle>
+        </Message>
+        <Message>
+            <MessageId>173c5aee-503a-4249-90be-159e0d427b48</MessageId>
+            <MD5OfBody>7d793037a0760186574b0282f2f435e7</MD5OfBody>
+            <Body>world</Body>
+            <Attribute>
+                <Name>SenderId</Name>
+                <Value>000000000000</Value>
+            </Attribute>
+            <Attribute>
+                <Name>SentTimestamp</Name>
+                <Value>1672853968176</Value>
+            </Attribute>
+            <Attribute>
+                <Name>ApproximateReceiveCount</Name>
+                <Value>0</Value>
+            </Attribute>
+            <Attribute>
+                <Name>ApproximateFirstReceiveTimestamp</Name>
+                <Value>1672855121076</Value>
+            </Attribute>
+            <ReceiptHandle>SQS/BACKDOOR/ACCESS</ReceiptHandle>
+        </Message>
+    </ReceiveMessageResult>
+    <ResponseMetadata>
+        <RequestId>KR3H1IN3JQ4LO1592IMGK2JLH8HW3J0Y4LRY1TVW2SAFGZFVXJGI</RequestId>
+    </ResponseMetadata>
+</ReceiveMessageResponse>
+```
+
+#### JSON response
+
+You can include the `Accept: application/json` header in your request if you prefer a JSON response.
+
+{{< tabpane >}}
+{{< tab header="curl" lang="bash" >}}
+curl -H "Accept: application/json" \
+ "http://localhost.localstack.cloud:4566/_aws/sqs/messages?QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
+{{< /tab >}}
+{{< tab header="Python Requests" lang="python" >}}
+import requests
+
+response = requests.get(
+ url="http://localhost.localstack.cloud:4566/_aws/sqs/messages",
+ params={"QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"},
+ headers={"Accept": "application/json"},
+)
+print(response.text) # outputs the response JSON
+{{< /tab >}}
+{{< / tabpane >}}
+
+An example response is shown below:
+
+```json
+{
+ "ReceiveMessageResponse": {
+ "ReceiveMessageResult": {
+ "Message": [
+ {
+ "MessageId": "6a736e5d-4997-4895-8c96-b65a2d7dd600",
+ "MD5OfBody": "5d41402abc4b2a76b9719d911017c592",
+ "Body": "hello",
+ "Attribute": [
+ {
+ "Name": "SenderId",
+ "Value": "000000000000"
+ },
+ {
+ "Name": "SentTimestamp",
+ "Value": "1672853965675"
+ },
+ {
+ "Name": "ApproximateReceiveCount",
+ "Value": "0"
+ },
+ {
+ "Name": "ApproximateFirstReceiveTimestamp",
+ "Value": "1672855535794"
+ }
+ ],
+ "ReceiptHandle": "SQS/BACKDOOR/ACCESS"
+ },
+ {
+ "MessageId": "173c5aee-503a-4249-90be-159e0d427b48",
+ "MD5OfBody": "7d793037a0760186574b0282f2f435e7",
+ "Body": "world",
+ "Attribute": [
+ {
+ "Name": "SenderId",
+ "Value": "000000000000"
+ },
+ {
+ "Name": "SentTimestamp",
+ "Value": "1672853968176"
+ },
+ {
+ "Name": "ApproximateReceiveCount",
+ "Value": "0"
+ },
+ {
+ "Name": "ApproximateFirstReceiveTimestamp",
+ "Value": "1672855535794"
+ }
+ ],
+ "ReceiptHandle": "SQS/BACKDOOR/ACCESS"
+ }
+ ]
+ },
+ "ResponseMetadata": {
+ "RequestId": "TF87187MUBXJHA39J4Y6OVQG57J51OEEMX62UWYBUQJKC8YVID3P"
+ }
+ }
+}
+```
+
+#### AWS Client
+
+Since the `/_aws/sqs/messages` endpoint is compatible with the SQS `ReceiveMessage` operation, you can use the endpoint as the endpoint URL parameter in your AWS client call.
+
+{{< tabpane >}}
+{{< tab header="aws-cli" lang="bash" >}}
+aws --endpoint-url=http://localhost.localstack.cloud:4566/_aws/sqs/messages sqs receive-message \
+ --queue-url=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue
+{{< /tab >}}
+{{< tab header="Boto3" lang="python" >}}
+import boto3
+sqs = boto3.client("sqs", endpoint_url="http://localhost.localstack.cloud:4566/_aws/sqs/messages")
+response = sqs.receive_message(QueueUrl="http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue")
+print(response)
+{{< /tab >}}
+{{< / tabpane >}}
+
+An example response is shown below:
+
+```json
+{
+ "Messages": [
+ {
+ "MessageId": "6a736e5d-4997-4895-8c96-b65a2d7dd600",
+ "ReceiptHandle": "SQS/BACKDOOR/ACCESS",
+ "MD5OfBody": "5d41402abc4b2a76b9719d911017c592",
+ "Body": "hello",
+ "Attributes": {
+ "SenderId": "000000000000",
+ "SentTimestamp": "1672853965675",
+ "ApproximateReceiveCount": "0",
+ "ApproximateFirstReceiveTimestamp": "1672854900237"
+ }
+ },
+ {
+ "MessageId": "173c5aee-503a-4249-90be-159e0d427b48",
+ "ReceiptHandle": "SQS/BACKDOOR/ACCESS",
+ "MD5OfBody": "7d793037a0760186574b0282f2f435e7",
+ "Body": "world",
+ "Attributes": {
+ "SenderId": "000000000000",
+ "SentTimestamp": "1672853968176",
+ "ApproximateReceiveCount": "0",
+ "ApproximateFirstReceiveTimestamp": "1672854900237"
+ }
+ }
+ ]
+}
+```
+
+#### Show invisible or delayed messages
+
+The developer endpoint also supports showing invisible and delayed messages via the query arguments `ShowInvisible` and `ShowDelayed`.
+
+{{< tabpane >}}
+{{< tab header="curl" lang="bash" >}}
+curl -H "Accept: application/json" \
+ "http://localhost.localstack.cloud:4566/_aws/sqs/messages?ShowInvisible=true&ShowDelayed=true&QueueUrl=http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue
+{{< /tab >}}
+{{< tab header="Python Requests" lang="python" >}}
+import requests
+
+queue_url = "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/my-queue"
+response = requests.get(
+ "http://localhost.localstack.cloud:4566/_aws/sqs/messages",
+ params={"QueueUrl": queue_url, "ShowInvisible": True, "ShowDelayed": True},
+ headers={"Accept": "application/json"},
+)
+print(response.text)
+{{< /tab >}}
+{{< / tabpane >}}
+
+This will also include messages that currently have an active visibility timeout or were delayed and are not actually in the queue yet.
+Here's an example:
+
+```json
+[
+ {
+ "MessageId": "1c4187cc-f2c9-4f1c-9702-4a3bfaaa4817",
+ "MD5OfBody": "a06498de7fb4bd539c8895748f03175d",
+ "Body": "message-3",
+ "Attribute": [
+ {"Name": "SenderId", "Value": "000000000000"},
+ {"Name": "SentTimestamp", "Value": "1697494407799"},
+ {"Name": "ApproximateReceiveCount", "Value": "0"},
+ {"Name": "ApproximateFirstReceiveTimestamp", "Value": "0"},
+ {"Name": "IsVisible", "Value": "true"}, <--
+ {"Name": "IsDelayed", "Value": "false"}, <--
+ ],
+ "ReceiptHandle": "SQS/BACKDOOR/ACCESS",
+ },
+ ...
+]
+```
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing SQS queues.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **SQS** under the **App Integration** section.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Queue**: Create a new SQS queue by specifying a queue name, optional attributes, and tags.
+- **Send Message**: Send a message to an SQS queue by specifying the queue name, message body, delay seconds, optional message attributes, and more.
+- **View Details and Messages**: View details and messages of an SQS queue by selecting the queue name and navigating to the **Details** and **Messages** tabs.
+- **Delete Queue**: Delete an SQS queue by selecting the queue name and clicking the **Action** button, followed by **Remove Selected**.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use SQS in LocalStack for various use cases:
+
+- [Serverless microservices with Amazon API Gateway, DynamoDB, SQS, and Lambda](https://github.com/localstack/microservices-apigateway-lambda-dynamodb-sqs-sample)
+- [Loan Broker application with AWS Step Functions, DynamoDB, Lambda, SQS, and SNS](https://github.com/localstack/loan-broker-stepfunctions-lambda-app)
+- [Messaging Processing application with SQS, DynamoDB, and Fargate](https://github.com/localstack/sqs-fargate-ddb-cdk-go)
+- [Serverless Transcription application using Transcribe, S3, Lambda, SQS, and SES](https://github.com/localstack/sample-transcribe-app)
+
+## Current Limitations
+
+- Updating a queue's `MessageRetentionPeriod` currently has no effect on existing messages.
diff --git a/src/content/docs/aws/services/ssm.md b/src/content/docs/aws/services/ssm.md
new file mode 100644
index 00000000..60560f38
--- /dev/null
+++ b/src/content/docs/aws/services/ssm.md
@@ -0,0 +1,153 @@
+---
+title: "Systems Manager (SSM)"
+linkTitle: "Systems Manager (SSM)"
+description: Get started with Systems Manager (SSM) on LocalStack
+tags: ["Free"]
+persistence: supported
+---
+
+## Introduction
+
+Systems Manager (SSM) is a management service provided by Amazon Web Services that helps you effectively manage and control your infrastructure resources.
+SSM simplifies tasks related to system and application management, patching, configuration, and automation, allowing you to maintain the health and compliance of your environment.
+
+LocalStack allows you to use the SSM APIs in your local environment to run operational tasks on the Dockerized instances.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_ssm" >}}), which provides information on the extent of SSM's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Systems Manager (SSM) and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method with an additional `EC2_VM_MANAGER=docker` configuration variable.
+We will demonstrate how to use EC2 and SSM functionalities when using the Docker backend with LocalStack with the AWS CLI.
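+
+For example, with the LocalStack CLI:
+
+{{< command >}}
+$ EC2_VM_MANAGER=docker localstack start
+{{< /command >}}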
+
+### Create an EC2 instance
+
+To get started, pull the `ubuntu:focal` image from Docker Hub and tag it as `localstack-ec2/ubuntu-focal-docker-ami:ami-00a001`.
+LocalStack uses a naming scheme to recognise and manage the containers and images associated with it.
+Containers are named `localstack-ec2.<instance_id>`, while images are tagged `localstack-ec2/<ami_name>:<ami_id>`.
+
+{{< command >}}
+$ docker pull ubuntu:focal
+$ docker tag ubuntu:focal localstack-ec2/ubuntu-focal-docker-ami:ami-00a001
+{{< / command >}}
+
+LocalStack's Docker backend treats Docker images with the above naming scheme as AMIs.
+The AMI ID is the last part of the image tag, `ami-00a001` in this case.
+You can run an EC2 instance using the [`RunInstances`](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_RunInstances.html) API.
+Execute the following command to create an EC2 instance using the `ami-00a001` AMI.
+
+{{< command >}}
+$ awslocal ec2 run-instances \
+ --image-id ami-00a001 --count 1
+{{< / command >}}
+
+You will see output similar to the following:
+
+```json
+{
+ ...
+ "Instances": [
+ {
+ ...
+ "InstanceId": "i-abf6920789a06dd84",
+ "InstanceType": "m1.small",
+ ...
+ "SecurityGroups": [],
+ "SourceDestCheck": true,
+ "Tags": [],
+ "VirtualizationType": "paravirtual"
+ }
+ ],
+ "OwnerId": "000000000000",
+ "ReservationId": "r-e9b21a68"
+ ...
+```
+
+You can copy the `InstanceId` value and use it in the following commands.
+
+### Send command using SSM
+
+You can use the [`SendCommand`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html) API to send a command to the EC2 instance.
+The following command runs `cat lsb-release` in the `/etc` directory of the EC2 instance.
+
+{{< command >}}
+$ awslocal ssm send-command --document-name "AWS-RunShellScript" \
+ --document-version "1" \
+ --instance-ids i-abf6920789a06dd84 \
+ --parameters "commands='cat lsb-release',workingDirectory=/etc"
+{{< / command >}}
+
+You will see output similar to the following:
+
+```json
+{
+ "Command": {
+ "CommandId": "23547a9b-6993-4967-9446-f96b9b5dac70",
+ "DocumentName": "AWS-RunShellScript",
+ "DocumentVersion": "1",
+ "InstanceIds": [
+ "i-abf6920789a06dd84"
+ ],
+ "Status": "InProgress"
+ }
+}
+```
+
+You can copy the `CommandId` value and use it in the following commands.
+
+### Retrieve the command output
+
+You can use the [`GetCommandInvocation`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetCommandInvocation.html) API to retrieve the command output.
+The following command retrieves the output of the command sent in the previous step.
+
+{{< command >}}
+$ awslocal ssm get-command-invocation \
+ --command-id 23547a9b-6993-4967-9446-f96b9b5dac70 \
+ --instance-id i-abf6920789a06dd84
+{{< / command >}}
+
+Change the `CommandId` and `InstanceId` values to the ones you received in the previous step.
+You will see output similar to the following:
+
+```json
+{
+ "CommandId": "23547a9b-6993-4967-9446-f96b9b5dac70",
+ "InstanceId": "i-abf6920789a06dd84",
+ "DocumentName": "AWS-RunShellScript",
+ "DocumentVersion": "1",
+ "Status": "Success",
+ "StandardOutputContent": "DISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=20.04\nDISTRIB_CODENAME=focal\nDISTRIB_DESCRIPTION=\"Ubuntu 20.04.6 LTS\"\n",
+ "StandardErrorContent": ""
+}
+```
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing SSM System Parameters.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Simple Systems Manager (SSM)** under the **Management/Governance** section.
+
+
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create System Parameter**: Create a new System Parameter by clicking on the **Create Parameter** button and providing the required details.
+- **View the System Parameter**: View the details of a System Parameter, such as its value, by clicking on the parameter name.
+- **Delete the System Parameter**: Delete a System Parameter by selecting the parameter and clicking on the **Actions** dropdown menu followed by **Remove Selected**.
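+
+The same parameters can also be managed from the CLI; here is a minimal sketch using [`PutParameter`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutParameter.html) and [`GetParameter`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html) (the parameter name is illustrative):
+
+{{< command >}}
+$ awslocal ssm put-parameter --name "/app/config" --value "hello" --type String
+$ awslocal ssm get-parameter --name "/app/config"
+{{< /command >}}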
+
+## Current Limitations
+
+The following table highlights some differences between LocalStack SSM and AWS SSM.
+
+| LocalStack | AWS |
+| ---------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Automated SSM registration for instances | Manual instance registration using [`CreateActivation`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateActivation.html) |
+| Operations performed through Docker exec | Operations facilitated by [Amazon SSM Agent](https://github.com/aws/amazon-ssm-agent) |
+| Instance IDs prefixed with `i-` | Instance IDs prefixed with `mi-` |
+
+The other limitations of LocalStack SSM are:
+
+- Dockerized instances only support `AWS-RunShellScript` commands.
+- [`SendCommand`](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_SendCommand.html) runs only one command per invocation; when multiple commands are passed as an array, all but the first are ignored.
+- Commands returning non-zero codes won't capture standard output or error streams, leaving them empty.
+- Shell constructs such as job controls (`&&`, `||`), and redirection (`>`) are not supported.
diff --git a/src/content/docs/aws/services/stepfunctions.md b/src/content/docs/aws/services/stepfunctions.md
new file mode 100644
index 00000000..d5ae8d21
--- /dev/null
+++ b/src/content/docs/aws/services/stepfunctions.md
@@ -0,0 +1,541 @@
+---
+title: "Step Functions"
+linkTitle: "Step Functions"
+tags: ["Free"]
+description: >
+ Get started with Step Functions on LocalStack
+---
+
+## Introduction
+
+Step Functions is a serverless workflow engine that enables the orchestration of multiple AWS services.
+It provides a JSON-based structured language called Amazon States Language (ASL) which allows you to specify how to manage a sequence of tasks and actions that compose the application's workflow.
+This makes it easier to build and maintain complex, distributed applications.
+
+LocalStack allows you to use the Step Functions APIs in your local environment to create, execute, update, and delete state machines locally.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_stepfunctions" >}}), which provides information on the extent of Step Function's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Step Functions and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can create a state machine, execute it, and check the status of the execution.
+
+### Create a state machine
+
+You can create a state machine using the [`CreateStateMachine`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_CreateStateMachine.html) API.
+The API requires the name of the state machine, the state machine definition, and the role ARN that the state machine will assume to call AWS services.
+Run the following command to create a state machine:
+
+{{< command >}}
+$ awslocal stepfunctions create-state-machine \
+ --name "CreateAndListBuckets" \
+ --definition '{
+ "Comment": "Create bucket and list buckets",
+ "StartAt": "CreateBucket",
+ "States": {
+ "CreateBucket": {
+ "Type": "Task",
+ "Resource": "arn:aws:states:::aws-sdk:s3:createBucket",
+ "Parameters": {
+ "Bucket": "new-sfn-bucket"
+ },
+ "Next": "ListBuckets"
+ },
+ "ListBuckets": {
+ "Type": "Task",
+ "Resource": "arn:aws:states:::aws-sdk:s3:listBuckets",
+ "End": true
+ }
+ }
+ }' \
+ --role-arn "arn:aws:iam::000000000000:role/stepfunctions-role"
+{{< /command >}}
+
+The output of the above command is the ARN of the state machine:
+
+```json
+{
+ "stateMachineArn": "arn:aws:states:us-east-1:000000000000:stateMachine:CreateAndListBuckets",
+ "creationDate": 1714643996.18017
+}
+```
+
+### Execute the state machine
+
+You can execute the state machine using the [`StartExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html) API.
+The API requires the state machine's ARN and the state machine's input.
+Run the following command to execute the state machine:
+
+{{< command >}}
+$ awslocal stepfunctions start-execution \
+ --state-machine-arn "arn:aws:states:us-east-1:000000000000:stateMachine:CreateAndListBuckets"
+{{< /command >}}
+
+The output of the above command is the execution ARN:
+
+```json
+{
+ "executionArn": "arn:aws:states:us-east-1:000000000000:execution:CreateAndListBuckets:bf7d2138-e96f-42d1-b1f9-41f0c1c7bc3e",
+ "startDate": 1714644089.748442
+}
+```
+
+### Check the execution status
+
+To check the status of the execution, you can use the [`DescribeExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) API.
+Run the following command to describe the execution:
+
+{{< command >}}
+$ awslocal stepfunctions describe-execution \
+ --execution-arn "arn:aws:states:us-east-1:000000000000:execution:CreateAndListBuckets:bf7d2138-e96f-42d1-b1f9-41f0c1c7bc3e"
+{{< /command >}}
+
+Replace the `execution-arn` with the ARN of the execution you want to describe.
+
+The output of the above command is the execution status:
+
+```json
+{
+ "executionArn": "arn:aws:states:us-east-1:000000000000:execution:CreateAndListBuckets:bf7d2138-e96f-42d1-b1f9-41f0c1c7bc3e",
+ "stateMachineArn": "arn:aws:states:us-east-1:000000000000:stateMachine:CreateAndListBuckets",
+ "name": "bf7d2138-e96f-42d1-b1f9-41f0c1c7bc3e",
+ "status": "SUCCEEDED",
+ "startDate": 1714644089.748442,
+ "stopDate": 1714644089.907964,
+ "input": "{}",
+ "inputDetails": {
+ "included": true
+ },
+ "output": "{\"Buckets\":[{\"Name\":\"cdk-hnb659fds-assets-000000000000-us-east-1\",\"CreationDate\":\"2024-05-02T09:53:54+00:00\"},{\"Name\":\"new-sfn-bucket\",\"CreationDate\":\"2024-05-02T10:01:29+00:00\"}],\"Owner\":{\"DisplayName\":\"webfile\",\"Id\":\"75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a\"}}",
+ "outputDetails": {
+ "included": true
+ }
+}
+```
+
+## Supported services and operations
+
+Step Functions integrates with AWS services, allowing you to invoke API actions for each service within your workflow.
+LocalStack's Step Functions emulation supports the following AWS services:
+
+| Supported service integrations | Service | Request Response | Run a Job (.sync) | Run a Job (.sync2) | Wait for Callback (.waitForTaskToken) |
+|--------------------------------|-------------------------|:---: |:---: |:---: |:---: |
+| Optimized integrations | Lambda | ✓ | | | ✓ |
+| | DynamoDB | ✓ | | | |
+| | Amazon ECS/AWS Fargate | ✓ | ✓ | | ✓ |
+| | Amazon SNS | ✓ | | | ✓ |
+| | Amazon SQS | ✓ | | | ✓ |
+| | API Gateway | ✓ | | | ✓ |
+| | Amazon EventBridge | ✓ | | | ✓ |
+| | AWS Glue | ✓ | ✓ | | |
+| | AWS Step Functions | ✓ | ✓ | ✓ | ✓ |
+| | AWS Batch | ✓ | ✓ | | |
+| AWS SDK integrations | All LocalStack services | ✓ | | | ✓ |
+
+## Mocked Service Integrations
+
+Mocked service integrations let you test AWS Step Functions without invoking LocalStack’s emulated AWS services.
+Instead, Task states return predefined outputs from a mock configuration file.
+
+The key components are:
+
+- **Mocked service integrations**: Task states that return predefined responses instead of calling local AWS services.
+- **Mocked responses**: Static payloads linked to mocked Task states.
+- **Test cases**: Executions of your state machine that use mocked responses.
+- **Mock configuration file**: A JSON file that defines test cases, mocked states, and their response payloads.
+
+During execution, each Task state listed in the mock file returns its associated mocked response.
+States not included in the file continue to invoke the corresponding emulated services, allowing a mix of mocked and real interactions.
+
+You can define one or more mocked payloads per Task state.
+
+Supported integration patterns include `.sync`, `.sync2`, and `.waitForTaskToken`.
+
+Both success and failure scenarios can be simulated.
+
+### Compatibility with AWS Step Functions Local
+
+LocalStack can also serve as a drop-in replacement for [AWS Step Functions Local testing with mocked service integrations](https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local-test-sm-exec.html).
+It supports test cases with mocked Task states and maintains compatibility with existing Step Functions Local configurations.
+LocalStack extends this functionality by providing access to the latest Step Functions features, such as [JSONata and Variables](https://blog.localstack.cloud/aws-step-functions-made-easy/), as well as the ability to mix mocked responses with interactions against the services emulated by LocalStack.
+
+{{< callout >}}
+LocalStack does not validate response formats.
+Ensure the payload structure in the mocked responses matches what the real service expects.
+{{< /callout >}}
+
+### Identify a State Machine for Mocked Integrations
+
+Mocked service integrations apply to specific state machine definitions.
+The first step is to select the state machine where mocked responses should be applied.
+
+In this example, we'll use a state machine named `LambdaSQSIntegration`, defined as follows:
+
+```json
+{
+ "Comment": "This state machine is called: LambdaSQSIntegration",
+ "QueryLanguage": "JSONata",
+ "StartAt": "LambdaState",
+ "States": {
+ "LambdaState": {
+ "Type": "Task",
+ "Resource": "arn:aws:states:::lambda:invoke",
+ "Arguments": {
+ "FunctionName": "GreetingsFunction",
+ "Payload": {
+ "fullname": "{% $states.input.name & ' ' & $states.input.surname %}"
+ }
+ },
+ "Retry": [
+ {
+ "ErrorEquals": [ "States.ALL" ],
+ "IntervalSeconds": 2,
+ "MaxAttempts": 4,
+ "BackoffRate": 2
+ }
+ ],
+ "Assign": {
+ "greeting": "{% $states.result.Payload.greeting %}"
+ },
+ "Next": "SQSState"
+ },
+ "SQSState": {
+ "Type": "Task",
+ "Resource": "arn:aws:states:::sqs:sendMessage",
+ "Arguments": {
+ "QueueUrl": "http://sqs.us-east-1.localhost.localstack.cloud:4566/000000000000/localstack-queue",
+ "MessageBody": "{% $greeting %}"
+ },
+ "End": true
+ }
+ }
+}
+```
+
+### Define Mock Integrations in a Configuration File
+
+Mock integrations are defined in a JSON file that follows the `RawMockConfig` schema.
+
+This file contains two top-level sections:
+
+- **StateMachines** – Maps each state machine to its test cases, specifying which states use which mocked responses.
+- **MockedResponses** – Defines reusable mock payloads, each identified by a `ResponseID`, which test cases can reference.
+
+#### `StateMachines`
+
+This section specifies the Step Functions state machines to mock, along with their corresponding test cases.
+
+Each test case maps state names to `ResponseID`s defined in the `MockedResponses` section.
+
+```json
+"StateMachines": {
+ "": {
+ "TestCases": {
+ "": {
+ "": "",
+ ...
+ }
+ }
+ }
+}
+```
+
+In the example above:
+
+- **`StateMachineName`**: Must exactly match the name used when the state machine was created in LocalStack.
+- **`TestCases`**: Named scenarios that define mocked behavior for specific `Task` states.
+
+Each test case maps `Task` states to mock responses that define their expected behavior.
+
+At runtime, if a test case is selected, the state uses the mocked response (if defined); otherwise, it falls back to calling the emulated service.
+
+Below is a complete example of the `StateMachines` section:
+
+```json
+"LambdaSQSIntegration": {
+ "TestCases": {
+ "LambdaRetryCase": {
+ "LambdaState": "MockedLambdaStateRetry",
+ "SQSState": "MockedSQSStateSuccess"
+ }
+ }
+}
+```
+
+#### `MockedResponses`
+
+This section defines mocked responses for Task states.
+
+Each `ResponseID` includes one or more step keys and defines either a `Return` value or a `Throw` error.
+
+```json
+"MockedResponses": {
+ "": {
+ "": { "Return": ... },
+ "": { "Throw": ... }
+ }
+}
+```
+
+In the example above:
+
+- `ResponseID`: A unique identifier used in test cases to reference a specific mock response.
+- `step-key`: Indicates the attempt number.
+ For example, `"0"` refers to the first try, while `"1-2"` covers a range of attempts.
+- `Return`: Simulates a successful response by returning a predefined payload.
+- `Throw`: Simulates a failure by returning an `Error` and an optional `Cause`.
+
+{{< callout >}}
+Each entry must have **either** `Return` or `Throw`, but cannot have both.
+{{< /callout >}}
+
+Here is a complete example of the `MockedResponses` section:
+
+```json
+"MockedLambdaStateRetry": {
+ "0": {
+ "Throw": {
+ "Error": "Lambda.ServiceException",
+ "Cause": "An internal service error occurred."
+ }
+ },
+ "1-2": {
+ "Throw": {
+ "Error": "Lambda.TooManyRequestsException",
+ "Cause": "Invocation rate limit exceeded."
+ }
+ },
+ "3": {
+ "Return": {
+ "StatusCode": 200,
+ "Payload": {
+ "greeting": "Hello John Smith, you’re now testing mocked integrations with LocalStack!"
+ }
+ }
+ }
+}
+```
+
+The `MockConfigFile.json` below is used to test the `LambdaSQSIntegration` state machine defined earlier.
+
+```json
+{
+ "StateMachines":{
+ "LambdaSQSIntegration":{
+ "TestCases":{
+ "BaseCase":{
+ "LambdaState":"MockedLambdaStateSuccess",
+ "SQSState":"MockedSQSStateSuccess"
+ },
+ "LambdaRetryCase":{
+ "LambdaState":"MockedLambdaStateRetry",
+ "SQSState":"MockedSQSStateSuccess"
+ },
+ "HybridCase":{
+ "LambdaState":"MockedLambdaSuccess"
+ }
+ }
+ }
+ },
+ "MockedResponses":{
+ "MockedLambdaStateSuccess":{
+ "0":{
+ "Return":{
+ "StatusCode":200,
+ "Payload":{
+ "greeting":"Hello John Smith, you’re now testing mocked integrations with LocalStack!"
+ }
+ }
+ }
+ },
+ "MockedSQSStateSuccess":{
+ "0":{
+ "Return":{
+ "MD5OfMessageBody":"3661896f-1287-45a3-8f89-53bd7b25a9a6",
+ "MessageId":"7c9ef661-c455-4779-a9c2-278531e231c2"
+ }
+ }
+ },
+ "MockedLambdaStateRetry":{
+ "0":{
+ "Throw":{
+ "Error":"Lambda.ServiceException",
+ "Cause":"An internal service error occurred."
+ }
+ },
+ "1-2":{
+ "Throw":{
+ "Error":"Lambda.TooManyRequestsException",
+ "Cause":"Invocation rate limit exceeded."
+ }
+ },
+ "3":{
+ "Return":{
+ "StatusCode":200,
+ "Payload":{
+ "greeting":"Hello John Smith, you’re now testing mocked integrations with LocalStack!"
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+### Provide the Mock Configuration to LocalStack
+
+Set the `SFN_MOCK_CONFIG` environment variable to the path of your mock configuration file.
+
+If you're running LocalStack in Docker, mount the file and pass the variable as shown below:
+
+{{< tabpane >}}
+{{< tab header="LocalStack CLI" lang="shell" >}}
+LOCALSTACK_SFN_MOCK_CONFIG=/tmp/MockConfigFile.json \
+localstack start --volume /path/to/MockConfigFile.json:/tmp/MockConfigFile.json
+{{< /tab >}}
+{{< tab header="Docker Compose" lang="yaml" >}}
+services:
+ localstack:
+ container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
+ image: localstack/localstack
+ ports:
+ - "127.0.0.1:4566:4566" # LocalStack Gateway
+ - "127.0.0.1:4510-4559:4510-4559" # external services port range
+ environment:
+ # LocalStack configuration: https://docs.localstack.cloud/references/configuration/
+ - DEBUG=${DEBUG:-0}
+ - SFN_MOCK_CONFIG=/tmp/MockConfigFile.json
+ volumes:
+ - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
+ - "/var/run/docker.sock:/var/run/docker.sock"
+ - "./MockConfigFile.json:/tmp/MockConfigFile.json"
+{{< /tab >}}
+{{< /tabpane >}}
+
+### Run Test Cases with Mocked Integrations
+
+Create the state machine with a name that matches the one defined in the mock configuration file.
+
+In this example, create the `LambdaSQSIntegration` state machine using:
+
+{{< command >}}
+$ awslocal stepfunctions create-state-machine \
+ --definition file://LambdaSQSIntegration.json \
+ --name "LambdaSQSIntegration" \
+ --role-arn "arn:aws:iam::000000000000:role/service-role/testrole"
+{{< /command >}}
+
+After the state machine is created and correctly named, you can run test cases defined in the mock configuration file using the [`StartExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_StartExecution.html) API.
+
+To execute a test case, append the test case name to the state machine ARN using `#`.
+
+This tells LocalStack to apply the corresponding mocked responses from the configuration file.
+
+For example, to run the `BaseCase` test case:
+
+{{< command >}}
+$ awslocal stepfunctions start-execution \
+ --state-machine-arn arn:aws:states:us-east-1:000000000000:stateMachine:LambdaSQSIntegration#BaseCase \
+ --input '{"name": "John", "surname": "smith"}' \
+ --name "MockExecutionBaseCase"
+{{< /command >}}
+
+During execution, any state mapped in the mock config will use the predefined response.
+States without mock entries invoke the actual emulated service as usual.
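+
+For instance, running the `HybridCase` test case defined above mocks only the Lambda state, while the SQS state sends a real message to the emulated SQS service (assuming the queue targeted by the state machine exists locally):
+
+{{< command >}}
+$ awslocal stepfunctions start-execution \
+ --state-machine-arn arn:aws:states:us-east-1:000000000000:stateMachine:LambdaSQSIntegration#HybridCase \
+ --input '{"name": "John", "surname": "smith"}' \
+ --name "MockExecutionHybridCase"
+{{< /command >}}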
+
+You can inspect the execution using the [`DescribeExecution`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_DescribeExecution.html) API:
+
+{{< command >}}
+$ awslocal stepfunctions describe-execution \
+ --execution-arn "arn:aws:states:us-east-1:000000000000:execution:LambdaSQSIntegration:MockExecutionBaseCase"
+{{< /command >}}
+
+The sample output shows the execution details, including the state machine ARN, execution ARN, status, start and stop dates, input, and output:
+
+```json
+{
+ "executionArn": "arn:aws:states:us-east-1:000000000000:execution:LambdaSQSIntegration:MockExecutionBaseCase",
+ "stateMachineArn": "arn:aws:states:us-east-1:000000000000:stateMachine:LambdaSQSIntegration",
+ "name": "MockExecutionBaseCase",
+ "status": "SUCCEEDED",
+ "startDate": "...",
+ "stopDate": "...",
+ "input": "{\"name\":\"John\",\"surname\":\"smith\"}",
+ "inputDetails": {
+ "included": true
+ },
+ "output": "{\"MessageId\":\"7c9ef661-c455-4779-a9c2-278531e231c2\",\"MD5OfMessageBody\":\"3661896f-1287-45a3-8f89-53bd7b25a9a6\"}",
+ "outputDetails": {
+ "included": true
+ }
+}
+```
+
+You can also use the [`GetExecutionHistory`](https://docs.aws.amazon.com/step-functions/latest/apireference/API_GetExecutionHistory.html) API to retrieve the execution history, including the events and their details.
+
+{{< command >}}
+$ awslocal stepfunctions get-execution-history \
+ --execution-arn "arn:aws:states:us-east-1:000000000000:execution:LambdaSQSIntegration:MockExecutionBaseCase"
+{{< /command >}}
+
+This will return the full execution history, including entries that indicate how mocked responses were applied to Lambda and SQS states.
+
+```json
+...
+{
+ "timestamp": "...",
+ "type": "TaskSucceeded",
+ "id": 5,
+ "previousEventId": 4,
+ "taskSucceededEventDetails": {
+ "resourceType": "lambda",
+ "resource": "invoke",
+ "output": "{\"StatusCode\": 200, \"Payload\": {\"greeting\": \"Hello John Smith, you\\u2019re now testing mocked integrations with LocalStack!\"}}",
+ "outputDetails": {
+ "truncated": false
+ }
+ }
+}
+...
+{
+ "timestamp": "...",
+ "type": "TaskSucceeded",
+ "id": 10,
+ "previousEventId": 9,
+ "taskSucceededEventDetails": {
+ "resourceType": "sqs",
+ "resource": "sendMessage",
+ "output": "{\"MessageId\": \"7c9ef661-c455-4779-a9c2-278531e231c2\", \"MD5OfMessageBody\": \"3661896f-1287-45a3-8f89-53bd7b25a9a6\"}",
+ "outputDetails": {
+ "truncated": false
+ }
+ }
+}
+...
+```
+
+## Resource Browser
+
+The LocalStack Web Application includes a **Resource Browser** for managing Step Functions state machines.
+
+To access it, open the LocalStack Web UI in your browser, navigate to the **Resource Browser** section, and click **Step Functions** under **App Integration**.
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create state machine**: Create a new state machine by clicking on the **Create state machine** button and providing the required information.
+- **View state machine details**: Click on a state machine to view its details, including its executions, its definition (schema and flowchart), and its ARN.
+- **Start execution**: Start a new execution of the state machine by clicking on the **Start Execution** button and providing the input data.
+- **Delete state machine**: Delete a state machine by selecting it and clicking on the **Actions** button followed by **Remove Selected** button.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Step Functions in LocalStack for various use cases:
+
+- [Loan Broker application with AWS Step Functions, DynamoDB, Lambda, SQS, and SNS](https://github.com/localstack/loan-broker-stepfunctions-lambda-app)
+- [Integrating Step Functions with local Lambda functions on LocalStack](https://github.com/localstack/localstack-pro-samples/tree/master/stepfunctions-lambda)
diff --git a/src/content/docs/aws/services/sts.md b/src/content/docs/aws/services/sts.md
new file mode 100644
index 00000000..c8e0dd52
--- /dev/null
+++ b/src/content/docs/aws/services/sts.md
@@ -0,0 +1,172 @@
+---
+title: "Security Token Service (STS)"
+linkTitle: "Security Token Service (STS)"
+description: Get started with Security Token Service on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Security Token Service (STS) is a service provided by Amazon Web Services (AWS) that enables you to grant temporary, limited-privilege credentials to users and applications.
+STS enables fine-grained access control and reduces the exposure of your long-term credentials.
+The temporary credentials, known as security tokens, can be used to access AWS services and resources based on the permissions specified in the associated policies.
+
+LocalStack allows you to use the STS APIs in your local environment to request security tokens, manage permissions, integrate with identity providers, and more.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_sts" >}}), which provides information on the extent of STS's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to STS and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create an IAM user, fetch temporary credentials, and create and assume an IAM role using STS with the AWS CLI.
+
+### Create an IAM User and get temporary Credentials
+
+You can create an IAM User using the [`CreateUser`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateUser.html) API.
+The IAM User will be used to assume the IAM Role.
+Run the following command to create an IAM User, named `localstack-user`:
+
+{{< command >}}
+$ awslocal iam create-user \
+ --user-name localstack-user
+{{< /command >}}
+
+You can generate long-term access keys for the IAM user using the [`CreateAccessKey`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateAccessKey.html) API.
+Run the following command to create an access key for the IAM user:
+
+{{< command >}}
+$ awslocal iam create-access-key \
+ --user-name localstack-user
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "AccessKey": {
+ "UserName": "localstack-user",
+ "AccessKeyId": "ACCESS_KEY_ID",
+ "Status": "Active",
+ "SecretAccessKey": "SECRET_ACCESS_KEY",
+ "CreateDate": "2023-08-24T17:16:16Z"
+ }
+}
+```
+
+Using STS, you can also fetch temporary credentials for this user using the [`GetSessionToken`](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html) API.
+Run the following command using your long-term credentials to get your temporary credentials:
+
+{{< command >}}
+$ awslocal sts get-session-token
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Credentials": {
+ "AccessKeyId": "ACCESS_KEY_ID",
+ "SecretAccessKey": "SECRET_ACCESS_KEY",
+ "SessionToken": "SESSION_TOKEN",
+ "Expiration": "TIMESTAMP"
+ }
+}
+```
+
+### Create an IAM Role
+
+You can now create an IAM Role, named `localstack-role`, using the [`CreateRole`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_CreateRole.html) API.
+Run the following command to create the IAM Role:
+
+{{< command >}}
+$ awslocal iam create-role \
+ --role-name localstack-role \
+ --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::000000000000:root"},"Action":"sts:AssumeRole"}]}'
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Role": {
+ "Path": "/",
+ "RoleName": "localstack-role",
+ "RoleId": "AROAQAAAAAAAEDP262HSR",
+ "Arn": "arn:aws:iam::000000000000:role/localstack-role",
+ "CreateDate": "2023-08-24T17:17:13.632000Z",
+ "AssumeRolePolicyDocument": {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "arn:aws:iam::000000000000:root"
+ },
+ "Action": "sts:AssumeRole"
+ }
+ ]
+ }
+ }
+}
+```
+
+You can attach a policy to the IAM role using the [`AttachRolePolicy`](https://docs.aws.amazon.com/IAM/latest/APIReference/API_AttachRolePolicy.html) API.
+Run the following command to attach the `AdministratorAccess` managed policy to the IAM role:
+
+{{< command >}}
+$ awslocal iam attach-role-policy \
+ --role-name localstack-role \
+ --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
+{{< /command >}}
+
+### Assume an IAM Role
+
+You can assume an IAM Role using the [`AssumeRole`](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) API.
+Run the following command to assume the IAM Role:
+
+{{< command >}}
+$ awslocal sts assume-role \
+ --role-arn arn:aws:iam::000000000000:role/localstack-role \
+ --role-session-name localstack-session
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "Credentials": {
+ "AccessKeyId": "ACCESS_KEY_ID",
+ "SecretAccessKey": "SECRET_ACCESS_KEY",
+ "SessionToken": "SESSION_TOKEN",
+ "Expiration": "TIMESTAMP",
+ },
+ "AssumedRoleUser": {
+ "AssumedRoleId": "AROAQAAAAAAAEDP262HSR:localstack-session",
+ "Arn": "arn:aws:sts::000000000000:assumed-role/localstack-role/localstack-session"
+ },
+ "PackedPolicySize": 6
+}
+```
+
+You can use the temporary credentials in your applications for temporary access.
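+
+For example, a quick sanity check is to export the returned values (placeholders shown here) as the standard AWS environment variables and inspect the resulting identity:
+
+{{< command >}}
+$ export AWS_ACCESS_KEY_ID=ACCESS_KEY_ID
+$ export AWS_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY
+$ export AWS_SESSION_TOKEN=SESSION_TOKEN
+$ awslocal sts get-caller-identity
+{{< /command >}}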
+
+### Get caller identity
+
+You can identify the principal that your current credentials belong to using the [`GetCallerIdentity`](https://docs.aws.amazon.com/STS/latest/APIReference/API_GetCallerIdentity.html) API.
+Run the following command to get the caller identity for the credentials set in your environment:
+
+{{< command >}}
+$ awslocal sts get-caller-identity
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "UserId": "AKIAIOSFODNN7EXAMPLE",
+ "Account": "000000000000",
+ "Arn": "arn:aws:iam::000000000000:root"
+}
+```
diff --git a/src/content/docs/aws/services/support.md b/src/content/docs/aws/services/support.md
new file mode 100644
index 00000000..63e61890
--- /dev/null
+++ b/src/content/docs/aws/services/support.md
@@ -0,0 +1,107 @@
+---
+title: "Support"
+linkTitle: "Support"
+description: Get started with Support on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+AWS Support is a service provided by Amazon Web Services (AWS) that offers technical assistance and resources to help you optimize your AWS environment, troubleshoot issues, and maintain operational efficiency.
+Support APIs provide programmatic access to AWS Support services, including the ability to create and manage support cases programmatically.
+You can further automate your support workflow using various AWS services, such as Lambda, CloudWatch, and EventBridge.
+
+LocalStack allows you to use the Support APIs in your local environment to create and manage new cases, while testing your configurations locally.
+LocalStack provides a mock Support Center implementation powered by [Moto](https://docs.getmoto.org/en/latest/docs/services/support.html), and does not create real cases in AWS.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_support" >}}), which provides information on the extent of Support API's integration with LocalStack.
+
+{{< callout >}}
+For technical support with LocalStack, you can reach out through our [support channels]({{< ref "help-and-support" >}}).
+It's important to note that LocalStack doesn't offer a programmatic interface to create support cases, and this documentation is only intended to demonstrate how you can use and mock the AWS Support APIs in your local environment.
+{{< /callout >}}
+
+## Getting started
+
+This guide is designed for users new to Support and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can create a case in the mock Support Center using the AWS CLI.
+
+### Create a support case
+
+To create a support case, you can use the [`CreateCase`](https://docs.aws.amazon.com/goto/WebAPI/support-2013-04-15/CreateCase) API.
+The following example creates a case with the subject "Test case" and the description "This is a test case" in the category "General guidance".
+
+{{< command >}}
+$ awslocal support create-case \
+ --subject "Test case" \
+ --service-code "general-guidance" \
+ --category-code "general-guidance" \
+ --communication-body "This is a test case"
+{{< / command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "caseId": "case-12345678910-2020-kEa16f90bJE766J4"
+}
+```
+
+### List support cases
+
+To list all support cases, you can use the [`DescribeCases`](https://docs.aws.amazon.com/awssupport/latest/APIReference/API_DescribeCases.html) API.
+The following example lists all support cases.
+
+{{< command >}}
+$ awslocal support describe-cases
+{{< / command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "cases": [
+ {
+ "caseId": "case-12345678910-2020-kEa16f90bJE766J4",
+ ...
+ "submittedBy": "moto@moto.com",
+ "timeCreated": "2023-08-24T18:03:08.895247",
+ "recentCommunications": {
+ "communications": [
+ {
+ "caseId": "case-12345678910-2020-kEa16f90bJE766J4",
+ "body": "This is a test case",
+ "submittedBy": "moto@moto.com",
+ ...
+ }
+ ],
+ "nextToken": "foo_next_token"
+ }
+ }
+ ]
+}
+```
+
+### Resolve a support case
+
+To resolve a support case, you can use the [`ResolveCase`](https://docs.aws.amazon.com/goto/WebAPI/support-2013-04-15/ResolveCase) API.
+The following example resolves the case created in the previous step.
+
+{{< command >}}
+$ awslocal support resolve-case \
+ --case-id "case-12345678910-2020-kEa16f90bJE766J4"
+{{< / command >}}
+
+Replace the case ID with the ID of the case you want to resolve.
+The following output would be retrieved:
+
+```bash
+{
+ "initialCaseStatus": "resolved",
+ "finalCaseStatus": "resolved"
+}
+```
+
+You can also use the [`DescribeCases`](https://docs.aws.amazon.com/awssupport/latest/APIReference/API_DescribeCases.html) API to verify that the case has been resolved.
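+
+In AWS, resolved cases are excluded from the results by default; you can pass the `--include-resolved-cases` flag to include them:
+
+{{< command >}}
+$ awslocal support describe-cases --include-resolved-cases
+{{< / command >}}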
diff --git a/src/content/docs/aws/services/swf.md b/src/content/docs/aws/services/swf.md
new file mode 100644
index 00000000..04038589
--- /dev/null
+++ b/src/content/docs/aws/services/swf.md
@@ -0,0 +1,200 @@
+---
+title: "Simple Workflow Service (SWF)"
+linkTitle: "Simple Workflow Service (SWF)"
+description: >
+ Get started with Simple Workflow Service (SWF) on LocalStack
+tags: ["Free"]
+---
+
+## Introduction
+
+Simple Workflow Service (SWF) is a fully managed service offered by Amazon Web Services (AWS) that enables you to build and manage applications with distributed components and complex workflows.
+SWF allows you to define workflows in a way that's separate from the actual application code, making it easier to modify and adapt workflows without changing the application logic.
+SWF also provides a programming framework to design, coordinate, and execute workflows that involve multiple tasks, steps, and decision points.
+
+LocalStack allows you to use the SWF APIs in your local environment to monitor and manage workflow design, task coordination, activity implementation, and error handling.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_swf" >}}), which provides information on the extent of SWF's integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Simple Workflow Service and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to register an SWF domain and workflow using the AWS CLI.
+
+### Registering a domain
+
+You can register an SWF domain using the [`RegisterDomain`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RegisterDomain.html) API.
+Execute the following command to register a domain named `test-domain`:
+
+{{< command >}}
+$ awslocal swf register-domain \
+ --name test-domain \
+ --workflow-execution-retention-period-in-days 1
+{{< /command >}}
+
+You can use the [`DescribeDomain`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DescribeDomain.html) API to verify that the domain was registered successfully.
+Run the following command to describe the `test-domain` domain:
+
+{{< command >}}
+$ awslocal swf describe-domain \
+ --name test-domain
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "domainInfo": {
+ "name": "test-domain",
+ "status": "REGISTERED",
+ "arn": "arn:aws:swf:us-east-1:000000000000:/domain/test-domain"
+ },
+ "configuration": {
+ "workflowExecutionRetentionPeriodInDays": "1"
+ }
+}
+```
+
+### Listing domains
+
+You can list all registered domains using the [`ListDomains`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_ListDomains.html) API.
+Run the following command to list all registered domains:
+
+{{< command >}}
+$ awslocal swf list-domains --registration-status REGISTERED
+{{< /command >}}
+
+To deprecate a domain, use the [`DeprecateDomain`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DeprecateDomain.html) API.
+Run the following command to deprecate the `test-domain` domain:
+
+{{< command >}}
+$ awslocal swf deprecate-domain \
+ --name test-domain
+{{< /command >}}
+
+You can now list the deprecated domains using the `--registration-status DEPRECATED` flag:
+
+{{< command >}}
+$ awslocal swf list-domains --registration-status DEPRECATED
+{{< /command >}}
+
+### Registering a workflow
+
+You can register a workflow using the [`RegisterWorkflowType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RegisterWorkflowType.html) API.
+Execute the following command to register a workflow named `test-workflow`:
+
+{{< command >}}
+$ awslocal swf register-workflow-type \
+ --domain test-domain \
+ --name test-workflow \
+ --default-task-list name=test-task-list \
+ --default-task-start-to-close-timeout 30 \
+ --default-execution-start-to-close-timeout 60 \
+ --default-child-policy TERMINATE \
+ --workflow-version "1.0"
+{{< /command >}}
+
+You can use the [`DescribeWorkflowType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DescribeWorkflowType.html) API to verify that the workflow was registered successfully.
+Run the following command to describe the `test-workflow` workflow:
+
+{{< command >}}
+$ awslocal swf describe-workflow-type \
+ --domain test-domain \
+ --workflow-type name=test-workflow,version=1.0
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "typeInfo": {
+ "workflowType": {
+ "name": "test-workflow",
+ "version": "1.0"
+ },
+ "status": "REGISTERED",
+ "creationDate": 1420066800.0
+ },
+ "configuration": {
+ "defaultTaskStartToCloseTimeout": "30",
+ "defaultExecutionStartToCloseTimeout": "60",
+ "defaultTaskList": {
+ "name": "test-task-list"
+ },
+ "defaultChildPolicy": "TERMINATE"
+ }
+}
+```
+
+### Registering an activity
+
+You can register an activity using the [`RegisterActivityType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_RegisterActivityType.html) API.
+Execute the following command to register an activity named `test-activity`:
+
+{{< command >}}
+$ awslocal swf register-activity-type \
+ --domain test-domain \
+ --name test-activity \
+ --default-task-list name=test-task-list \
+ --default-task-start-to-close-timeout 30 \
+ --default-task-heartbeat-timeout 30 \
+ --default-task-schedule-to-start-timeout 30 \
+ --default-task-schedule-to-close-timeout 30 \
+ --activity-version "1.0"
+{{< /command >}}
+
+You can use the [`DescribeActivityType`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_DescribeActivityType.html) API to verify that the activity was registered successfully.
+Run the following command to describe the `test-activity` activity:
+
+{{< command >}}
+$ awslocal swf describe-activity-type \
+ --domain test-domain \
+ --activity-type name=test-activity,version=1.0
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "typeInfo": {
+ "activityType": {
+ "name": "test-activity",
+ "version": "1.0"
+ },
+ "status": "REGISTERED",
+ "creationDate": 1420066800.0
+ },
+ "configuration": {
+ "defaultTaskStartToCloseTimeout": "30",
+ "defaultTaskHeartbeatTimeout": "30",
+ "defaultTaskList": {
+ "name": "test-task-list"
+ },
+ "defaultTaskScheduleToStartTimeout": "30",
+ "defaultTaskScheduleToCloseTimeout": "30"
+ }
+}
+```
+
+### Starting a workflow execution
+
+You can start a workflow execution using the [`StartWorkflowExecution`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_StartWorkflowExecution.html) API.
+Execute the following command to start a workflow execution for the `test-workflow` workflow:
+
+{{< command >}}
+$ awslocal swf start-workflow-execution \
+ --domain test-domain \
+ --workflow-type name=test-workflow,version=1.0 \
+ --workflow-id test-workflow-id \
+ --task-list name=test-task-list \
+ --input '{"foo": "bar"}'
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "runId": "0602601afc71403abb934d8094c51668"
+}
+```
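+
+At this point, a decider process would poll the task list for decision tasks.
+Below is a minimal sketch using the [`PollForDecisionTask`](https://docs.aws.amazon.com/amazonswf/latest/apireference/API_PollForDecisionTask.html) API; the `--identity` value is an arbitrary label:
+
+{{< command >}}
+$ awslocal swf poll-for-decision-task \
+ --domain test-domain \
+ --task-list name=test-task-list \
+ --identity "local-decider"
+{{< /command >}}
+
+Note that this call long-polls and can take up to 60 seconds to return if no decision task is available.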
diff --git a/src/content/docs/aws/services/textract.md b/src/content/docs/aws/services/textract.md
new file mode 100644
index 00000000..0f218da3
--- /dev/null
+++ b/src/content/docs/aws/services/textract.md
@@ -0,0 +1,90 @@
+---
+title: "Textract"
+linkTitle: "Textract"
+description: Get started with Textract on LocalStack
+tags: ["Ultimate"]
+persistence: supported
+---
+
+## Introduction
+
+Textract is a machine learning service that automatically extracts text, forms, and tables from scanned documents.
+It simplifies the process of extracting valuable information from a variety of document types, enabling applications to quickly analyze and understand document content.
+
+LocalStack allows you to mock Textract APIs in your local environment.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_textract" >}}), providing details on the extent of Textract's integration with LocalStack.
+
+## Getting started
+
+This guide is tailored for users new to Textract and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to perform basic Textract operations, such as mocking text detection in a document.
+
+### Detect document text
+
+You can use the [`DetectDocumentText`](https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html) API to identify and extract text from a document.
+Execute the following command:
+
+{{< command >}}
+$ awslocal textract detect-document-text \
+ --document '{"S3Object":{"Bucket":"your-bucket","Name":"your-document"}}'
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "DocumentMetadata": {
+ "Pages": {
+ "Pages": 389
+ }
+ },
+ "Blocks": [],
+ "DetectDocumentTextModelVersion": "1.0"
+}
+```
+
+### Start document text detection job
+
+You can use the [`StartDocumentTextDetection`](https://docs.aws.amazon.com/textract/latest/dg/API_StartDocumentTextDetection.html) API to asynchronously detect text in a document.
+Execute the following command:
+
+{{< command >}}
+$ awslocal textract start-document-text-detection \
+ --document-location '{"S3Object":{"Bucket":"bucket","Name":"document"}}'
+{{< /command >}}
+
+The following output would be retrieved:
+
+```bash
+{
+ "JobId": "501d7251-1249-41e0-a0b3-898064bfc506"
+}
+```
+
+Save the `JobId` value to use in the next command.
+
+### Get document text detection job
+
+You can use the [`GetDocumentTextDetection`](https://docs.aws.amazon.com/textract/latest/dg/API_GetDocumentTextDetection.html) API to retrieve the results of a document text detection job.
+Execute the following command:
+
+{{< command >}}
+$ awslocal textract get-document-text-detection \
+ --job-id "501d7251-1249-41e0-a0b3-898064bfc506"
+{{< /command >}}
+
+Replace `501d7251-1249-41e0-a0b3-898064bfc506` with the `JobId` value retrieved from the previous command.
+The following output would be retrieved:
+
+```bash
+{
+ "DocumentMetadata": {
+ "Pages": {
+ "Pages": 389
+ }
+ },
+ "JobStatus": "SUCCEEDED",
+ "Blocks": [],
+ "DetectDocumentTextModelVersion": "1.0"
+}
+```
diff --git a/src/content/docs/aws/services/timestream.md b/src/content/docs/aws/services/timestream.md
new file mode 100644
index 00000000..bf5c3cf3
--- /dev/null
+++ b/src/content/docs/aws/services/timestream.md
@@ -0,0 +1,75 @@
+---
+title: "Timestream"
+linkTitle: "Timestream"
+description: Get started with Timestream on LocalStack
+tags: ["Ultimate"]
+persistence: supported
+---
+
+## Introduction
+
+LocalStack contains basic support for Timestream time series databases, including these operations:
+
+* Creating databases
+* Creating tables
+* Writing records to tables
+* Querying timeseries data from tables
+
+The supported APIs are available on our API Coverage Page ([Timestream-Query]({{< ref "coverage_timestream-query" >}})/[Timestream-Write]({{< ref "coverage_timestream-write" >}})), which provides information on the extent of Timestream integration with LocalStack.
+
+## Getting Started
+
+The following example illustrates the basic operations, using the [`awslocal`](https://github.com/localstack/awscli-local) command line.
+
+First, we create a test database and table:
+
+{{< command >}}
+$ awslocal timestream-write create-database --database-name testDB
+$ awslocal timestream-write create-table --database-name testDB --table-name testTable
+{{< /command >}}
+
+We can then add a few records with a timestamp, measure name, and value to the table:
+
+{{< command >}}
+$ awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"60","TimeUnit":"SECONDS","Time":"1636986409"}]'
+$ awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"80","TimeUnit":"SECONDS","Time":"1636986412"}]'
+$ awslocal timestream-write write-records --database-name testDB --table-name testTable --records '[{"MeasureName":"cpu","MeasureValue":"70","TimeUnit":"SECONDS","Time":"1636986414"}]'
+{{< /command >}}
+
+Finally, we can run a query to retrieve the timeseries data (or aggregate values) from the table:
+{{< command >}}
+$ awslocal timestream-query query --query-string "SELECT CREATE_TIME_SERIES(time, measure_value::double) as cpu FROM testDB.testTable WHERE measure_name='cpu'"
+{
+ "Rows": [{
+ "Data": [{
+ "TimeSeriesValue": [{
+ "Time": "2021-11-15T14:26:49",
+ "Value": {
+ "ScalarValue": 60
+ }
+ },
+...
+{{< /command >}}
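+
+You can also run aggregate queries against the same data; the sketch below assumes the `AVG` aggregate is covered by the local query engine:
+
+{{< command >}}
+$ awslocal timestream-query query --query-string "SELECT AVG(measure_value::double) AS avg_cpu FROM testDB.testTable WHERE measure_name='cpu'"
+{{< /command >}}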
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing Timestream databases.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **Timestream** under the **Database** section.
+
+The Resource Browser allows you to perform the following actions:
+
+* **Create Database**: Create a new Timestream database by clicking on the **Create Database** button and providing a name for the database among other optional details.
+* **Create Table**: Create a new Timestream table by clicking on the **Create Table** button in the database view and providing a name for the table among other optional details.
+* **Run Query**: Run a Timestream query by clicking on the **Run Query** button in the table view and providing a query string.
+* **View Database/Table Details**: Click on a database or table to view its details, including the schema, retention policy, and other metadata.
+* **Delete Database/Table**: Delete the Timestream database/table by selecting it and clicking on the **Actions** button followed by **Remove Selected** button.
+
+## Current Limitations
+
+LocalStack's Timestream implementation is under active development and only supports a limited set of operations.
+Please refer to the API Coverage pages for an up-to-date list of implemented and tested functions within [Timestream-Query]({{< ref "coverage_timestream-query" >}}) and [Timestream-Write]({{< ref "coverage_timestream-write" >}}).
+
+If you have a use case that relies on Timestream but doesn't work with our implementation yet, we encourage you to [get in touch](https://localstack.cloud/contact/), so we can streamline any operations you rely on.
diff --git a/src/content/docs/aws/services/transcribe.md b/src/content/docs/aws/services/transcribe.md
new file mode 100644
index 00000000..331c4c58
--- /dev/null
+++ b/src/content/docs/aws/services/transcribe.md
@@ -0,0 +1,171 @@
+---
+title: "Transcribe"
+linkTitle: "Transcribe"
+description: Get started with Amazon Transcribe on LocalStack
+persistence: supported
+tags: ["Free"]
+---
+
+## Introduction
+
+Transcribe is a service provided by AWS that offers automatic speech recognition (ASR) capabilities.
+It enables developers to convert spoken language into written text, making it valuable for a wide range of applications, from transcription services to voice analytics.
+
+LocalStack allows you to use the Transcribe APIs for offline speech-to-text jobs in your local environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_transcribe" >}}), which provides information on the extent of Transcribe integration with LocalStack.
+
+LocalStack Transcribe uses an offline speech-to-text library called [Vosk](https://alphacephei.com/vosk/).
+It requires an active internet connection to download the language model.
+Once the language model is downloaded, subsequent transcriptions for the same language can be performed offline.
+Language models typically have a size of around 50 MiB and are saved in the cache directory (see [Filesystem Layout]({{< ref "filesystem" >}})).
+
+## Getting Started
+
+This guide is designed for users new to Transcribe and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a transcription job and view the transcript in an S3 bucket using the AWS CLI.
+
+### Create an S3 bucket
+
+You can create an S3 bucket using the [`mb`](https://docs.aws.amazon.com/cli/latest/reference/s3/mb.html) command.
+Run the following commands to create a bucket named `foo` and upload a sample audio file named `example.wav`:
+
+{{< command >}}
+$ awslocal s3 mb s3://foo
+$ awslocal s3 cp ~/example.wav s3://foo/example.wav
+{{< / command >}}
+
+### Create a transcription job
+
+You can create a transcription job using the [`StartTranscriptionJob`](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_StartTranscriptionJob.html) API.
+Run the following command to create a transcription job named `example` for the audio file `example.wav`:
+
+{{< command >}}
+$ awslocal transcribe start-transcription-job \
+ --transcription-job-name example \
+ --media MediaFileUri=s3://foo/example.wav \
+ --language-code en-IN
+{{< / command >}}
+
+You can list the transcription jobs using the [`ListTranscriptionJobs`](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_ListTranscriptionJobs.html) API.
+Run the following command to list the transcription jobs:
+
+{{< command >}}
+$ awslocal transcribe list-transcription-jobs
+
+{
+ "TranscriptionJobSummaries": [
+ {
+ "TranscriptionJobName": "example",
+ "CreationTime": "2022-08-17T14:04:39.277000+05:30",
+ "StartTime": "2022-08-17T14:04:39.308000+05:30",
+ "LanguageCode": "en-IN",
+ "TranscriptionJobStatus": "IN_PROGRESS"
+ }
+ ]
+}
+
+{{< / command >}}
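+
+The job status transitions from `IN_PROGRESS` to `COMPLETED` once transcription finishes.
+A small polling sketch (assuming a bash shell):
+
+{{< command >}}
+$ until awslocal transcribe get-transcription-job --transcription-job-name example | grep -q '"TranscriptionJobStatus": "COMPLETED"'; do sleep 2; done
+{{< / command >}}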
+
+### View the transcript
+
+After the job is complete, the transcript can be retrieved from the S3 bucket using the [`GetTranscriptionJob`](https://docs.aws.amazon.com/transcribe/latest/APIReference/API_GetTranscriptionJob.html) API.
+Run the following command to get the transcript:
+
+{{< command >}}
+$ awslocal transcribe get-transcription-job --transcription-job-name example
+
+{
+ "TranscriptionJob": {
+ "TranscriptionJobName": "example",
+ "TranscriptionJobStatus": "COMPLETED",
+ "LanguageCode": "en-IN",
+ "MediaFormat": "wav",
+ "Media": {
+ "MediaFileUri": "s3://foo/example.wav"
+ },
+ "Transcript": {
+ "TranscriptFileUri": "s3://foo/7844aaa5.json"
+ },
+ "CreationTime": "2022-08-17T14:04:39.277000+05:30",
+ "StartTime": "2022-08-17T14:04:39.308000+05:30",
+ "CompletionTime": "2022-08-17T14:04:57.400000+05:30",
+ }
+}
+
+$ awslocal s3 cp s3://foo/7844aaa5.json .
+$ jq .results.transcripts[0].transcript 7844aaa5.json
+
+"it is just a question of getting rid of the illusion that we are separate from nature"
+
+{{< / command >}}
+
+## Audio Formats
+
+The following input media formats are supported:
+
+- Adaptive Multi-Rate (AMR)
+- Free Lossless Audio Codec (FLAC)
+- MPEG-1 Audio Layer-3 (MP3)
+- MPEG-4 Part 14 (MP4)
+- OGG
+- Matroska Video files (MKV)
+- Waveform Audio File Format (WAV)
+
+## Supported Languages
+
+The following languages and dialects are supported:
+
+| Language | Language Code |
+| ---------------- | ------------- |
+| Catalan | `ca-ES` |
+| Czech | `cs-CZ` |
+| German | `de-DE` |
+| English, British | `en-GB` |
+| English, Indian | `en-IN` |
+| English, US | `en-US` |
+| Spanish | `es-ES` |
+| Farsi | `fa-IR` |
+| French | `fr-FR` |
+| Gujarati | `gu-IN` |
+| Hindi | `hi-IN` |
+| Italian | `it-IT` |
+| Japanese | `ja-JP` |
+| Kazakh | `kk-KZ` |
+| Korean | `ko-KR` |
+| Dutch | `nl-NL` |
+| Polish | `pl-PL` |
+| Portuguese | `pt-BR` |
+| Russian | `ru-RU` |
+| Telugu | `te-IN` |
+| Turkish | `tr-TR` |
+| Ukrainian | `uk-UA` |
+| Uzbek | `uz-UZ` |
+| Vietnamese | `vi-VN` |
+| Chinese | `zh-CN` |
+
+## Resource Browser
+
+The LocalStack Web Application provides a Resource Browser for managing Transcribe Transcription Jobs.
+You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resource Browser** section, and then clicking on **Transcribe Service** under the **Machine Learning** section.
+
+The Resource Browser allows you to perform the following actions:
+
+- **Create Transcription Job**: Create a new transcription job by clicking on the **Create Transcription Job** button, and then providing the required details.
+- **View Transcription Job**: View the details of a specific transcription job by clicking on the job in the list.
+- **Delete Transcription Job**: Delete the transcription job by clicking on the **Actions** button followed by **Remove Selected** button.
+
+## Examples
+
+The following code snippets and sample applications provide practical examples of how to use Transcribe in LocalStack for various use cases:
+
+- [Serverless Transcription App using Transcribe, S3, Lambda, SQS, SES](https://github.com/localstack-samples/sample-serverless-transcribe)
+
+## Limitations
+
+Transcribe on LocalStack does not support speaker diarization and does not produce certain numerical information, such as confidence levels.
diff --git a/src/content/docs/aws/services/transfer.md b/src/content/docs/aws/services/transfer.md
new file mode 100644
index 00000000..468c6959
--- /dev/null
+++ b/src/content/docs/aws/services/transfer.md
@@ -0,0 +1,85 @@
+---
+title: "Transfer"
+linkTitle: "Transfer"
+tags: ["Ultimate"]
+description: >
+ Get started with Amazon Transfer on LocalStack
+---
+
+## Introduction
+
+The AWS Transfer API lets you set up FTP(S) servers that act as gateways to files stored in Amazon S3 buckets.
+By exposing a familiar FTP interface, it streamlines file management and gives users a secure, well-known way to interact with their data in S3.
+
+## Getting started
+
+This Python code demonstrates a basic workflow for transferring a file between a local machine and AWS S3 using the AWS Transfer Family service and FTP (File Transfer Protocol).
+
+```python
+import io
+import time
+
+import boto3
+from ftplib import FTP
+
+EDGE_URL = 'http://localhost:4566'
+USERNAME = 'user_123'
+BUCKET = 'transfer-files'
+S3_FILENAME = 'test-file-aws-transfer.txt'
+FTP_USER_DEFAULT_PASSWD = '12345'
+FILE_CONTENT = b'title "Test" \nfile content!!'
+
+# create bucket
+s3_client = boto3.client('s3', endpoint_url=EDGE_URL)
+s3_client.create_bucket(Bucket=BUCKET)
+transfer_client = boto3.client('transfer', endpoint_url=EDGE_URL)
+
+# create transfer server
+rs = transfer_client.create_server(
+ EndpointType='PUBLIC',
+ IdentityProviderType='SERVICE_MANAGED',
+ Protocols=['FTP']
+ )
+time.sleep(1)  # give the newly created server a moment to start
+
+server_id = rs['ServerId']
+# the local FTP port is encoded as the numeric suffix of the ServerId (see Current Limitations below)
+port = int(server_id[-4:])
+
+transfer_client.create_user(
+ ServerId=server_id,
+ HomeDirectory=BUCKET,
+ HomeDirectoryType='PATH',
+ Role='arn:aws:iam::testrole',
+ UserName=USERNAME
+)
+
+# upload file through AWS Transfer
+ftp = FTP()
+ftp.connect('localhost', port=port)
+result = ftp.login(USERNAME, FTP_USER_DEFAULT_PASSWD)
+assert 'Login successful.' in result
+ftp.storbinary(cmd='STOR %s' % S3_FILENAME, fp=io.BytesIO(FILE_CONTENT))
+ftp.quit()
+
+# download file through AWS S3
+rs = s3_client.get_object(Bucket=BUCKET, Key=S3_FILENAME)
+assert rs['Body'].read() == FILE_CONTENT
+```
+
+Please note that this code is a simplified example for demonstration purposes.
+In a production environment, you should use more secure practices, including setting proper IAM roles and handling sensitive credentials securely.
+Additionally, error handling and cleanup code may be needed to ensure the script behaves robustly in all scenarios.
+
+## Current Limitations
+
+The Transfer API does not provide a way to return the endpoint URL of created FTP servers.
+Hence, to allow clients to determine the server endpoint, LocalStack encodes the local port as a suffix of the `ServerId` attribute; the port is the only run of numeric digits within the ID string.
+For example, assume the following is the response from the `CreateServer` API call, then the FTP server is accessible on port `4511` (i.e., `ftp://localhost:4511`):
+
+```json
+{
+ "ServerId": "s-afcedbffaecca4511"
+}
+```
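+
+A small helper along these lines can recover the port from any `ServerId`; this is a sketch, and `extract_ftp_port` is a hypothetical name:
+
+```python
+import re
+
+def extract_ftp_port(server_id: str) -> int:
+    # the port is the only run of decimal digits in the ServerId
+    match = re.search(r"\d+", server_id)
+    if match is None:
+        raise ValueError(f"no port suffix found in ServerId: {server_id}")
+    return int(match.group())
+
+assert extract_ftp_port("s-afcedbffaecca4511") == 4511
+```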
diff --git a/src/content/docs/aws/services/verifiedpermissions.md b/src/content/docs/aws/services/verifiedpermissions.md
new file mode 100644
index 00000000..c94cf47c
--- /dev/null
+++ b/src/content/docs/aws/services/verifiedpermissions.md
@@ -0,0 +1,135 @@
+---
+title: "Verified Permissions"
+linkTitle: "Verified Permissions"
+description: Get started with Verified Permissions on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Amazon Verified Permissions is a scalable service for managing fine-grained permissions and authorization in custom applications.
+It helps secure applications by moving authorization logic outside the app and managing policies in one place, using the [Cedar policy language](https://docs.cedarpolicy.com/) to define access rules.
+It checks if a principal can take an action on a resource in a specific context in your application.
+
+LocalStack allows you to use the Verified Permissions APIs in your local environment to test your authorization logic, with integrations with other AWS services like Cognito.
+The supported APIs are available on our [API coverage page]({{< ref "coverage_verifiedpermissions" >}}), which provides information on the extent of Verified Permissions' integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to Verified Permissions and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how to create a Verified Permissions Policy Store, add a policy to it, and authorize a request with the AWS CLI.
+
+### Create a Policy Store
+
+To create a Verified Permissions Policy Store, use the [`CreatePolicyStore`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_CreatePolicyStore.html) API.
+Run the following command to create a Policy Store with Schema validation settings set to `OFF`:
+
+{{< command >}}
+$ awslocal verifiedpermissions create-policy-store \
+ --validation-settings mode=OFF \
+ --description "A local Policy Store"
+{{< /command >}}
+
+The above command returns the following response:
+
+```json
+{
+ "policyStoreId": "q5PCScu9qo4aswMVc0owNN",
+ "arn": "arn:aws:verifiedpermissions::000000000000:policy-store/q5PCScu9qo4aswMVc0owNN",
+ "createdDate": "2025-04-22T19:24:11.175557Z",
+ "lastUpdatedDate": "2025-04-22T19:24:11.175557Z"
+}
+```
+
+You can list all the Verified Permissions policy stores using the [`ListPolicyStores`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_ListPolicyStores.html) API.
+Run the following command to list all the Verified Permissions policy stores:
+
+{{< command >}}
+$ awslocal verifiedpermissions list-policy-stores
+{{< /command >}}
+
+### Create a Policy
+
+To create a Verified Permissions Policy, use the [`CreatePolicy`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_CreatePolicy.html) API.
+
+Create a JSON file named `static_policy.json` with the following content:
+
+```json
+{
+ "static": {
+ "description": "Grant the User alice access to view the trip Album",
+ "statement": "permit(principal == User::\"alice\", action == Action::\"view\", resource == Album::\"trip\");"
+ }
+}
+```
+
+You can then run this command to create the policy:
+{{< command >}}
+$ awslocal verifiedpermissions create-policy \
+ --definition file://static_policy.json \
+ --policy-store-id q5PCScu9qo4aswMVc0owNN
+{{< /command >}}
+
+Replace the policy store ID with the ID of the policy store you created previously.
+
+You should see the following output:
+
+```json
+{
+ "policyStoreId": "q5PCScu9qo4aswMVc0owNN",
+ "policyId": "MfsIseJDeZsr5WUm3tB4FX",
+ "policyType": "STATIC",
+ "principal": {
+ "entityType": "User",
+ "entityId": "alice"
+ },
+ "resource": {
+ "entityType": "Album",
+ "entityId": "trip"
+ },
+ "actions": [
+ {
+ "actionType": "Action",
+ "actionId": "view"
+ }
+ ],
+ "createdDate": "2025-04-22T19:25:25.161652Z",
+ "lastUpdatedDate": "2025-04-22T19:25:25.161652Z",
+ "effect": "Permit"
+}
+```
+
+### Authorize a request
+
+We can now make use of the Policy Store and the Policy to start authorizing requests.
+To authorize a request using Verified Permissions, use the [`IsAuthorized`](https://docs.aws.amazon.com/verifiedpermissions/latest/apireference/API_IsAuthorized.html) API.
+
+{{< command >}}
+$ awslocal verifiedpermissions is-authorized \
+ --policy-store-id q5PCScu9qo4aswMVc0owNN \
+ --principal entityType=User,entityId=alice \
+ --action actionType=Action,actionId=view \
+ --resource entityType=Album,entityId=trip
+{{< /command >}}
+
+You should get the following output, indicating that your request was allowed:
+
+```json
+{
+ "decision": "ALLOW",
+ "determiningPolicies": [
+ {
+ "policyId": "MfsIseJDeZsr5WUm3tB4FX"
+ }
+ ],
+ "errors": []
+}
+```
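+
+Requests that no policy permits are denied by default.
+For example, running the same check for a different user, for whom no policy exists, should return a `DENY` decision:
+
+{{< command >}}
+$ awslocal verifiedpermissions is-authorized \
+ --policy-store-id q5PCScu9qo4aswMVc0owNN \
+ --principal entityType=User,entityId=bob \
+ --action actionType=Action,actionId=view \
+ --resource entityType=Album,entityId=trip
+{{< /command >}}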
+
+## Current limitations
+
+- No schema validation is performed when creating a new schema using `PutSchema`, and policies and policy templates are not validated against that schema when created.
+- Only Cognito is supported as an `IdentitySource`; external OIDC providers are not yet implemented.
+- The validation around Identity Sources and JWTs is not yet fully implemented: the identity source is not validated to have a valid `jwks.json` endpoint, and the issuer, signature, and expiration of the incoming JWT are not validated.
diff --git a/src/content/docs/aws/services/waf.md b/src/content/docs/aws/services/waf.md
new file mode 100644
index 00000000..ca7b2ce3
--- /dev/null
+++ b/src/content/docs/aws/services/waf.md
@@ -0,0 +1,102 @@
+---
+title: "Web Application Firewall (WAF)"
+linkTitle: "Web Application Firewall (WAF)"
+description: Get started with Web Application Firewall (WAF) on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+Web Application Firewall (WAF) is a service provided by Amazon Web Services (AWS) that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.
+WAFv2 is the latest version of WAF, and it allows you to specify a single set of rules to protect your web applications, APIs, and mobile applications from common attack patterns, such as SQL injection and cross-site scripting.
+
+LocalStack allows you to use the WAFv2 APIs in your local environment to create and manage WebACLs, rules, and tags.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_wafv2" >}}), which provides information on the extent of WAFv2 integration with LocalStack.
+
+## Getting started
+
+This guide is for users who are familiar with the AWS CLI and [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will walk you through creating, listing, tagging, and viewing tags for Web Access Control Lists (WebACLs) in a LocalStack environment using the AWS CLI.
+
+### Create a WebACL
+
+Start by creating a Web Access Control List (WebACL) using the [`CreateWebACL`](https://docs.aws.amazon.com/waf/latest/APIReference/API_CreateWebACL.html) API.
+Run the following command to create a WebACL named `TestWebAcl`:
+
+{{< command >}}
+$ awslocal wafv2 create-web-acl \
+ --name TestWebAcl \
+ --scope REGIONAL \
+ --default-action Allow={} \
+ --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=TestWebAclMetrics
+
+{
+ "Summary": {
+ "Name": "TestWebAcl",
+ "Id": "f94fd5bc-e4d4-4280-9f53-51e9441ad51d",
+ "Description": "",
+ "ARN": "arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d"
+ }
+}
+
+{{< /command >}}
+
+Note the `Id` and `ARN` from the output, as they will be needed for subsequent commands.
+
+### List WebACLs
+
+To view all the WebACLs you have created, use the [`ListWebACLs`](https://docs.aws.amazon.com/waf/latest/APIReference/API_ListWebACLs.html) API.
+Run the following command to list the WebACLs:
+
+{{< command >}}
+$ awslocal wafv2 list-web-acls --scope REGIONAL
+
+{
+ "NextMarker": "Not Implemented",
+ "WebACLs": [
+ {
+ "Name": "TestWebAcl",
+ "Id": "f94fd5bc-e4d4-4280-9f53-51e9441ad51d",
+ "Description": "",
+ "ARN": "arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d"
+ }
+ ]
+}
+
+{{< /command >}}
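+
+Instead of copying the ARN by hand, you can capture it in a shell variable using the CLI's `--query` option, as sketched below:
+
+{{< command >}}
+$ WEB_ACL_ARN=$(awslocal wafv2 list-web-acls --scope REGIONAL --query 'WebACLs[0].ARN' --output text)
+{{< /command >}}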
+
+### Tag a WebACL
+
+Tagging resources in AWS WAF helps you manage and identify them.
+Use the [`TagResource`](https://docs.aws.amazon.com/waf/latest/APIReference/API_TagResource.html) API to add tags to a WebACL.
+Run the following command to add a tag to the WebACL created in the previous step:
+
+{{< command >}}
+$ awslocal wafv2 tag-resource \
+ --resource-arn arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d \
+ --tags Key=Name,Value=AWSWAF
+{{< /command >}}
+
+After tagging your resources, you may want to view these tags.
+Use the [`ListTagsForResource`](https://docs.aws.amazon.com/waf/latest/APIReference/API_ListTagsForResource.html) API to list the tags for a WebACL.
+Run the following command to list the tags for the WebACL created in the previous step:
+
+{{< command >}}
+$ awslocal wafv2 list-tags-for-resource \
+ --resource-arn arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d
+
+{
+ "TagInfoForResource": {
+ "ResourceARN": "arn:aws:wafv2:us-east-1:000000000000:regional/webacl/TestWebAcl/f94fd5bc-e4d4-4280-9f53-51e9441ad51d",
+ "TagList": [
+ {
+ "Key": "Name",
+ "Value": "AWSWAF"
+ }
+ ]
+ }
+}
+
+{{< /command >}}
diff --git a/src/content/docs/aws/services/xray.md b/src/content/docs/aws/services/xray.md
new file mode 100644
index 00000000..16f28f68
--- /dev/null
+++ b/src/content/docs/aws/services/xray.md
@@ -0,0 +1,129 @@
+---
+title: "X-Ray"
+linkTitle: "X-Ray"
+description: Get started with X-Ray on LocalStack
+tags: ["Ultimate"]
+---
+
+## Introduction
+
+[X-Ray](https://docs.aws.amazon.com/xray/latest/devguide/aws-xray.html) is a distributed tracing service that
+helps to understand cross-service interactions and facilitates debugging of performance bottlenecks.
+Instrumented applications generate trace data by recording trace segments with information about the work tasks of an
+application, such as timestamps, task names, or metadata.
+X-Ray supports different ways of [instrumenting your application](https://docs.aws.amazon.com/xray/latest/devguide/xray-instrumenting-your-app.html) including
+the [AWS X-Ray SDK](https://docs.aws.amazon.com/xray/latest/devguide/xray-instrumenting-your-app.html#xray-instrumenting-xray-sdk) and
+the [AWS Distro for OpenTelemetry (ADOT)](https://docs.aws.amazon.com/xray/latest/devguide/xray-instrumenting-your-app.html#xray-instrumenting-opentel).
+The [X-Ray daemon](https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html) is an application that gathers
+raw trace segment data from the X-Ray SDK and relays it to the AWS X-Ray API.
+The X-Ray API can then be used to retrieve traces originating from different application components.
+
+LocalStack allows
+you to use the X-Ray APIs to send and retrieve trace segments in your local environment.
+The supported APIs are available on our [API Coverage Page]({{< ref "coverage_xray" >}}),
+which provides information on the extent of X-Ray integration with LocalStack.
+
+## Getting started
+
+This guide is designed for users new to X-Ray and assumes basic
+knowledge of the AWS CLI and our `awslocal` wrapper script.
+
+Start your LocalStack container using your preferred method.
+We will demonstrate how you can create a minimal [trace segment](https://docs.aws.amazon.com/xray/latest/devguide/xray-api-segmentdocuments.html#api-segmentdocuments-fields)
+and manually send it to the X-Ray API.
+Notice that this trace ingestion typically happens in the background, for example by the X-Ray SDK and X-Ray daemon.
+
+
+### Sending trace segments
+
+The commands below generate a unique trace ID and construct a JSON document with trace information.
+They then send this trace segment to the X-Ray API using the [PutTraceSegments](https://docs.aws.amazon.com/xray/latest/api/API_PutTraceSegments.html) API.
+Run the following commands in your terminal:
+
+{{< command >}}
+$ START_TIME=$(date +%s)
+$ HEX_TIME=$(printf '%x\n' $START_TIME)
+$ GUID=$(dd if=/dev/random bs=12 count=1 2>/dev/null | od -An -tx1 | tr -d ' \t\n')
+$ TRACE_ID="1-$HEX_TIME-$GUID"
+$ END_TIME=$(($START_TIME+3))
+$ DOC=$(cat <<EOF
+{
+  "trace_id": "$TRACE_ID",
+  "id": "6226467e3f845502",
+  "start_time": $START_TIME,
+  "end_time": $END_TIME,
+  "name": "test.elasticbeanstalk.com"
+}
+EOF
+)
+$ echo "Sending trace segment to X-Ray API: $DOC"
+$ awslocal xray put-trace-segments --trace-segment-documents "$DOC"
+
+Sending trace segment to X-Ray API: {"trace_id": "1-6501ee11-056ec85fafff21f648e2d3ae", "id": "6226467e3f845502", "start_time": 1694625297.37518, "end_time": 1694625300.4042, "name": "test.elasticbeanstalk.com"}
+{
+    "UnprocessedTraceSegments": []
+}
+