SOS

This repository contains the codebase for Snow Observing Strategy (SOS) applications integrated within the Novel Observing Strategies Testbed (NOS-T).

Installation

NOS-T Tools Installation

To install the NOS-T library, follow the directions here.

AWS CLI Installation

To set up the Amazon Web Services (AWS) command line interface (CLI), follow the directions here.

Introduction

A single manager application is responsible for orchestrating the various applications and keeping a consistent time across them. When the manager starts, the managed applications are triggered; each is responsible for generating derived, merged datasets or raster layers, sent as base64-encoded strings. The table below describes each application:
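The base64 encoding of raster payloads mentioned above can be sketched as follows (function names are illustrative, not from the SOS codebase):

```python
import base64

def encode_raster(raster_bytes: bytes) -> str:
    """Encode raw raster bytes as a base64 string for a message payload."""
    return base64.b64encode(raster_bytes).decode("ascii")

def decode_raster(payload: str) -> bytes:
    """Recover the original raster bytes on the receiving side."""
    return base64.b64decode(payload)
```

The receiving application reverses the encoding to reconstruct the original bytes exactly.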

| Application | Purpose | Data Source | Developed | Containerized |
|---|---|---|---|---|
| Manager | Orchestrates applications, maintains time | NA | Y | Y |
| Planner | Selects best taskable observations on the basis of reward | LIS | Y | N |
| Appender | Aggregates planned taskable observations, filtering duplicates | Planner | Y | N |
| Simulator | Simulates satellite operations and determines when and where observations are collected | Appender | Y | N |

Applications Overview

Applications communicate via a RabbitMQ message broker using the Advanced Message Queuing Protocol (AMQP). The figure below illustrates the overall workflow:

flowchart LR

    subgraph cluster0["S3 Bucket"]
        lis["LIS NetCDF"]
    end

    subgraph cluster1["Applications"]
        style cluster1 stroke-dasharray: 5 5
        planner["Planner"]
        style planner fill:red
        appender["Appender"]
        style appender fill:dodgerblue
        simulator["Simulator"]
        style simulator fill:green
    end

    subgraph cluster2["Outputs"]
        style cluster2 stroke-dasharray: 5 5
        sc_geojson["Selected Cells<br/>(GeoJSON)"]
        ag_geojson["Aggregated Selected Cells<br/>(GeoJSON)"]
    end

    subgraph cluster3["Visualization"]
        style cluster3 stroke-dasharray: 5 5
        cesium["Cesium Web<br/>Application"]
    end

    lis --> planner
    lis ~~~ appender
    lis ~~~ simulator
    planner -->|Write| sc_geojson
    appender -->|Append| ag_geojson
    simulator -->|Update| ag_geojson

    sc_geojson~~~cluster3
    ag_geojson --> cluster3
    ag_geojson -.->|Upload/Filter Daily| lis

Messaging Protocol

The SOS applications utilize the Advanced Message Queuing Protocol (AMQP) through a RabbitMQ event broker. These messages include:

| Application | Receives | Sends |
|---|---|---|
| Planner | Data availability messages from an AWS Lambda function | Selected cells saved as a GeoJSON file; the file contents are also sent as an AMQP message to the appender application |
| Appender | Message from the planner containing the selected cells | Aggregates the selected cells into a record, filters duplicate rows, and sends an AMQP message to the simulator application |
| Simulator | Message from the appender containing the aggregated selected-cells record | Simulates satellite operations, determines when and where observations are collected, and sends the results to the Cesium web application |
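A planner-to-appender exchange can be sketched as below. The helper serializes a GeoJSON payload into an AMQP message body; the commented publishing code shows how it would be sent with pika, with illustrative exchange and routing-key names (not the actual SOS topics):

```python
import json

def build_cells_message(cells_geojson: dict) -> bytes:
    """Serialize a GeoJSON FeatureCollection into an AMQP message body."""
    return json.dumps(cells_geojson).encode("utf-8")

# Publishing side (requires a running RabbitMQ broker; the exchange and
# routing-key names below are illustrative, not the actual SOS topics):
#   import pika
#   conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
#   channel = conn.channel()
#   channel.basic_publish(exchange="sos",
#                         routing_key="planner.selected_cells",
#                         body=build_cells_message(cells))
#   conn.close()
```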

Data Structure & Interfaces

The input data and output data generated by applications are uploaded onto an Amazon Web Services (AWS) Simple Storage Service (S3) bucket.

Note: The applications use the AWS SDK for Python, Boto3. Boto3 allows users to create, configure, and manage AWS services, including S3, Simple Notification Service (SNS), and Elastic Compute Cloud (EC2). Access to the AWS SDK is limited to SOS administrators as required by NASA's Science Managed Cloud Environment (SMCE).

flowchart TB
 subgraph Discover["Discover"]
        lis("LIS")
  end
 subgraph AWS["AWS"]
        S3("S3 Bucket")
  end
 subgraph NOS-T["NOS-T"]
        planner("Planner")
        appender("Appender")
        simulator("Simulator")
  end
    planner <-. AMQP .-> appender
    appender <-. AMQP .-> simulator
    NOS-T -- Boto3 --> AWS
    AWS -- AMQP or SNS --> Discover
    Discover -. Boto3 .-> AWS
    AWS -. AMQP or SNS .-> NOS-T
    style lis fill: Violet
    style S3 fill: Orange
    style planner fill: #ff0000, stroke: #333, stroke-width: 2px
    style appender fill: #1e90ff, stroke: #333, stroke-width: 2px
    style simulator fill: #008000, stroke: #333, stroke-width: 2px
    linkStyle 4 stroke: Violet,fill:none
    linkStyle 5 stroke: Violet,fill:none

The LIS inputs are stored in an S3 bucket, which the SOS applications access. The SOS applications then output data into an output directory, organized by the specific day and application. Below is an example:
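The daily, per-application layout can be expressed as a key-building helper (a sketch; the function name is illustrative):

```python
from datetime import date

def output_key(app: str, day: date, filename: str) -> str:
    """Build the S3 key for an application's daily output, following the
    outputs/<application>/<YYYY-MM-DD>/<file> layout illustrated below."""
    return f"outputs/{app}/{day.isoformat()}/{filename}"
```

For example, `output_key("planner", date(2019, 3, 2), "selected_cells.geojson")` yields `outputs/planner/2019-03-02/selected_cells.geojson`.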

flowchart LR
    subgraph S3Bucket["S3 Bucket"]
        subgraph Inputs["LIS Forecasts"]
        inputs --> LIS
        inputs --> vector
        LIS -.-> for1["LIS_HIST_201903010000.d01.nc"]
        LIS -.-> for2["LIS_HIST_201903020000.d01.nc"]
        LIS -.-> for3["LIS_HIST_201903030000.d01.nc"]
        LIS -.-> for4["LIS_HIST_201903040000.d01.nc"]
        LIS -.-> for5["LIS_HIST_201903050000.d01.nc"]
        LIS -.-> for6["LIS_HIST_201903060000.d01.nc"]
        LIS -.-> for7["LIS_HIST_201903070000.d01.nc"]
        LIS -.-> for8["LIS_HIST_201903080000.d01.nc"]
        LIS -.-> for9["LIS_HIST_201903090000.d01.nc"]
        LIS -.-> for10["LIS_HIST_201903100000.d01.nc"]
        vector -.-> geoj["WBDHU2.geojson"]
        end
        subgraph Outputs["NOS-T Application Outputs"]
        outputs --> planner
        outputs --> appender
        outputs --> simulator
        planner -.-> d1p["2019-03-02"]
        planner -.-> d1p2[".<br/>.<br/>.<br/>."]
        planner -.-> d1p3["2019-03-10"]
        d1p -.-> selected["selected_cells.geojson"]
        d1p2 -.-> selected2[".<br/>.<br/>.<br/>."]
        d1p3 -.-> selected3["selected_cells.geojson"]
        appender -.-> d1a["2019-03-02"]
        appender -.-> d1a2[".<br/>.<br/>.<br/>."]
        appender -.-> d1a3["2019-03-10"]
        d1a -.-> appended["appended_cells.geojson"]
        d1a2 -.->appended2[".<br/>.<br/>.<br/>."]
        d1a3 -.-> appended3["appended_cells.geojson"]
        simulator -.-> d1s["2019-03-02"]
        simulator -.-> d1s2[".<br/>.<br/>.<br/>."]
        simulator -.-> d1s3["2019-03-10"]
        d1s -.-> simulated["completed_cells.geojson"]
        d1s2 -.->simulated2[".<br/>.<br/>.<br/>."]
        d1s3 -.-> simulated3["completed_cells.geojson"]
        end
    end

    style planner fill: #ff0000, stroke: #333, stroke-width: 2px
    style d1p fill: #ff0000, stroke: #333, stroke-width: 2px
    style d1p2 fill: #ff0000, stroke: #333, stroke-width: 2px
    style d1p3 fill: #ff0000, stroke: #333, stroke-width: 2px
    style selected fill: #ff0000, stroke: #333, stroke-width: 2px
    style selected2 fill: #ff0000, stroke: #333, stroke-width: 2px
    style selected3 fill: #ff0000, stroke: #333, stroke-width: 2px

    style appender fill: #1e90ff, stroke: #333, stroke-width: 2px
    style d1a fill: #1e90ff, stroke: #333, stroke-width: 2px
    style d1a2 fill: #1e90ff, stroke: #333, stroke-width: 2px
    style d1a3 fill: #1e90ff, stroke: #333, stroke-width: 2px
    style appended fill: #1e90ff, stroke: #333, stroke-width: 2px
    style appended2 fill: #1e90ff, stroke: #333, stroke-width: 2px
    style appended3 fill: #1e90ff, stroke: #333, stroke-width: 2px

    style simulator fill: #008000, stroke: #333, stroke-width: 2px
    style d1s fill: #008000, stroke: #333, stroke-width: 2px
    style d1s2 fill: #008000, stroke: #333, stroke-width: 2px
    style d1s3 fill: #008000, stroke: #333, stroke-width: 2px
    style simulated fill: #008000, stroke: #333, stroke-width: 2px
    style simulated2 fill: #008000, stroke: #333, stroke-width: 2px
    style simulated3 fill: #008000, stroke: #333, stroke-width: 2px

    style LIS fill:Violet
    style for1 fill:Violet
    style for2 fill:Violet
    style for3 fill:Violet
    style for4 fill:Violet
    style for5 fill:Violet
    style for6 fill:Violet
    style for7 fill:Violet
    style for8 fill:Violet
    style for9 fill:Violet
    style for10 fill:Violet

The flow of data between the various applications and systems is shown below:

sequenceDiagram
    box Red Discover
    participant L as Land<br/>Information<br/>System (LIS)
    end
    box Green Novel Observing Strategies Testbed (NOS-T)
    participant M as Manager
    participant P as Planner
    participant A as Appender
    participant S as Simulator
    participant C as Cesium Web<br/>Application
    end
    activate M
    activate C
    M->>C: Initialize
    M->>C: Start
    L-->>P: LIS NetCDF
    activate P
    Note over P: Maximize<br/>Reward<br/>Values
    P-->>A: Taskable<br/>Observations
    deactivate P
    activate A
    Note over A: Append Unique<br/>Taskable<br/>Observations
    A-->>S: Appended<br/>Taskable<br/>Observations
    deactivate A
    activate S
    Note over S: Simulate<br/>Satellite<br/>Operations
    box Blue Amazon Web<br/>Services (AWS)
    participant S3 as S3 Bucket
    end
    S-->>S3: Collected Taskable Observations
    deactivate S
    Note over C: Visualize<br/>Taskable<br/>Observations
    M->>C: Stop
    S3->>L: Collected Taskable Observations
    deactivate M
    deactivate C

Execution

The SOS applications can be executed using Conda or Docker. The steps for Conda execution are provided below, assuming you have followed the NOS-T installation instructions and AWS CLI installation instructions.

YAML

In the sos directory, create a YAML file named sos.yaml with the following contents:

info:
  title: Novel Observing Strategies Testbed (NOS-T) YAML Configuration
  version: '1.0.0'
  description: Version-controlled AsyncAPI document for RabbitMQ event broker with Keycloak authentication within NOS-T
servers:
  rabbitmq:
    keycloak_authentication: False
    host: "localhost"
    port: 5672
    tls: False
    virtual_host: "/"
execution:
  general:
    prefix: sos
    wallclock_offset_refresh_interval: 60
    ntp_host: "pool.ntp.org"
  manager:
    sim_start_time: "2019-03-01T23:59:59+00:00"
    sim_stop_time: "2019-03-10T23:59:59+00:00"
    start_time:
    time_step: "0:00:01"
    is_scenario_time_step: True
    time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
    time_scale_updates: []
    time_status_step: "0:00:01" # 1 second * time scale factor
    is_scenario_time_status_step: False
    time_status_init: "2019-03-01T23:59:59+00:00"
    command_lead: "0:00:05"
    required_apps:
      - manager
      - planner
      - appender
      - simulator
    init_retry_delay_s: 5
    init_max_retry: 5
    set_offset: True
    shut_down_when_terminated: True
    enable_file_logging: True
  managed_applications:
    planner:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:00:01" # 1 second * time scale factor
      is_scenario_time_step: True
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      is_scenario_time_status_step: False
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
      enable_file_logging: True
    appender:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:00:01" # 1 second * time scale factor
      is_scenario_time_step: True
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      is_scenario_time_status_step: False
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
      enable_file_logging: True
    simulator:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:01:00" # 1 second * time scale factor
      is_scenario_time_step: True
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      is_scenario_time_status_step: False
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
      enable_file_logging: True

Optional Freezes

flowchart LR
    A{"Scenario day change"} -- Freeze --> B{"Timed or indefinite?"}
    B -- Timed --> C["Resume after timed freeze<br>(1-2 hours)"]
    B -- Indefinite --> D["Data Upload Triggers Lambda Function<br>"]
    D -->F["Resume after S3 upload"]
    A -- No freeze --> E["Continue after scenario day change"]
    linkStyle 2 stroke:#00C853,fill:none
    linkStyle 3 stroke:#00C853,fill:none
    linkStyle 4 stroke:#D50000,fill:none

Depending on whether the applications are running in isolation or integrated with LIS, scenario time freezes may be required. For flexibility, multiple freeze modes are available, as detailed below:

  • Indefinite Freeze: Useful when running Planner, Appender, and Simulator applications with LIS

    configuration_parameters:
      scenario_day_freeze:
        enabled: true
        mode: "indefinite"
  • Timed Freeze: Useful when running Planner, Appender, and Simulator applications with LIS

    configuration_parameters:
      scenario_day_freeze:
        enabled: true
        mode: "timed"
        duration: "0:02:00" # duration for timed freeze (HH:MM:SS format)
  • No Freeze: Useful when running Planner, Appender, and Simulator applications separately from LIS (e.g., experimental or development purposes).

    configuration_parameters:
      scenario_day_freeze:
        enabled: false

Lambda Resume Trigger

When the planner is configured with an indefinite freeze, an AWS Lambda function is used to resume the simulation after LIS uploads new data to S3. The source files are in the lambda/ directory.

File Description
lambda/lambda_function_nost.py Lambda handler — connects to the event broker and sends a resume request
lambda/deploy_lambda.sh Packages the handler + dependencies into ZIP files for upload
lambda/requirements.txt Python dependencies for the Lambda environment

How It Works

  1. LIS uploads a NetCDF file to the S3 bucket under inputs/LIS/assimilation/
  2. The S3 upload event triggers the Lambda function
  3. Lambda extracts the timestamp from the filename (e.g., LIS_HIST_202501020000.d01.nc → 2025-01-02T00:00:00)
  4. Lambda instantiates a NOS-T Application, connects to the event broker, and calls request_resume
  5. The Manager receives the resume request and resumes all frozen applications
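The timestamp extraction in step 3 can be sketched as follows (the function name and regular expression are illustrative, not taken from the Lambda handler):

```python
import re
from datetime import datetime

def timestamp_from_key(key: str) -> datetime:
    """Extract the forecast timestamp from an LIS NetCDF object key
    such as 'inputs/LIS/assimilation/LIS_HIST_202501020000.d01.nc'."""
    match = re.search(r"LIS_HIST_(\d{12})\.d01\.nc$", key)
    if match is None:
        raise ValueError(f"not an LIS history file: {key}")
    # The captured group is YYYYMMDDHHMM, e.g. 202501020000
    return datetime.strptime(match.group(1), "%Y%m%d%H%M")
```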
---
config:
  look: neo
  theme: redux
---
sequenceDiagram
    participant LIS
    participant S3 as S3 Bucket
    participant Lambda as Lambda Function
    participant EB as Event Broker
    participant Manager

    LIS->>S3: Upload NetCDF to inputs/LIS/assimilation/
    S3->>Lambda: S3 upload event trigger
    Note over Lambda: Extract timestamp from filename<br/>(e.g., LIS_HIST_202501020000.d01.nc<br/>→ 2025-01-02T00:00:00)
    Lambda->>EB: Connect and send request_resume
    EB->>Manager: Resume request
    Manager->>Manager: Resume all frozen applications

Environment Variables

Configure these in the AWS Lambda Console under your function's configuration:

| Variable | Required | Default | Description |
|---|---|---|---|
| NOST_PREFIX | Yes | nost_sos | Execution namespace/prefix; must match the prefix in sos.yaml |
| NOST_CONFIG_YAML | Yes | sos.yaml | Path to the YAML configuration file |
| SECRET_NAME | No | — | Name of the AWS Secrets Manager secret containing credentials (see below) |
| AWS_REGION | No | us-east-1 | AWS region for Secrets Manager |

Credentials

Lambda retrieves credentials from AWS Secrets Manager when SECRET_NAME is set. The secret must be a JSON object whose keys depend on the account type:

  • Service Account (currently used by the Lambda function):

    {
      "CLIENT_ID": "nost-client",
      "CLIENT_SECRET_KEY": "your-client-secret-key"
    }
  • User Account (requires uncommenting USERNAME/PASSWORD in the Lambda handler):

    {
      "USERNAME": "nost-user",
      "PASSWORD": "secure_password",
      "CLIENT_ID": "nost-client",
      "CLIENT_SECRET_KEY": "your-client-secret-key"
    }

If SECRET_NAME is not set, credentials must be available via environment variables or a .env file.
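The resolution order described above can be sketched as below. The function name is illustrative, and the Secrets Manager branch assumes the `boto3` SDK is available in the Lambda environment:

```python
import json
import os

def load_credentials() -> dict:
    """Resolve broker credentials: AWS Secrets Manager when SECRET_NAME is
    set, otherwise environment variables (which a .env file may populate)."""
    secret_name = os.environ.get("SECRET_NAME")
    if secret_name:
        import boto3  # deferred import: the fallback path needs no AWS SDK
        client = boto3.client(
            "secretsmanager",
            region_name=os.environ.get("AWS_REGION", "us-east-1"),
        )
        secret = client.get_secret_value(SecretId=secret_name)
        return json.loads(secret["SecretString"])
    keys = ("USERNAME", "PASSWORD", "CLIENT_ID", "CLIENT_SECRET_KEY")
    return {k: os.environ[k] for k in keys if k in os.environ}
```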

Deployment

Package the Lambda function and layer:

./lambda/deploy_lambda.sh

This creates two packages in lambda_package/:

  • lambda_layer.zip — Python dependencies in Lambda Layer format
  • lambda_function.zip — handler code (lambda_function.py) + sos.yaml

Upload both packages via the AWS Lambda Console:

  1. Create layer: Lambda → Layers → Create layer → Upload lambda_layer.zip → Set compatible runtime to Python 3.12
  2. Upload function code: Lambda → Your function → Code → Upload from → .zip file → Upload lambda_function.zip
  3. Attach layer: Lambda → Your function → Layers → Add a layer → Custom layers → Select the layer created in step 1

IAM Permissions

The Lambda execution role requires:

  • secretsmanager:GetSecretValue on the secret ARN (if using Secrets Manager)
  • s3:GetObject on the source bucket
  • CloudWatch Logs permissions (logs:CreateLogGroup, logs:CreateLogStream, logs:PutLogEvents)
  • VPC permissions if RabbitMQ is in a VPC (ec2:CreateNetworkInterface, ec2:DescribeNetworkInterfaces, ec2:DeleteNetworkInterface)

Below is a complete example showing the various freeze modes implemented in the YAML configuration file:

info:
  title: Novel Observing Strategies Testbed (NOS-T) YAML Configuration
  version: '1.0.0'
  description: Version-controlled AsyncAPI document for RabbitMQ event broker with Keycloak authentication within NOS-T
servers:
  rabbitmq:
    keycloak_authentication: False
    host: "localhost"
    port: 5672
    tls: False
    virtual_host: "/"
execution:
  general:
    prefix: sos
    wallclock_offset_refresh_interval: 60
    ntp_host: "pool.ntp.org"
  manager:
    sim_start_time: "2019-03-01T23:59:59+00:00"
    sim_stop_time: "2019-03-10T23:59:59+00:00"
    start_time:
    time_step: "0:00:01"
    is_scenario_time_step: True
    time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
    time_scale_updates: []
    time_status_step: "0:00:01" # 1 second * time scale factor
    is_scenario_time_status_step: False
    time_status_init: "2019-03-01T23:59:59+00:00"
    command_lead: "0:00:05"
    required_apps:
      - manager
      - planner
      - appender
      - simulator
    init_retry_delay_s: 5
    init_max_retry: 5
    set_offset: True
    shut_down_when_terminated: True
    enable_file_logging: True
  managed_applications:
    planner:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:00:01" # 1 second * time scale factor
      is_scenario_time_step: True
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      is_scenario_time_status_step: False
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
      enable_file_logging: True
      configuration_parameters:
        scenario_day_freeze:        # See "Optional Freezes" section above for all modes
          enabled: true
          mode: "indefinite"        # or "timed" with duration: "0:02:00"
    appender:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:00:01" # 1 second * time scale factor
      is_scenario_time_step: True
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      is_scenario_time_status_step: False
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
      enable_file_logging: True
    simulator:
      time_scale_factor: 24 # 1 simulation day = 60 wallclock minutes
      time_step: "0:01:00" # 1 second * time scale factor
      is_scenario_time_step: True
      set_offset: True
      time_status_step: "0:00:10" # 10 seconds * time scale factor
      is_scenario_time_status_step: False
      time_status_init: "2019-03-01T23:59:59+00:00"
      shut_down_when_terminated: True
      manager_app_name: "manager"
      enable_file_logging: True

Configuration Parameters

Each managed application supports a configuration_parameters block in sos.yaml under execution.managed_applications.<app_name>. These parameters control application-specific behavior beyond the standard NOS-T timing configuration.

Planner

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| budget | int | Yes | 50 | Maximum number of observation cells the linear programming solver can select per planning cycle. Controls the trade-off between observation coverage and resource constraints. |
| norad_id | int | Yes | — | NORAD catalog ID of the satellite used for orbit propagation (e.g., 38337 for GCOM-W1). Used to fetch TLE data from Space-Track.org. |
| first_day_trigger | bool | No | false | When true, triggers the planner on the first simulation time tick even if no scenario day change has occurred. Useful for ensuring the first simulation day is processed. |
| scenario_day_freeze | object | No | (disabled) | Controls freeze behavior at scenario day boundaries. See Optional Freezes above for details. |

Example:

managed_applications:
  planner:
    # ... standard NOS-T timing parameters ...
    configuration_parameters:
      budget: 50
      norad_id: 38337
      first_day_trigger: True
      scenario_day_freeze:
        enabled: true
        mode: "indefinite"
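The budget constraint can be illustrated with a simplified selection. The real planner uses a linear-programming solver; for a pure reward-maximization objective with only a cardinality constraint, a greedy top-k choice gives the same result (function name illustrative):

```python
def select_cells(rewards: dict, budget: int) -> list:
    """Return up to `budget` cell IDs with the highest reward values."""
    ranked = sorted(rewards, key=rewards.get, reverse=True)
    return ranked[:budget]
```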

Appender

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| set_expiration_time | list[bool] | Yes | — | Single-element list (e.g., [true]). When true, observations are assigned an expiration date based on expiration_time. When false, observations never expire (set to a far-future date). |
| expiration_time | list[int] | Yes | — | Single-element list (e.g., [7]). Number of days after the planner's selection time before an observation expires. Expired observations are excluded from the active set sent to the simulator. Only meaningful when set_expiration_time is [true]. |

Note: These parameters use single-element lists due to the YAML parsing convention — values are accessed as config.rc.application_configuration['set_expiration_time'][0].

Example:

managed_applications:
  appender:
    # ... standard NOS-T timing parameters ...
    configuration_parameters:
      set_expiration_time:
        - true
      expiration_time:
        - 7
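The expiration behavior described above can be sketched as below, honoring the single-element-list convention (function name and far-future sentinel are illustrative):

```python
from datetime import datetime, timedelta

FAR_FUTURE = datetime(9999, 1, 1)  # stand-in for "never expires"

def expiration_date(selected_at: datetime, params: dict) -> datetime:
    """Compute an observation's expiration from the appender's
    configuration_parameters (single-element-list convention)."""
    if params["set_expiration_time"][0]:
        return selected_at + timedelta(days=params["expiration_time"][0])
    return FAR_FUTURE
```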

Simulator

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| constellation_capacity | list[float] | Yes | — | Single-element list (e.g., [1.0]). Probability threshold (0.0–1.0) for collecting an observation. Each simulation day, a random value is generated; if it falls at or below this threshold, the satellite collects the observation. A value of 1.0 means observations are always collected; 0.5 means ~50% collection rate. |
| observation_interval | list[int] | Yes | — | Single-element list (e.g., [30]). Minimum time interval in seconds between consecutive observation opportunities for a satellite. Controls how frequently the satellite can attempt collections along its ground track. |

Example:

managed_applications:
  simulator:
    # ... standard NOS-T timing parameters ...
    configuration_parameters:
      constellation_capacity:
        - 1
      observation_interval:
        - 30
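The daily capacity check described above can be sketched as follows (function name illustrative; a seeded `random.Random` is passed in to keep the draw reproducible):

```python
import random

def collects_observation(capacity: float, rng: random.Random) -> bool:
    """The satellite collects an observation when a uniform draw falls
    at or below the constellation_capacity threshold."""
    return rng.random() <= capacity
```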

.env

Localhost

In the sos directory, create a .env file with the following content, specific to your event broker running on localhost:

USERNAME="admin"
PASSWORD="admin"
SPACETRACK_USERNAME="<Your Space-Track.org Username>"
SPACETRACK_PASSWORD="<Your Space-Track.org Password>"

Note: SpaceTrack credentials are required for fetching satellite Two-Line Element (TLE) data. Create a free account at Space-Track.org to obtain your credentials.

Cloud-Hosted

In the sos directory, create a .env file with the following content to access the event broker hosted on the Science Cloud:

  • Service Account:

    CLIENT_ID="<Request from NOS-T Operator>"
    CLIENT_SECRET_KEY="<Request from NOS-T Operator>"
    SPACETRACK_USERNAME="<Your Space-Track.org Username>"
    SPACETRACK_PASSWORD="<Your Space-Track.org Password>"
  • User Account:

    USERNAME="<Keycloak Username>"
    PASSWORD="<Keycloak Password>"
    CLIENT_ID="<Request from NOS-T Operator>"
    CLIENT_SECRET_KEY="<Request from NOS-T Operator>"
    SPACETRACK_USERNAME="<Your Space-Track.org Username>"
    SPACETRACK_PASSWORD="<Your Space-Track.org Password>"

Conda

Activate the Conda environment:

conda activate nost

Run each application in a separate terminal, making sure to start the manager application first:

  • Terminal 1:
python3 src/manager/main.py
  • Terminal 2:
python3 src/planner/main.py
  • Terminal 3:
python3 src/appender/main.py
  • Terminal 4:
python3 src/simulator/main.py

Below is an example:


Terminal running all four SOS applications.

Docker

The SOS applications can be run using Docker Compose.

  1. Change directory to your cloned repo (i.e. sos/), which will be the working directory for this execution.

  2. Confirm prerequisites:

  3. Execute the containers using docker-compose:

    docker-compose up -d
    

    Notes:

    • To confirm Docker containers are running, run the command: docker ps. You should see four containers listed: manager, planner, appender, and simulator.
    • The manager container includes a health check (pgrep on its process, with a 15-second start period). The planner, appender, and simulator containers use depends_on: condition: service_healthy, so Docker Compose will wait until the manager is healthy before starting them.

    Environment Variables

    The following environment variables can be set per-container in the environment block of docker-compose.yml:

    | Variable | Default | Description |
    |---|---|---|
    | ENABLE_UPLOADS | true | Controls whether applications upload output files to the S3 bucket. Set to false to skip S3 uploads and only write files locally. The default docker-compose.yml sets this to false for all applications; set it to true when running with AWS access and you want outputs uploaded to S3. |
    | DOWNLOAD_CHECK_INTERVAL | — | Interval in seconds between S3 download retry attempts (planner only). |
    | DOWNLOAD_MAX_ATTEMPTS | — | Maximum number of S3 download retry attempts (planner only). |
  4. To shut down the Docker containers:

    docker-compose down
    

Cesium Visualization

Setting up a Cesium visualization requires that you (i) set up an event broker on localhost, (ii) acquire a Cesium access token, (iii) create an env.js file with credentials, and (iv) run an HTTP server to expose local files. Each of these steps is covered below.

Event Broker on Local Host

To set up an event broker on localhost, follow the directions here.

Cesium Access Token

  1. Sign in or create an account at: https://cesium.com/ion/signin/tokens.

  2. Create a new access token by clicking the blue "Create token" button located in the upper left corner.

  3. Add the Asset "Blue Marble Next Generation July 2004" to your assets: https://ion.cesium.com/assetdepot/3845?query=Blue%20Mar. Click the blue 'Add to my assets' button located in the bottom right corner.

Env.js File

  1. In the sos/src/visualization directory, create a file named env.js with the following contents:

    var HOST="localhost"
    var RABBITMQ_PORT=15670
    var USERNAME=""  // Your RabbitMQ username
    var PASSWORD=""  // Your RabbitMQ password
    var TOKEN=""     // Cesium access token

    Note: Add your Cesium access token that you generated in the Cesium Access Token Section.

HTTP Server

  1. In the sos/src/visualization directory, run an HTTP server:

    python3 -m http.server 7000
  2. In your web browser, navigate to http://localhost:7000

  3. Finally, click on cesium_visualization.html. You should see a Cesium visualization web application running on localhost.
