NGINX Auto Configure from S3

This technical article will break down how to automatically configure a custom build of NGINX (using Alpine Linux) that runs in Fargate.

Why? Well, we want to enable encrypted data in transit through the stack of the AWS Fargate solution we are deploying. Our entry point is an AWS Application Load Balancer accepting traffic on port 443 for TLS communication. We reference an ACM certificate stored in our Account to configure that listener, as sketched below.
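
For context, a minimal sketch of that listener, assuming hypothetical MyAppLoadBalancer, MyAppTargetGroup, and parCertificateArn names:

MyAppHttpsListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    LoadBalancerArn: !Ref MyAppLoadBalancer
    Port: 443
    Protocol: HTTPS
    # The ACM certificate stored in our Account
    Certificates:
      - CertificateArn: !Ref parCertificateArn
    DefaultActions:
      - Type: forward
        TargetGroupArn: !Ref MyAppTargetGroup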

From there, we have a Task running in a Service/Cluster within Fargate. This task is a RESTful Web Service. We would prefer not to configure that task to process TLS itself, since doing so would require unnecessary changes to the containers.

So, what we will do is leverage NGINX as a reverse proxy and use S3 to automatically configure NGINX on the fly as the container is launched! We accomplish this by extending the NGINX Alpine Linux container and adding a script that downloads the configuration from S3 on launch, and voilà, done.

S3 Configuration

Part of our CloudFormation deployment script deploys a config bucket that we put configuration elements in. We will upload the following objects under the key prefix nginx-configuration (a sample upload follows the list):

  • nginx-configuration/entire-chain.crt: The chain of TLS certificates we will use for configuring NGINX
  • nginx-configuration/server.key: The TLS private key we will use for configuring NGINX
  • nginx-configuration/api.conf: The NGINX configuration itself
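
A minimal sketch of that upload, assuming a hypothetical bucket name and KMS key ARN, with the files in the current directory (the --sse flags encrypt each object with our KMS key, which is why kms:Decrypt is needed later):

# Hypothetical values for illustration
BUCKET="myapp-config-bucket"
KMS_KEY_ARN="arn:aws:kms:us-east-1:111111111111:key/example"

# Upload each object under the nginx-configuration/ prefix, encrypted with KMS
for FILE in entire-chain.crt server.key api.conf; do
    aws s3 cp "${FILE}" "s3://${BUCKET}/nginx-configuration/${FILE}" \
        --sse aws:kms --sse-kms-key-id "${KMS_KEY_ARN}"
done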

Once these have been deployed, our Task Execution Role will need s3:GetObject permission on the bucket. Additionally, it will need kms:Decrypt permission on the KMS key you use to protect the files:

Note: We use exports for cross-stack referencing of parameters

S3 Permissions

- PolicyName: s3-access-configuration
  PolicyDocument:
    Statement:
      - Effect: Allow
        Action:
          - s3:GetObject
        Resource:
          - !Sub
            - "${__BucketPath__}/*"
            - __BucketPath__:
                Fn::ImportValue:
                  !Sub "${parRootStackName}:Core:S3ConfigBucket:Arn"

KMS Permissions

- PolicyName: kms-access-configuration
  PolicyDocument:
    Statement:
      - Effect: Allow
        Action:
          - kms:Decrypt
        Resource:
          - Fn::ImportValue: !Sub "${parRootStackName}:Core:S3ConfigKey:Arn"

S3 File Download

What we do here is take advantage of the fact that the NGINX Docker image runs scripts located in the /docker-entrypoint.d folder at startup. You can drop an executable in there and the boot process will run it (provided it is executable, which we handle in the Dockerfile below). In this case, we are going to create the file s3-configuration.sh.

We are going to drive this file off environment variables, ensuring that we can have the most flexibility in configuring NGINX.

The variables are:

  • ENV_CONFIG_BUCKET: The bucket in which the configuration files reside
  • ENV_PATH_DIRECTORIES: The directory where we will write the certificate and private key
  • ENV_CERT_KEY: The path in S3 to the certificate chain (leaf + roots)
  • ENV_PRIVATE_KEY: The path in S3 to the private key file
  • ENV_NGINX_KEY: The path in S3 to the NGINX configuration we are using

In that file, add:

#!/bin/sh

# Source bucket
BUCKET="${ENV_CONFIG_BUCKET}"

# Where we write certificates to
PATH_DIRECTORIES="$ENV_PATH_DIRECTORIES"

# Variables
CERT_KEY="$ENV_CERT_KEY"
PRIVATE_KEY="$ENV_PRIVATE_KEY"
NGINX_KEY="$ENV_NGINX_KEY"

# Ensure directories exist that we will write to
mkdir -p "${PATH_DIRECTORIES}"

# Certificate 
aws s3api get-object --bucket "${BUCKET}" \
    --key "${CERT_KEY}" \
    "${PATH_DIRECTORIES}/tls-certificate.crt" \
     > /dev/null 2>&1

# Key
aws s3api get-object --bucket "${BUCKET}" \
    --key "${PRIVATE_KEY}" \
    "${PATH_DIRECTORIES}/tls-certificate.key" \
     > /dev/null 2>&1

# NGINX configuration 
aws s3api get-object --bucket "${BUCKET}" \
    --key "${NGINX_KEY}" \
    "/etc/NGINX/NGINX.conf" \
     > /dev/null 2>&1
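
The contents of api.conf are up to you; as a rough sketch, it is a reverse proxy configuration that terminates TLS with the downloaded certificate and key and forwards traffic to the Web Service task (the upstream address and port here are hypothetical; containers in a Fargate task share localhost):

# Downloaded over /etc/nginx/nginx.conf, so it must be a complete configuration
events {}

http {
    server {
        listen 443 ssl;

        # Certificate and key written by s3-configuration.sh
        ssl_certificate     /usr/local/custom_certs/tls-certificate.crt;
        ssl_certificate_key /usr/local/custom_certs/tls-certificate.key;

        location / {
            # Forward decrypted traffic to the RESTful Web Service
            proxy_pass http://localhost:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }
}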

Docker Configuration

The step we dislike, because it adds bloat, is installing Python 3 and the AWS CLI onto Alpine Linux. This grows the image from 22MB to 180MB. Our goal is to migrate to a shell-script-only approach in the future; the goal right now is "get it working."

# The base image we'll be pulling, NGINX on Alpine
FROM nginx:alpine

# Install openssl, python, curl, and unzip (the CLI bundle installer needs unzip)
RUN apk add --update --no-cache openssl python3 curl unzip

# Set link to python
RUN ln -sf /usr/bin/python3 /usr/bin/python

# Install AWS CLI
RUN curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" \
    && unzip awscli-bundle.zip \
    && ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

# Auto download our S3 files
COPY s3-configuration.sh /docker-entrypoint.d/s3-configuration.sh

# The entrypoint only runs executable scripts
RUN chmod +x /docker-entrypoint.d/s3-configuration.sh

# Exposed ports
EXPOSE 443
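
To sanity-check the image locally before pushing, a build and run along these lines works, assuming hypothetical bucket/key values and AWS credentials exported in your shell:

# Build and tag the image
docker build -t monkton/nginx-https-proxy:fargate .

# Run with the same variables the task definition will set; credentials
# are passed through so the container can download from S3
docker run --rm -p 443:443 \
    -e ENV_CONFIG_BUCKET="myapp-config-bucket" \
    -e ENV_PATH_DIRECTORIES="/usr/local/custom_certs" \
    -e ENV_CERT_KEY="nginx-configuration/entire-chain.crt" \
    -e ENV_PRIVATE_KEY="nginx-configuration/server.key" \
    -e ENV_NGINX_KEY="nginx-configuration/api.conf" \
    -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
    monkton/nginx-https-proxy:fargate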

Task Definition

Finally, our task definition. Here we set the environment variables that drive the configuration of the container and have you off and running.

- Name: "nginx"
  RepositoryCredentials: 
    CredentialsParameter: !Ref MyAppFormationDockerSecret
  Image: !Sub "monkton/nginx-https-proxy:fargate"
  PortMappings:
    - ContainerPort: 443
      HostPort: 443
      Protocol: tcp
  Environment: 
    # Bucket where config items are
    - Name: ENV_CONFIG_BUCKET
      Value: 
        Fn::ImportValue:
          !Sub "${parRootStackName}:Core:S3ConfigBucket"
    - Name: ENV_CERT_KEY
      Value: "nginx-configuration/entire-chain.crt"
    - Name: ENV_PRIVATE_KEY
      Value: "nginx-configuration/server.key"
    - Name: ENV_NGINX_KEY
      Value: "nginx-configuration/api.conf"
    - Name: ENV_PATH_DIRECTORIES
      Value: "/usr/local/custom_certs"
  LogConfiguration:
    LogDriver: awslogs
    Options:
      awslogs-region: !Ref AWS::Region
      awslogs-group: !Ref MyAppAdminNGINXLogGroup
      awslogs-stream-prefix: "MyApp-NGINX"

Note the RepositoryCredentials. Our containers are privately hosted on Docker Hub and require login credentials, which are stored in an AWS::SecretsManager::Secret value.

MyAppFormationDockerSecret:
  Type: AWS::SecretsManager::Secret
  Properties:
    Name: !Sub "MyAppDockerLogin-${AWS::StackName}"
    KmsKeyId: 
      Fn::ImportValue: !Sub "${parRootStackName}:Core:SecretsKey:Arn"
    Description: "Login for Account"
    SecretString: !Sub '{ "username":"${parMyAppCloudFormationDockerUsername}", "password":"${parMyAppCloudFormationDockerPassword}" }'
    Tags:
      - Key: Name
        Value: !Join [ '-', [ "MyApp-secret-docker" , !Ref parRootStackName , Ref: "AWS::Region" ] ]

You will need to grant the role referenced by your Service's ExecutionRoleArn the ability to read the secret itself and to leverage the KMS key to decrypt it.

MyAppFormationECSServiceExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Statement:
      - Effect: Allow
        Principal:
          Service:
            - ecs-tasks.amazonaws.com
        Action:
          - sts:AssumeRole
    Path: "/"
    ManagedPolicyArns:
      - !Sub "arn:${AWS::Partition}:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
    Policies:
    - PolicyName: secretsmanager-access
      PolicyDocument:
        Statement:
        - Effect: Allow
          Action:
            - secretsmanager:GetSecretValue
          Resource:
            - !Ref MyAppFormationDockerSecret
    - PolicyName: secretsmanager-kms
      PolicyDocument:
        Statement:
        - Effect: Allow
          Action:
            - kms:Decrypt
          Resource:
            - Fn::ImportValue: !Sub "${parRootStackName}:Core:SecretsKey:Arn"