What is an IDIQ?

An IDIQ contract provides an indefinite quantity of supplies or services during a fixed period of time. Sometimes called "Task Order" or "Delivery Order" contracts, IDIQ contracts are a subtype of Indefinite Delivery Contract (IDC), which is a "vehicle that has been awarded to one or more vendors to facilitate the delivery of supply and service orders."

Read More...

The MATTER IDIQ

Monkton, the industry leader in rapid solutions-based outcomes, was recognized in 2020 by the DoD with the issuance of a government-wide Indefinite-Delivery/Indefinite-Quantity contract. Mobile Apps to the Tactical Edge Ready (MATTER) supports secure, edge-based mobility and is available to all Federal government agencies to issue task orders against—enabling rapid acquisition to achieve results faster.

Read More...

Automated Multi-Region deployments in AWS: Lambda

Amazon's Lambda service is perhaps one of our favorite AWS services, because it lets you just hand code over to AWS and let AWS deal with the complexities of executing it. Best of all, it is cost-effective: instead of servers running 24/7 and wasting resources and money, you are only charged when your code needs to be executed. I'd even argue it's more environmentally friendly, since you only burn what you use!

This does come at some "cost." Lambda executions can be more expensive in aggregate compared to other compute options. But the benefit on the flip side is that you don't have to worry about managing servers (even with ECS on EC2 you still need to monitor servers), patching, etc. So, is that a cost you'd be willing to pay?

Debugging can be a bit more difficult. Getting your application running and tested on the first go can be a bit painful if something isn't working as expected. Once you get it cracked the first time, the deployments that follow are a breeze.

But, again, this all falls back to making life easier. Our goal is to not have to manage servers, ever. Lambda (and, to be fair, Fargate) enable this.
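
As a concrete illustration, here is a minimal sketch of a Lambda function defined in CloudFormation. The function name, runtime, and execution role are hypothetical placeholders, not something from our stacks:

# Minimal sketch of a Lambda function in CloudFormation. The name,
# runtime, and role below are hypothetical placeholders.
DemoFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: demo-hello
    Runtime: python3.9
    Handler: index.handler
    # Execution role the function assumes; it must trust lambda.amazonaws.com
    Role: !GetAtt DemoFunctionRole.Arn
    Code:
      # Inline code keeps the sketch self-contained
      ZipFile: |
        def handler(event, context):
            # You are only billed while this handler runs
            return {"statusCode": 200, "body": "hello"}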

Read More...

AWS CodePipeline Random Musings

We have laid out the following accounts for our core DSOP architecture:

  • DSOP CodePipelines
  • DSOP CodeCommit
  • DSOP Tooling

Each of these provides us with different outcomes and lets us limit access separately for developers and ops. Traditionally, many would combine the CodeCommit and CodePipelines accounts into one. This is an OK strategy, but it could potentially cause some issues with separation of duties. Our goal is to break them apart and have CodeCommit and CodePipeline reside in different accounts.

There are a lot of "gotchas" in the process of developing pipelines. For one, creating a pipeline that works on multiple branches is basically impossible with CodeCommit feeding directly into CodePipeline. You need a 1:1 mapping of pipeline to branch. So, if you are using a branch development strategy, you will create a lot of pipelines. This becomes a tangled mess when those pipelines still exist and need to be updated.

Breaking CodeCommit and CodePipeline into separate accounts helped enable this. Our strategy follows below.
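
To make the cross-account wiring concrete, here is a hedged sketch of a CodePipeline source stage pulling from a CodeCommit repository that lives in another account. The repository name, branch, account ID, and role ARN are hypothetical; the key detail is the RoleArn on the action, which the pipeline assumes in the CodeCommit account:

# Sketch of a source stage pulling from CodeCommit in a different account.
# Repository, branch, account ID, and role ARN are hypothetical placeholders.
- Name: Source
  Actions:
    - Name: PullSource
      ActionTypeId:
        Category: Source
        Owner: AWS
        Provider: CodeCommit
        Version: '1'
      Configuration:
        RepositoryName: dsop-example-app
        BranchName: main
      OutputArtifacts:
        - Name: SourceOutput
      # Role in the CodeCommit account that grants this pipeline read access
      RoleArn: arn:aws-us-gov:iam::111111111111:role/dsop-codecommit-access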

Read More...

Automated Multi-Region deployments in AWS: DynamoDB

Virtually all of our data storage is in DynamoDB. We have chosen DynamoDB over other options because there is literally nothing to manage: it is Platform as a Service ("PaaS") up and down.

You define a table and its indices, and you are off and running.

For those that don't know, DynamoDB is a "NoSQL" database. Think of it as a large hash table that provides single-digit millisecond response times. It scales so well that Amazon itself uses it to drive the Amazon Store.

In 2017, Amazon launched "Global Tables" for DynamoDB. Global Tables let you define a table in one or many regions, and DynamoDB automatically syncs writes to the other regions without any additional work on your part.

Thus, you can easily have multi-region capabilities with virtually no overhead. We'll dig into DynamoDB in this article, focusing only on Global Tables.
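
As a preview, here is a minimal sketch of a Global Table declared with the AWS::DynamoDB::GlobalTable resource. The table name, key schema, and regions are hypothetical placeholders:

# Sketch of a DynamoDB Global Table replicated across two GovCloud regions.
# Table name, key, and regions are hypothetical placeholders.
ExampleGlobalTable:
  Type: AWS::DynamoDB::GlobalTable
  Properties:
    TableName: example-items
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: pk
        AttributeType: S
    KeySchema:
      - AttributeName: pk
        KeyType: HASH
    # Streams are required so writes can replicate between regions
    StreamSpecification:
      StreamViewType: NEW_AND_OLD_IMAGES
    # Each replica is a full copy of the table; DynamoDB handles the sync
    Replicas:
      - Region: us-gov-west-1
      - Region: us-gov-east-1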

Read More...

Automated Multi-Region deployments in AWS: Gotchas

"Gotcha" maybe a bit over the top, but perhaps "caveats" is a better term. Leveraging StackSets alone can cause some order of operation issues, as well as adding multi-region on top of it.

We will discuss these caveats in more depth in other articles, but wanted to touch on StackSets up front, since they underpin everything we will do.

With StackSets applied to OUs, automated deployment works like a charm, most of the time. As we laid out in the Intro, we deploy all of our IaC as StackSets into OU targets. We do this to automate deployments and ensure we have a consistent deployment throughout all of our accounts for an application.
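
For illustration, here is a hedged sketch of a service-managed StackSet targeting an OU with auto-deployment enabled. The StackSet name, OU ID, regions, and template URL are hypothetical placeholders:

# Sketch of a service-managed StackSet deployed to an OU target.
# Names, IDs, and the template URL are hypothetical placeholders.
ExampleStackSet:
  Type: AWS::CloudFormation::StackSet
  Properties:
    StackSetName: dsop-example-app
    PermissionModel: SERVICE_MANAGED
    # New accounts added to the OU automatically receive the stack
    AutoDeployment:
      Enabled: true
      RetainStacksOnAccountRemoval: false
    Capabilities:
      - CAPABILITY_NAMED_IAM
    StackInstancesGroup:
      - DeploymentTargets:
          OrganizationalUnitIds:
            - ou-xxxx-xxxxxxxx
        Regions:
          - us-gov-west-1
          - us-gov-east-1
    TemplateURL: https://example-bucket.s3.amazonaws.com/app-template.yaml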

This also enables us to create private tenants for customers that only they can access, with minimal overhead.

Our entire cloud journey is about removing overhead, reducing maintenance needs, and building more awesome things.

Read More...

Automated Multi-Region deployments in AWS

The tides have changed on the resiliency of applications that reside in the cloud. We were told for years that "be multi-Availability Zone" was the means to have resilient cloud apps. But the outages that have hit every major Cloud Service Provider ("CSP") recently show that it isn't always a sufficient strategy if your aim is extremely high availability.

So, we need to think bigger. But, this comes at increased cost and increased complexity. The fact is, there just aren't a whole lot of organizations doing multi-region deployments—let alone ones talking about it. This series hopes to assist in filling that gap.

We decided to author a series of blog posts on how to build resilient cloud applications that span multiple regions in AWS, specifically AWS GovCloud. Our goal here is uptime: the ability to both push data to and retrieve data from a cloud application. This series will touch on several things we are focusing on: building web apps and web APIs that process data.

Most of our applications use several AWS core technologies (listed below). We have made a concerted effort to migrate to pure Platform as a Service ("PaaS") where we can. We want to avoid IaaS entirely, as it requires additional management of resources. We can't tell you how all of this will work with lift-and-shift, as our engineering is centered around cloud native services.

The goal for us, and the reason for the cloud, is letting someone else do the hard work. For our cloud-based solutions, we do not use Kubernetes ("k8s") at all. We find the overhead too cumbersome when we can let AWS do all the management for us. When we cut over to edge computing, k8s becomes a viable solution.

At a high level, we use the following services to build and deliver applications:

  • AWS Lambda and/or AWS ECS Fargate for compute
  • AWS DynamoDB for data storage (Global Tables)
  • AWS S3 for object storage
  • AWS Kinesis + AWS S3 for long-term application logging to comply with DoD SRG and FedRAMP requirements (a sketch follows this list)
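
To illustrate that last bullet, here is a hedged sketch of a Kinesis Data Firehose delivery stream writing application logs to S3 for long-term retention. The stream name, bucket, role, and buffering values are hypothetical placeholders:

# Sketch of a Firehose delivery stream archiving logs to S3.
# Names, ARNs, and buffering values are hypothetical placeholders.
LoggingDeliveryStream:
  Type: AWS::KinesisFirehose::DeliveryStream
  Properties:
    DeliveryStreamName: dsop-app-logs
    DeliveryStreamType: DirectPut
    S3DestinationConfiguration:
      BucketARN: arn:aws-us-gov:s3:::dsop-example-log-archive
      # Role that Firehose assumes to write into the bucket
      RoleARN: arn:aws-us-gov:iam::111111111111:role/dsop-firehose-logging
      # Batch records before flushing objects to S3
      BufferingHints:
        IntervalInSeconds: 300
        SizeInMBs: 5
      CompressionFormat: GZIP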

Now, there are a lot of applications that may need more services. Things like Athena or QuickSight may be necessary, but we consider those (at least for the solutions we are building) to be ancillary services to the core applications. For instance, in these applications, if you can't get to QuickSight to visualize some data for an hour, it's not that big of a deal (at least for this solution). But if you can't log data from the field in real time, that is a big deal.

Read More...

Custom CloudFormation Resource for looking up config data

This project, CloudFormation Lookup, enables you to build CloudFormation templates that pull configuration data from DynamoDB as a dictionary. Consider the scenario where you are using CodeBuild to deploy apps into a series of AWS accounts you control. Each of those accounts may have differing configuration data, depending on the intent of the account. For instance, perhaps you deploy an application across segregated tenants for customers. Each of those tenants may have different configurations, like DNS host names.

This project can be found on GitHub.

As of right now, CloudFormation has no native means to pull that data on a per-account basis. To solve this problem, we have developed a custom CloudFormation resource that enables you to define a resource in your CloudFormation template, as below:

# Looks up properties for automated deployments that we store within DynamoDB as configuration.
# The properties are looked up by key. The return values could be strings, string lists, 
# numbers, or maps. Maps enable you to put a bunch of values in the resulting data structure. 
PropertiesLookupTest:
  Type: Custom::PropertiesLookup
  Version: '1.0'
  Properties:
    # Identifies the Lambda function that will be invoked for this custom resource
    ServiceToken: !Sub "arn:${AWS::Partition}:lambda:${AWS::Region}:${AWS::AccountId}:function:dsop-tools-lookup"
    # This is the lookup value that we will attempt to find from 
    # storage. We key this value on the name of the application PLUS
    # the account identifier. We may want to deploy everything with the same
    # configuration—or potentially have different tenants which have account
    # specific configurations. 
    Value: !Sub "/website/${AWS::AccountId}/${AWS::Region}"
    # This looks for the default value that may be stored under the key.
    # This is useful for auto-deployed StackSet instances in AWS.
    DefaultLookupValue: "/website/default"
    # The default value, in JSON format, used if neither the value nor the
    # default lookup can be resolved
    DefaultValueAsJson: "{ \"domainName\": \"smoke-test.monkton.io\" }"
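
Assuming the backing Lambda returns the parsed dictionary in the custom resource's Data payload, the looked-up values can then be referenced elsewhere in the template with Fn::GetAtt. For example (the "domainName" key is hypothetical):

# Hypothetical usage: read the "domainName" key returned by the lookup
DomainName: !GetAtt PropertiesLookupTest.domainName
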
Read More...

Automating iOS App Development with CI/CD Pipelines and macOS Build Servers

As part of our series on building iOS apps, we will walk through configuring a build server for doing so. This build server can also be used for building macOS apps.

This write-up is not intended to solve all your CI/CD issues for building iOS apps; rather, it describes a "bare bones" build server that will help you scale your DevSecOps pipelines for mobile.

To be up front about this, automating builds on macOS has a few pain points. In the pursuit of a more secure OS, macOS tends to be on the difficult side for build automation.

For instance, configuring a "headless" build server with FileVault enabled is impossible at this point, so you cannot VNC into a server sitting in a rack without first unlocking it locally. Setting up "auto login" on macOS with FileVault also will not work, because FileVault does not allow it. So, one must take these issues into account.

Without logging in, you cannot (in this build server setup) run the GitLab Runners.

So, options can be limited depending on what you are attempting to do. To work around this, you may want to leave your macOS boot volume unencrypted and store all your data in an encrypted volume. This will enable the macOS build server to boot and auto-login so jobs can run.

For GitLab, you need no ingress point to access the build server, only egress to reach your GitLab repo. So, one could drop this box in a private subnet with some outbound egress and be somewhat comfortable with the security around it.
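
Once the runner is registered and a user session is logged in, jobs can be routed to the box with runner tags. Here is a hedged .gitlab-ci.yml sketch; the tag, project, and scheme names are hypothetical placeholders:

# Hypothetical job definition routed to the macOS runner via tags
build_ios:
  stage: build
  tags:
    - macos
  script:
    # Build the app with Xcode's command line tools
    - xcodebuild -project ExampleApp.xcodeproj -scheme ExampleApp -configuration Release build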

Automating a lot of these steps hasn't been easy; there are a lot of password and confirmation prompts that require a user to do something.

Read More...

Cross Account DynamoDB Access

We at Monkton use DynamoDB a lot for storage. It is extremely fast and scalable. A lot of the work we do is in AWS GovCloud, so this post is geared toward that, but it is easily portable to other regions. We spent some frustrating time digging around to get this to work, and want to share lessons learned so you can avoid those headaches.

Defining the need

We are helping build a new set of services; part of our multi-account architecture is a centralized "Identity SaaS" service. While we have microservices available in that account to read/write to the "Identity SaaS" DynamoDB tables, we opted to let other trusted services and accounts read/write directly. This was simply a performance choice on our end to speed things up. We wanted to avoid making an HTTPS request, waiting for it to do its thing in DynamoDB, and returning, when we could do it directly using the same logic.

Many considerations

Part of configuring this is understanding where and which services we will be using. For this project, we are using Lambda and ECS Fargate to deploy backend services. For the purposes of this demo, we are looking at Fargate, but the lessons apply to Lambda as well. Part of that is following "Best Practices" and deploying these services into VPCs with private subnets.
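
At the heart of this setup is a role in the "Identity SaaS" account that trusted accounts can assume. Here is a hedged CloudFormation sketch; the account IDs, role name, and table name are hypothetical placeholders:

# Sketch of a cross-account role granting DynamoDB access.
# Account IDs, role name, and table name are hypothetical placeholders.
CrossAccountDynamoRole:
  Type: AWS::IAM::Role
  Properties:
    RoleName: identity-dynamo-access
    # Trust policy: let the consuming account assume this role
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            AWS: arn:aws-us-gov:iam::222222222222:root
          Action: sts:AssumeRole
    Policies:
      - PolicyName: dynamo-read-write
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - dynamodb:GetItem
                - dynamodb:PutItem
                - dynamodb:Query
              Resource: arn:aws-us-gov:dynamodb:us-gov-west-1:111111111111:table/identity-users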

Read More...