Migration to GitLab

We at Monkton began working closely with the GitLab federal team late in 2018. We chatted with their fantastic federal sales and SA teams and decided to “pull” (har har) GitLab into the Mission Mobility offering.

Many people are still confused by GitLab… most were probably more familiar with GitHub, which was acquired by Microsoft earlier in 2018. But that is quickly changing.

We were big GitHub users, but with GitHub you only get part of the equation. We had to string Jenkins together with it to perform several functions, and it became a headache on some levels to manage it all.

GitLab, for us, has become so much more. Beyond source control, the continuous integration and continuous delivery (CI/CD) capabilities integrated into the product are unmatched.

We are now even using GitLab to house our HR information, board meetings, the whole gamut.

Unique Requirements

The decision to migrate to GitLab, for us, was about eating our own dog food: using the same tooling we are going to help customers with. We decided on GitLab Self-Hosted Ultimate, which includes every feature for $99 per user per month. With everything GitLab brings to the table, it is moderately priced.

You simply cannot deliver secure, enterprise-grade software without CI/CD.

Our customers are pretty unique. FedRAMP and the DoD SRG mean we need to play in that space too. We spent about three months building and configuring the infrastructure in Amazon Web Services (AWS) for FedRAMP HIGH. Leveraging AWS CloudFormation, Docker, and GitLab Ultimate, we were able to build out a foundation that anyone can spin up. More importantly, we performed disaster recovery — over and over again — until we were comfortable putting our IP in self-hosted GitLab.

We can tear down our entire infrastructure and rebuild it from backup in about thirty minutes.

While most development could easily happen at lower classification levels, FedRAMP HIGH gives us a unique advantage. FedRAMP HIGH maps more to NSS than to CUI/FOUO, meaning the solution stack we are using and delivering meets or exceeds what is typically required for development. This is covered by AWS's Services in Scope documentation for FedRAMP.

The core AWS components are RDS for PostgreSQL and ElastiCache for Redis, both of which are approved and ready for use at FedRAMP HIGH. EC2 is there for compute, of course, and S3 for persistent storage.
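As a rough sketch, pointing the Omnibus install at those external services happens in gitlab.rb; the endpoints and credentials below are placeholders, not our actual configuration:

    # /etc/gitlab/gitlab.rb -- use RDS and ElastiCache instead of the bundled services
    postgresql['enable'] = false
    gitlab_rails['db_adapter'] = 'postgresql'
    gitlab_rails['db_host'] = 'example.cluster-abc123.us-gov-west-1.rds.amazonaws.com'
    gitlab_rails['db_username'] = 'gitlab'
    gitlab_rails['db_password'] = 'CHANGE_ME'

    redis['enable'] = false
    gitlab_rails['redis_host'] = 'example.abc123.cache.amazonaws.com'
    gitlab_rails['redis_port'] = 6379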

Repeatable Results

Part of our drive to leverage GitLab is building a better and more secure DevSecOps pipeline. Our goal is repeatability and efficient delivery of software. One cannot achieve this result without building the proper DevOps pipeline.

The hodgepodge of tools we used to accomplish this before moving to GitLab was becoming cumbersome.

Monkton is a lean startup that isn’t looking to hire people for the sake of hiring people; automation is the key to our future success.

SOC 2

In 2019, Monkton will be starting a SOC 2 audit. SOC 2 will validate our internal processes for building and delivering software in a secure, repeatable manner.

SOC 2 is an in-depth audit of everything from management, the board of directors, and hiring practices to security practices, development tools, development processes, and privacy; the list goes on.

GitLab enables us to define the proper metrics, checks, and auditing to deliver software that conforms to SOC 2. It enables testing, workflows, oversight, and delivery — all in one package. This takes time and configuration, but it is well worth it in the end.

Docker

GitLab’s documentation often refers to deploying with its own packaging, but we prefer Docker for a litany of reasons. It is easier on our end to manage and configure. With CloudFormation, we can simply update a single field and voila — everything is updated.
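As an illustration (the hostname, ports, paths, and version tag below are placeholders), the container launch boils down to a single docker run, and the image tag is the one field our CloudFormation template parameterizes:

    # Launch GitLab EE in Docker; bumping the image tag upgrades the instance
    docker run --detach \
      --hostname gitlab.example.com \
      --publish 443:443 --publish 80:80 --publish 2222:22 \
      --name gitlab --restart always \
      --volume /srv/gitlab/config:/etc/gitlab \
      --volume /srv/gitlab/logs:/var/log/gitlab \
      --volume /srv/gitlab/data:/var/opt/gitlab \
      gitlab/gitlab-ee:11.7.0-ee.0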

This has some limitations in AWS for FedRAMP HIGH and the like. We have to deploy on EC2 with Auto Scaling until ECS is approved (at the time of writing it is in 3PAO review). We prefer PaaS whenever we can.

Disaster Recovery

Part of what any enterprise needs is disaster recovery, and this took a bit of time to get right. We are deploying into AWS, and GitLab has the ability to push backups to S3 automatically.

We accomplished this by configuring a cron job on the Docker container host. We created a command that invokes the GitLab admin function to perform the backup to S3 at a set interval (6 AM UTC). Additionally, we have to back up and restore the GitLab secrets file. This was a hard lesson to learn — it isn’t automatically included in the backup.
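A minimal sketch of that cron entry, assuming the container is named gitlab and using an illustrative bucket name (GitLab uploads the backup archive itself to S3 once the backup upload settings are configured in gitlab.rb; the secrets file has to be copied separately):

    # /etc/cron.d/gitlab-backup on the Docker host -- nightly backup at 06:00 UTC
    0 6 * * * root docker exec -t gitlab gitlab-rake gitlab:backup:create CRON=1
    # The secrets file is not included in the backup archive; copy it to S3 as well
    15 6 * * * root docker cp gitlab:/etc/gitlab/gitlab-secrets.json /tmp/gitlab-secrets.json && aws s3 cp /tmp/gitlab-secrets.json s3://example-gitlab-backups/gitlab-secrets.json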

So, restoring a GitLab instance takes three steps: 1) restore the backup from S3, 2) restore the secrets file (also from S3), and 3) reconfigure the Omnibus installation.
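In practice that looks roughly like the following; the bucket, backup timestamp, and paths are illustrative:

    # 1) Pull the backup archive and secrets file down from S3
    aws s3 cp s3://example-gitlab-backups/1548828000_2019_01_30_11.7.0-ee_gitlab_backup.tar /srv/gitlab/data/backups/
    aws s3 cp s3://example-gitlab-backups/gitlab-secrets.json /srv/gitlab/config/gitlab-secrets.json

    # 2) Restore inside the container (stop the services that touch the database first)
    docker exec -it gitlab gitlab-ctl stop unicorn
    docker exec -it gitlab gitlab-ctl stop sidekiq
    docker exec -it gitlab gitlab-rake gitlab:backup:restore BACKUP=1548828000_2019_01_30_11.7.0-ee

    # 3) Reconfigure the Omnibus install and restart so the restored secrets take effect
    docker exec -it gitlab gitlab-ctl reconfigure
    docker exec -it gitlab gitlab-ctl restart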

Runners

For CI/CD, GitLab has the concept of a “Runner,” which executes build actions. From testing and running scripts to building and delivery — Runners are the components of GitLab that perform the actions.

Runners are Linux (or Windows and macOS) instances that can host Docker images and execute other tasks. We leverage Runners with custom Docker containers. There are two main container images we have developed:

.NET Core

Rebar Server is built using .NET Core (2.1 currently), Microsoft’s latest version of .NET that can run on macOS, Linux, or Windows. We bake a few custom components into this image to build Rebar Server and other .NET Core-based products.

Android

Rebar’s Android SDK requires the latest and greatest Android SDK from Google, along with the Android NDK, to compile both the Kotlin and C/C++ code that comprises the Rebar Android SDK.

Runners, cont...

Both of these containers are pulled from Docker Hub (we will be migrating to AWS Elastic Container Registry). We’ve had some issues getting private registries working the way we intend, but we are working through them.

The Runner pulls down the image and builds the software inside that container (a baselined image) to produce repeatable results and builds.
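As a minimal .gitlab-ci.yml sketch of how a job pins one of these baselined images (the registry path, image name, and commands are illustrative, not our actual pipeline):

    # .gitlab-ci.yml -- each job runs inside the pinned container image
    stages:
      - build
      - test

    build:
      stage: build
      image: registry.example.com/monkton/dotnet-build:2.1
      script:
        - dotnet restore
        - dotnet build --configuration Release

    test:
      stage: test
      image: registry.example.com/monkton/dotnet-build:2.1
      script:
        - dotnet test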

For our Runners, we have also built out an AWS CloudFormation script. It configures the server, installs Docker, and automatically registers the Runner with our GitLab instance. Automation…
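The registration step in that script boils down to something like the following, where the URL and token are placeholders fed in as CloudFormation parameters:

    # Register the Runner with our GitLab instance using the Docker executor
    gitlab-runner register \
      --non-interactive \
      --url "https://gitlab.example.com/" \
      --registration-token "RUNNER_REGISTRATION_TOKEN" \
      --executor docker \
      --docker-image "docker:stable" \
      --description "aws-docker-runner"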

iOS Runners

iOS is a different beast. You cannot build iOS and macOS software on anything other than macOS. To accomplish this, we have procured a Mac Mini with 16GB of RAM to perform the builds.

Additionally, we have attached quite a few iPads and iPhones to the Mac Mini build server so we can perform automated UI testing on real devices. We will be folding in AWS Device Farm eventually, but this is an easy local solution. Having devices locally also enables us to test different configurations — such as MDM, iOS versions, jailbroken devices, etc.

Back to the Runner. We configure the Runner on macOS to use the shell executor, meaning the Runner registers with the GitLab host and pulls the project down to build locally.
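Registering that macOS Runner looks roughly like this (again, the URL and token are placeholders):

    # On the Mac Mini: use the shell executor so builds run directly on macOS
    gitlab-runner register \
      --non-interactive \
      --url "https://gitlab.example.com/" \
      --registration-token "RUNNER_REGISTRATION_TOKEN" \
      --executor shell \
      --tag-list "macos,ios" \
      --description "mac-mini-ios-runner"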

From there, we can script out the entire build process, perform unit testing, UI testing, etc.

Additionally, we are working on building out an MDM configuration for the macOS Runners. The goal there is to leverage Apple’s DEP to auto-configure our build servers. From configuring Xcode, to setting up the Runners, to installing code signing entitlements — we want to automate everything.

Conclusion

We hope to have everything migrated over, with full CI/CD, by the end of February. It isn’t a complex process, but reengineering a few things takes time.

GitLab will be a core component of Mission Mobility and ensures that software will be delivered in a repeatable fashion with known results.