As part of an upcoming post on how we achieved Blue/Green functionality within AWS, I wanted to cover a technical hurdle we overcame this week: how to build and test a web app in AWS CodeBuild.
So what’s the big deal? AWS CodeBuild lets you use a whole bunch of curated containers that have all kinds of frameworks and tools built in. If that’s not enough for you out of the box, the buildspec gives you excellent control over running scripts, including installing packages in Ubuntu. And if none of that works for you, you can simply curate your own Docker image and publish it to the Elastic Container Registry (ECR) (though more on the limitations of ECR in CI/CD in another post) or to Docker Hub. With all these tools and approaches at my disposal, there’s no way I can’t have an AWS CodeBuild environment that meets my needs. But, spoiler alert, the environment isn’t my problem – it’s the build and test framework my developers have implemented.
My developers have built, implemented, and are running a manual build and integration process built on Angular, and here I go trying to automate the whole thing. There’s really no reason to reinvent the wheel: the build and deployment process they manually execute works, and they have intimate knowledge of how it works. I would much rather automate that process and hand it back to them to maintain than introduce some new technology or process they’re not familiar with. With that in mind, I need to take what they’ve done and make it Just Work™.
There were two primary barriers to successfully transitioning their process into AWS CodeBuild:
- The routes the web app utilizes differ between the development and public clouds, requiring a different environment.ts environment file for the Angular build in each cloud
- Their unit tests and code coverage via Karma rely on Chrome, and AWS CodeBuild is a console-based, GUI-less environment that runs with the least possible permissions available to a Linux user
Let’s tackle these one at a time.
Issue #1
This one was actually a fairly easy solve for us, since in the serverless micro service world we already deal with our development and public clouds differently. We do this by maintaining two separate buildspec files within the same GitHub repository. The CloudFormation stack that creates the CodeBuild project creates a Development and a Public cloud build project and references the appropriate buildspec file.
```json
"DevCodebuildProject": {
  "Type": "AWS::CodeBuild::Project",
  "Properties": {
    "Name": { "Fn::Sub": "${WebAppName}-dev" },
    "Description": { "Fn::Sub": "Manages building and deploying ${WebAppName} for dev cloud" },
    "ServiceRole": { "Fn::ImportValue": "CodebuildDevRoleArn" },
    "Artifacts": {
      "Packaging": "NONE",
      "Type": "CODEPIPELINE",
      "Name": { "Fn::Sub": "platform-webapp-${WebAppName}-dev" }
    },
    "TimeoutInMinutes": 60,
    "Environment": {
      "ComputeType": "BUILD_GENERAL1_SMALL",
      "PrivilegedMode": false,
      "Image": "aws/codebuild/nodejs:8.11.0",
      "Type": "LINUX_CONTAINER"
    },
    "Source": {
      "BuildSpec": "deployment/buildspec-dev.yml",
      "Type": "CODEPIPELINE"
    }
  }
}
```
In the snippet above, you can see in the Source object that we’re specifying the buildspec file lives in the “deployment” folder and is named buildspec-dev.yml. We do the same for the Public CodeBuild project, and then we can manage the builds differently for the individual clouds.
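For the public project, only a handful of properties change; for example, its Source block points at the public buildspec (the filename here is an assumption that mirrors the dev project’s naming):

```json
"Source": {
  "BuildSpec": "deployment/buildspec-public.yml",
  "Type": "CODEPIPELINE"
}
```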
The next step is to curate different environment files for use by Angular depending on which cloud we’re building for. This is done by creating two files:
- environment.dev.ts
- environment.public.ts
Within each of these files, we export the environment variables Angular uses during the build process. This is required because we use a different route structure for the development cloud (app.dev.api.mitel.io) than for the public cloud (app.api.mitel.io), and the app needs to know which route to call based on where it’s been deployed. Such a file looks similar to the following:
```typescript
export const environment = {
  production: true,
  signInUrl: 'https://mydomain.io/signin',
  adminUrl: 'https://admin.mydomain.io/2017-09-01',
  authUrl: 'https://auth.mydomain.io/2017-09-01/authsvc',
  cloud: 'public'
};
```
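The development counterpart, environment.dev.ts, looks much the same, swapping in the dev routes; the hostnames below are purely illustrative:

```typescript
// environment.dev.ts — a sketch; hostnames are illustrative, not the real dev routes
export const environment = {
  production: false,
  signInUrl: 'https://dev.mydomain.io/signin',
  adminUrl: 'https://admin.dev.mydomain.io/2017-09-01',
  authUrl: 'https://auth.dev.mydomain.io/2017-09-01/authsvc',
  cloud: 'dev'
};
```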
Now that I have these files, I can reference them in my .angular-cli.json file:
```json
"environments": {
  "dev": "environments/environment.dev.ts",
  "public": "environments/environment.public.ts"
}
```
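The payoff of this mapping is that application code never branches on cloud: it imports the single `environment` symbol, and the CLI swaps in the right file at build time. Here’s a minimal self-contained sketch of that pattern; the `authEndpoint` helper and its values are mine for illustration, not taken from the app:

```typescript
// Stand-in for the object the Angular CLI would supply from the selected environment file
const environment = {
  production: true,
  authUrl: 'https://auth.mydomain.io/2017-09-01/authsvc',
  cloud: 'public'
};

// App code derives its routes from the imported environment, never from a runtime check
function authEndpoint(path: string): string {
  return `${environment.authUrl}/${path}`;
}
```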
Phew, now that all that configuration is done, the last step is to reference these environments in my package.json file, to make them easier to run from the buildspec file:
```json
"scripts": {
  "ng": "ng",
  "build-public": "ng build --prod --env=public",
  "build-dev": "ng build --env=dev"
}
```
Boom! That’s it. Now I can run a very simple command in the buildspec to build differently for the development or public cloud:
```yaml
build:
  commands:
    - echo Starting app build process on `date`
    - npm run build-dev
```
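The public buildspec would be identical apart from the script it invokes — a sketch, assuming the same phase layout as the dev file:

```yaml
build:
  commands:
    - echo Starting app build process on `date`
    - npm run build-public
```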
Ok…I know I said that was the easy one…but it ended up being a few steps and a few files. Now that you have this structure in place, though, it’s extremely extensible to support different builds for a multitude of reasons: test implementations, builds for automation and scale testing, production builds…the list goes on.
Now, onto…
Issue #2
In case you don’t want to scroll up, the problem here is how to run Karma tests that require a Chrome browser in a non-interactive shell environment. Google saved me on this one and pointed me in the right direction. By no means do I take credit for this solution; it’s just always good to have the right solution plastered all over the internet to make it easier for folks to find in the future.
Step one was to install Chrome into my build environment as part of my buildspec. I could have done this as part of a custom curated CodeBuild image, but I don’t currently maintain my own images and, as a team of one, I’m not interested in starting.
Adding this to my buildspec was a quick and easy fix:
```yaml
install:
  commands:
    - echo Installing Chrome
    # The curated image doesn't ship Google's apt repository, so register it first
    - wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add -
    - echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list
    - apt-get -y update
    - apt-get -y install google-chrome-stable
```
Now I have Chrome installed, but how do I tell Karma to run Chrome in a way that doesn’t expect a GUI to be displayed?
This is achieved in your Karma configuration file (karma.conf.js) using a custom launcher. There are three keys to success here:
- Launching Chrome in headless mode
- Disabling GPU acceleration in Chrome
- Running outside of the Sandbox mode in Chrome
Running Chrome in headless mode ensures Chrome doesn’t try to display a GUI in an environment where none is available (or required). Disabling GPU acceleration is required since AWS CodeBuild has no GPU, and we don’t want Chrome thinking there’s one available to use. Finally, because AWS CodeBuild runs you with low permissions, we need to take Chrome out of its normal sandbox mode.
Everyone will automatically tell you that taking Chrome out of sandbox mode is very dangerous, and I 100% agree…when applied to user sessions. In this case, my environment is created purely for the purpose of this one build and is then deleted, never to be re-used, so I’m not terribly worried about whether tabs within the browser can spy on one another. What sandbox mode is, and what it does, is very simply described in this one-page comic from Google.
All three of these things can be easily achieved using the following 6-ish lines of code in the Karma config:
```javascript
browsers: ['chrome_headless'],
customLaunchers: {
  chrome_headless: {
    base: 'ChromeHeadless',
    flags: ['--disable-gpu', '--no-sandbox'],
    displayName: 'Chrome Headless'
  }
}
```
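Pulled together, a minimal CI-oriented karma.conf.js might look like the following sketch; everything outside the custom launcher (frameworks, singleRun, and so on) is assumed from a typical Angular CLI setup rather than taken from the app:

```javascript
// karma.conf.js — minimal CI sketch; settings outside customLaunchers are assumed defaults
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine', '@angular/cli'],
    browsers: ['chrome_headless'],
    customLaunchers: {
      chrome_headless: {
        base: 'ChromeHeadless',
        flags: ['--disable-gpu', '--no-sandbox'],
        displayName: 'Chrome Headless'
      }
    },
    // In CI we want one pass that exits with the test result instead of watching files
    singleRun: true
  });
};
```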
And we’re done. Enjoy your successful builds, tests, and code coverage of Angular apps in AWS CodeBuild – complete with differing builds based on the target implementation.
Hopefully if this doesn’t solve an immediate technical challenge, it gives you some ideas for the future of your CICD pipelines for Angular apps.
Catch you next time.
Cheers,
James.