Why and how to do DevSecOps

Simon Morse Practice Director Security at Versent

January 24, 2019

Security and DevOps terminology arguments, and how they relate to IT Transformation.

I have a long-running dispute over terminology with my boss. He is a purist and tries to use “SecDevOps” and “DevSecOps” to differentiate the different senses in which DevOps and Security can be successfully combined. (Note: “Security at Speed” is also popular, but equally confusing…)

The banter relates to whether by DevSecOps we mean the injection of Security into DevOps, or the application of DevOps techniques to Security Operations. I end up on the pragmatic side of the argument and am content with the reality that the industry has settled on “DevSecOps” to describe both.  The end target is the same – a unified IT shop covering all aspects of development, operations and security well.  The means by which the transformation takes place can vary, though, so let’s talk about how this works in practice.

DevOps Applied to Security Operations

In my previous article, I outlined the case for DevSecOps and gave an example of how it can work in a Continuous Integration pipeline through the use of metrics.  That model delegates security design and implementation to DevOps, while Security takes up the role of measuring and governing security quality.  This is squarely at the “inject security into DevOps” end of the spectrum, so to illustrate the alternative sense in which the term is used, let’s look at a standard Security Operations deployment pattern for network ingress / egress, and what happens to it under a DevOps approach.
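To make the metrics-driven governance idea concrete, here is a minimal sketch of a CI security gate. The scanner output format, severity names and thresholds are all hypothetical – the point is that Security sets the thresholds while the pipeline enforces them automatically:

```python
# Hypothetical CI security gate: fail the build when scanner findings
# breach thresholds that the Security team has agreed with Development.
CRITICAL_THRESHOLD = 0  # no critical findings tolerated
HIGH_THRESHOLD = 3      # a small, explicit budget of high findings

def evaluate_security_gate(findings):
    """Return (passed, reasons) for a list of scanner findings.

    Each finding is a dict like {"id": "CVE-...", "severity": "critical"}.
    """
    counts = {}
    for f in findings:
        sev = f["severity"].lower()
        counts[sev] = counts.get(sev, 0) + 1

    reasons = []
    if counts.get("critical", 0) > CRITICAL_THRESHOLD:
        reasons.append(
            f"{counts['critical']} critical findings (max {CRITICAL_THRESHOLD})")
    if counts.get("high", 0) > HIGH_THRESHOLD:
        reasons.append(
            f"{counts['high']} high findings (max {HIGH_THRESHOLD})")
    return (not reasons, reasons)

# A build with a single critical finding fails the gate.
passed, reasons = evaluate_security_gate(
    [{"id": "CVE-2019-0001", "severity": "critical"}]
)
```

In a real pipeline the gate would consume the report of whatever scanner is in use and fail the job when `passed` is false; the thresholds become the measurable contract between the teams.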

The classic approach for managing information flows in and out of an organisation is to construct a dedicated DMZ through which all traffic from the organisation is consolidated and then flows via common infrastructure.  This is often referred to as a choke point architecture.  The advantage of this style of solution is that all traffic can be controlled from a central point and various capabilities implemented (e.g. deep packet inspection, data leakage prevention, DDoS protection).  These capabilities are typically evaluated and implemented via large projects that allow for enterprise-wide testing to validate that the solution works across downstream systems, external business partners and retail consumers.

This solution comes from an era where infrastructure changes were largely manual.  When done well, the positive characteristics of this approach are efficiency, discipline, consistency and visibility.  Here’s a stylised view:

[Figure: Choke point deployment architecture]

The negative aspects of this approach include:


1.     It creates single points of operational failure that require complex dynamic routing solutions to handle failure scenarios;

2.     The security model is brittle, such that a compromise of the perimeter can result in wholesale compromise of the enterprise;

3.     Basic functions such as the application of patches, let alone version upgrades or alternative solution implementations, typically require a unified change window across the entire enterprise;

4.    The cost model is either a “first one on the bus pays for the bus” approach or a centralised operational cost that is difficult to tie back to specific business lines to demonstrate value.

The sorts of solutions that a DevOps-style implementation comes up with are able to preserve the positive characteristics, but also address some key negative aspects of choke point topologies.  The key characteristic of a typical solution is an Infrastructure as Code (IaC) style implementation that creates a series of standardised but highly parallelised points at which traffic can flow in and out of an organisation.  This covers networking, network access control, proxy / inspection infrastructure, integration into SIEM and other logging solutions, and any storage required.  These paths are coupled with the various workloads and external consumers and form a set of “pre-fabricated” operational units.

[Figure: Parallel network architecture]

The benefits of this approach include:


1.    This allows for complete operational independence of the units;

2.     The security of these units and of any external systems is able to stand and fall independently;

3.     There is an option to selectively place differing implementations of a capability (DLP, DDoS etc.), or at least roll them out progressively, to cater for issues of backward compatibility, differing testing cycles and prescriptive compliance obligations;

4.    And there is the option for cost show-back or charge-back where consumption-based infrastructure is in play.
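The “design once, reuse across all paths” idea can be sketched in a few lines. This is an illustrative model only – the resource types, version strings and unit names are hypothetical stand-ins for whatever a real IaC template (e.g. AWS CloudFormation) would declare:

```python
# Illustrative sketch: define one ingress/egress "path" template and stamp
# it out per business unit, so every parallel path is built from the same
# declarative definition. All names and versions here are hypothetical.

def make_path(unit_name, cidr):
    """Return a declarative description of one standardised network path."""
    return {
        "name": f"{unit_name}-ingress-egress",
        "vpc_cidr": cidr,
        "resources": [
            {"type": "network_acl", "rules": "standard-ruleset-v3"},
            {"type": "inspection_proxy", "version": "2.1"},
            {"type": "log_forwarder", "target": "central-siem"},
        ],
    }

# Each business unit gets an identical, independently deployable stack;
# only the parameters (name, address range) vary.
units = [("retail", "10.1.0.0/16"), ("payments", "10.2.0.0/16")]
paths = [make_path(name, cidr) for name, cidr in units]
```

Because every path comes from the same template, consistency is a property of the code rather than of operational discipline, and any one unit can be rebuilt or upgraded without touching the others.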

There are some tips and traps with parallelised IaC to make sure that the discipline, coverage and cost-efficiency benefits of a centralised solution are retained.  These include:

1.     Make sure to take a declarative approach to building infrastructure (e.g. AWS CloudFormation) to design once and reuse across all paths;

2.     Use automation to enforce consistency in deployment;

3.     Force regular rebuilds to minimise configuration drift;

4.     Take advantage of commoditised infrastructure to allow for self-healing and auto-scaling;

5.     Pick your security vendors carefully (some don’t easily support a consumption-based usage model and still run with a pay-per-deployment approach);

6.     Pre-bake infrastructure and present it as an artefact to allow for prompt application of fixes / vendor patches.
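Point 3 above – forcing regular rebuilds to minimise configuration drift – implies being able to tell declared and deployed state apart. Here is a minimal, hypothetical sketch of that comparison; real tooling (e.g. CloudFormation drift detection) does the equivalent against live resources:

```python
# Hypothetical drift check: compare the declared template for a path
# against what is actually deployed, as a trigger for a rebuild.
# Keys and values are illustrative.

def detect_drift(declared, deployed):
    """Return the keys whose deployed values differ from the template."""
    drifted = {}
    for key, expected in declared.items():
        actual = deployed.get(key)
        if actual != expected:
            drifted[key] = {"expected": expected, "actual": actual}
    return drifted

declared = {"proxy_version": "2.1", "acl_ruleset": "standard-ruleset-v3"}
deployed = {"proxy_version": "2.0", "acl_ruleset": "standard-ruleset-v3"}

# A non-empty result means the path has drifted and should be rebuilt
# from the template rather than patched by hand.
drift = detect_drift(declared, deployed)
```

Scheduling this check (or simply rebuilding on a fixed cadence regardless) keeps every parallel path honest to the single declared definition.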


The keenly observant among you may have noticed that the first sense of DevSecOps (injecting security into DevOps) refers primarily to Continuous Integration techniques, and the other to Continuous Delivery.  Put another way, in the first sense of DevSecOps we are attempting to put constraints on and / or increase discipline in the development process; in the second, we are attempting to improve the consistency, flexibility and visibility of the runtime solution.  At Versent we use “Guardrails” to describe the first and “Trust and Verify” for the second.  Here’s a comparison of the approaches:

These are simply ways to deal with aspects of the technology space where we have high levels of control, and others where we don’t.  A guardrails approach is great for defining up-front security, but it can never design and build in countermeasures to anticipate every possibility.  A good Security Operations team will have its finger on the pulse and so can constructively feed information back to developers as threats emerge.

Hopefully this provides a feel for how a fully two-way relationship can be established to move organisations towards a more integrated approach and deliver increased levels of security.  At Versent, my team and I spend our days helping customers reconnect Security with the rest of the IT organisation, usually through practical approaches just like this.  These range from solutions in areas of traditional security responsibility, such as role-based permissions management and audit log management, to embedding security more deeply in areas that don’t get much security visibility, such as microservices-based applications and cloud-based deployments.

In my day job, I work for Versent as our Practice Director for Security and Identity in Sydney.  We do a bunch of stuff related to Public Cloud migration, Identity and Access Management solutioning, and transformation to API based Serverless and Containerised solutions.  It’s my job to help our teams adopt the approaches described here and help our customers move and adapt to take best advantage of them. Thanks for reading, Simon Morse.

