Evolution of “trust” in the world of cloud

Ashish Rajan

Senior Security Consultant, Versent

November 21, 2017

Long before cloud and data centres, everyone ran their own servers to host their websites, applications and so on. These servers lived in a “server room” on a particular floor or section of the building, where only authorised employees (sysadmins, aka the chosen ones) were allowed access.

When companies grew in size and started expanding to new geographical locations, these server rooms had to be moved into separate physical facilities, provided by what we now call “data-centres”. This was the beginning of “trust” in a service provider context for everyone in IT. Questions like “Can we trust someone with our servers? What about the sensitive data we process on these servers? What if they decide to access these sensitive servers?” were regular conversations for a lot of sysadmins and security folks in such growing organisations back in those times.

They were being asked to trust a service provider with their crown jewels, their servers, in return for only pre-scheduled physical access to the facility or remote connections to the servers.

To build a higher level of trust in “data-centres” among security-conscious organisations, reputable bodies established global and national standards for data-centres to abide by and operate under. A data-centre certified against these standards meant that an organisation which trusted the standard could trust the facility, and could be more confident in using its services.

This was a good option for organisations from both a monetary and a logistics point of view.

They could lower their server capacity and location costs while still being able to access the servers remotely over a secure connection within minutes.

Once big organisations started trusting these service providers, SMBs followed straight after. From this point on, everyone started loving “data-centres”, and a trust model was formed.

This laid the foundation for the first version of the “Shared Responsibility Model”, which stipulates who is responsible for what in this trust model.

Second evolution of trust / Current trust model

As organisations grew larger and started opening offices in different countries, it became harder to manage these global “data-centres” at a reasonable cost.

Then came the “outsourcers”, the service providers who offered to take care of your data centre servers on behalf of your organisation at a minimal cost, while maintaining 24/7/365 service, with an actual person at the end of the support number.

There was a shift of trust at this point, where again we were asking ourselves the same questions as we did with “data-centres”. This time, though, we would not be given any access to the servers ourselves. We would have to raise a support ticket and pay each time we wanted to change anything on a server.

This time, we decided trust could be achieved by onboarding these external service providers as employees of the organisation. This gave us the confidence that, in case of an incident, the organisation could disable all of these external users who had access to its servers. It also reduced the internal sysadmin role to mostly managing and following up server requests, limited access to the servers themselves, and actioning any urgent request that required someone to be physically present. (What if such an incident really happened? Could an organisation really recover from that point? Some food for thought.)

Future / Next trust model

Over time, as outsourcing companies, “data-centres” and the organisations themselves all kept growing, the side effects of such a trust model started appearing. Outsourcing companies took on more clients as a way of growing, and cost optimisation meant they kept only the minimum required staff in the workforce. As a result, every request to these “outsourced”, aka “vendor managed”, support/service desks took a lot longer to fulfil.

Sometimes a request like “Urgent: create a new server for Application X” would take 5–6 weeks.

When the limited number of “employees” at the other end of the phone or service desk portal have multiple clients raising similar requests, I can understand the delay. Technical folks reading this know that, depending on who you talk to, this is at most a one-day job if the hardware is already in place.

Enter Cloud, with the promise of low cost and ease of access to resources in minutes (haven’t we heard that before!).

What cloud providers offer is an extension of “data-centre” services. Like data-centres, cloud providers have to get their physical premises certified against similar, and sometimes even more stringent, global standards to house servers for both public and private sector industries.

They also gave their customers a Shared Responsibility Model (e.g. AWS, Azure).

I personally do not see a huge difference between what was asked of thought leaders under the first trust model and what is asked under the new trust model, which is marketed and championed by a lot of engineers, managers, startups and tech companies.

I believe that, as security and technology champions in any organisation, especially one going through this change of trust, we can establish a good trust model with the right security controls.
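
To make “the right security controls” a little more concrete, here is a minimal sketch of the kind of automated check I have in mind. It assumes an AWS environment with the boto3 SDK and read-only credentials; the two checks (MFA on every IAM user, CloudTrail logging enabled) are illustrative examples of the customer’s side of the Shared Responsibility Model, not a definitive list.

```python
# Minimal sketch, assuming an AWS account and the boto3 SDK with
# read-only IAM and CloudTrail permissions. Illustrative only.
import boto3

iam = boto3.client("iam")
cloudtrail = boto3.client("cloudtrail")

# Control 1: every IAM user should have at least one MFA device attached.
for user in iam.list_users()["Users"]:
    name = user["UserName"]
    if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
        print(f"Trust gap: IAM user '{name}' has no MFA device")

# Control 2: at least one CloudTrail trail should be actively logging
# API activity, so there is an audit record to fall back on.
trails = cloudtrail.describe_trails()["trailList"]
logging_trails = [
    t for t in trails
    if cloudtrail.get_trail_status(Name=t["TrailARN"])["IsLogging"]
]
if not logging_trails:
    print("Trust gap: no CloudTrail trail is currently logging")
```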

I currently help people develop this trust model, and there are many others who do the same. It’s only a matter of time before we all love the “cloud” and “trust” it.
