The Bleeding Edge: Leveraging Application Service Architectures for Trust

Author: Ed Moyle, CISSP
Date Published: 1 May 2024
Read Time: 8 minutes

It is a truism that complexity in technology is steadily increasing and has been for quite some time. Consider, for example, just the business applications that we have in our enterprises. For most of us, these consist of a hodge-podge collection of SaaS, on-premises applications, desktop apps, mobile apps, and maybe even specialized applications like chatbots.

It should come as no surprise then that the complexity of how we develop, test, release, and maintain applications has likewise increased. When you consider the technologies that are driving modern application development, like Infrastructure as Code (IaC) (e.g., Terraform), application containerization (e.g., Podman, Docker), and orchestration (e.g., Kubernetes), things can get quite overwhelming unless you are steeped in that world.

In fact, oftentimes trust professionals are behind the curve on the application side of things. This is not only due to the increasing complexity we just alluded to, but also because it involves skills that, at least historically, we as a discipline have not invested heavily in, such as application and software security.

What this means in practice is that there is often a disconnect between those of us chartered with fostering digital trust and optimizing risk and those tasked with building out the applications that service internal users, customers, partners, and other stakeholders. This disconnect is unfortunate because it limits the ability of practitioners to participate in and leverage many of the technologies that underpin modern application development. This is a lost opportunity in my opinion. Why? Because each of these technologies can be tapped for the betterment of the trust/risk/security posture in the organization.

As an example of what I mean, consider IaC technologies like Terraform, Ansible, and CloudFormation. We know (in fact, we have examined it in this very column) that these technologies can, under the right circumstances and with planning, bolster our trust and risk posture. For example, they can allow security and audit teams to get an authoritative view of what has been fielded (as IaC artifacts are declarative artifacts of the end-state environment). This means we can look at a Terraform artifact and understand exactly what will be fielded, in what configuration state, and with what parameters. Likewise, depending on the technology in use, we might have the ability to directly query the running state to answer questions about it. So rather than having to rely on vulnerability scanners or other “secondhand” data about the configuration of a node, we can get answers directly and authoritatively.
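Because IaC artifacts are declarative, this kind of review can even be automated. As a minimal sketch (the resource addresses and attributes below are invented for illustration, and the JSON shape loosely mirrors what `terraform show -json` emits for a plan), an audit script might flag trust-relevant settings before anything is fielded:

```python
import json

# Hypothetical excerpt of a Terraform plan rendered as JSON; the bucket
# names and ACL values here are illustrative assumptions, not a real plan.
plan_json = """
{
  "resource_changes": [
    {"address": "aws_s3_bucket.logs",
     "change": {"after": {"bucket": "audit-logs", "acl": "private"}}},
    {"address": "aws_s3_bucket.assets",
     "change": {"after": {"bucket": "public-assets", "acl": "public-read"}}}
  ]
}
"""

def find_public_buckets(plan: dict) -> list:
    """Return addresses of S3 buckets whose planned ACL is not private."""
    findings = []
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        if after.get("acl") not in (None, "private"):
            findings.append(rc["address"])
    return findings

plan = json.loads(plan_json)
print(find_public_buckets(plan))  # → ['aws_s3_bucket.assets']
```

The point is not this particular check; it is that the declarative artifact itself becomes the authoritative object of review.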

While many of the technologies and strategies used in modern application development can have value to the trust practitioner, one area that is particularly useful to discuss is microservice architectures, particularly service mesh. These architectural strategies represent a veritable “tectonic shift” in the way that we (from a digital trust point of view) can potentially interact with application components. They can open up visibility to the underlying application, work synergistically with our controls to create opportunities for visibility and/or protective strategies, and help us in a myriad of other ways as well.

Trust Properties of Microservices and Service Mesh

Let’s begin by looking first at microservices. For those who are unfamiliar with the term, a microservice architecture essentially refers to a strategy whereby an application is “modularized” into small, atomic elements of business logic that are accessible to each other via simple, ubiquitous protocols.

By doing so, components implementing business logic are made available either singularly or in combination for use within the broader application context. While many potential approaches and protocols for data exchange are viable under this model [e.g., Remote Procedure Call (RPC), Representational State Transfer (REST), GraphQL, Simple Object Access Protocol (SOAP), etc.], in practice, most modern applications heavily leverage REST. Specifically, they use HTTP over Transport Layer Security (TLS) as a transport mechanism, coupled with JavaScript Object Notation (JSON) for data serialization (and sometimes parameter “marshaling” where a URL query string would be clunky).
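To make the pattern concrete, here is a minimal sketch of the REST-over-HTTPS exchange described above, using only the Python standard library. The endpoint URL and payload fields are hypothetical:

```python
import json
from urllib.request import Request

# A hypothetical inter-service payload: JSON handles serialization,
# HTTP(S) carries the bytes between services.
order = {"order_id": 1138, "sku": "WIDGET-9", "quantity": 3}

body = json.dumps(order).encode("utf-8")
req = Request(
    "https://orders.internal.example/api/v1/orders",  # hypothetical endpoint
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# The receiving microservice deserializes the same bytes back into
# structured data -- no bespoke wire format to learn or secure.
received = json.loads(req.data.decode("utf-8"))
print(received["sku"])  # → WIDGET-9
```

Note that everything in the exchange (the verb, the headers, the TLS transport) is standard HTTP machinery, which is exactly what makes it approachable for trust teams.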

From a trust point of view, this approach on its own has some advantages over monolithic application design. First, the data exchange protocol (HTTPS) is one that all of us are going to be very familiar with from the outset. This allows us to use familiar technologies [e.g., reverse and forward HTTP proxies, TLS mutual authentication, website scanning tools, web application firewalls (WAF), HTTP headers, etc.] as potential strategies to enforce trust goals within the application and as potential security controls governing interactions between application components.

Beyond this, the fact that the services in this model are using a known and very well-studied protocol is beneficial to us. To see why, compare securing a REST service with a less common data exchange methodology, such as Java RMI. While you can absolutely build the same trust strategies into both, actual enforcement can be more challenging in the RMI case because most of us are less familiar with the underlying protocol and therefore less certain of how to secure it.

The point is, assuming you have the right skills available to you, you can turn this architecture to your advantage. There are several ways you might do this; a few examples:

  • Leverage the fact that it is HTTP to funnel traffic through a central point (e.g., using an API gateway like KrakenD).
  • Leverage OpenAPI (Swagger) or similar documentation to assist with application threat modeling and/or other trust/risk assessment and review tasks.
  • Decouple authentication from the application logic by using bearer tokens such as JSON Web Tokens (JWT), TLS mutual authentication (i.e., client certificates), or other measures to validate the identity of the caller.
  • Decouple encryption of data in transit from the underlying application (since the TLS implementation is handled by the underlying web server stack).
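The token-based authentication bullet is worth making concrete. The sketch below verifies an HS256-signed JWT before a request ever reaches business logic, as a gateway or proxy might; the secret and claims are illustrative, and in production you would typically reach for a vetted library (e.g., PyJWT) rather than hand-rolling this:

```python
import base64
import hashlib
import hmac
import json

def b64url_encode(raw: bytes) -> str:
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def make_jwt(claims: dict, secret: bytes) -> str:
    """Build an HS256 JWT: base64url(header).base64url(payload).base64url(sig)."""
    header = b64url_encode(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url_encode(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url_encode(sig)}"

def verify_jwt(token: str, secret: bytes):
    """Return the claims if the signature checks out, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    signing_input = f"{header}.{payload}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        return None
    return json.loads(b64url_decode(payload))

secret = b"demo-shared-secret"  # illustrative only; never hard-code real keys
token = make_jwt({"sub": "svc-billing", "scope": "orders:read"}, secret)
print(verify_jwt(token, secret))        # valid: claims dict comes back
print(verify_jwt(token + "x", secret))  # tampered: None
```

Because the check happens at the HTTP layer, the application code behind it never needs to know how callers were authenticated, which is precisely the decoupling the bullet describes.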

These examples serve to illustrate that there is nothing to fear for the trust practitioner in the case of microservices. In fact, embracing microservices helps us drive toward a better trust posture overall.

Building on this concept, other technologies like service mesh architecture allow us to take this philosophy even further. For those who are not familiar with the concept, a service mesh is a strategy built on microservices that allows you to decouple inter-component communication even further from the business logic. Specifically, the mesh consists of services built so that how they interact with each other is both variable and software-defined.

In a typical implementation, this is accomplished via a “sidecar” proxy: a small HTTP proxy that sits close to an application node implementing a given service (i.e., a given API endpoint). From an application development point of view, this architecture allows you to “rewire” services at will: you can change where they are hosted, scale them up, add redundancy, scale them down, change how they interact, etc.—all without having to make application-layer changes or write application code. Consider changing the URL of a given endpoint: just by amending the proxies used by the calling and called services, you can dynamically change where a given service “lives” (i.e., the endpoint URL) without having to modify the other services that call it.
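A toy illustration of that rewiring idea: callers address a logical service name, and a proxy-like routing layer resolves it to a concrete endpoint. The service names and URLs below are hypothetical; a real mesh does this in the sidecar, not in application code:

```python
# Logical-name -> concrete-endpoint routing table, standing in for what a
# sidecar proxy maintains. Hostnames here are invented for illustration.
routes = {"inventory": "https://inventory-v1.internal.example:8443"}

def resolve(service: str, path: str) -> str:
    """What the sidecar conceptually does: map a logical name to a URL."""
    return f"{routes[service]}{path}"

# Caller code addresses the logical name only:
url_before = resolve("inventory", "/api/v1/stock/42")

# "Rewire" the mesh: migrate the service to a new host. No caller changes.
routes["inventory"] = "https://inventory-v2.internal.example:8443"
url_after = resolve("inventory", "/api/v1/stock/42")

print(url_before)
print(url_after)
```

The caller’s code is identical before and after the migration; only the routing layer changed, which is the whole point.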

From a security and trust point of view, this can provide significant value as well. Because you can dynamically adjust how application nodes communicate, you can add controls, add monitoring, or take any number of other trust-impacting actions without modifying the application code. As an example, you might have an application that, today, does not require TLS between components. But what happens if you change your mind tomorrow and you decide to make it a requirement? Because the mesh is configurable at runtime, you can make a change like that fairly quickly.
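As a concrete illustration of that TLS example: in Istio, one widely used service mesh, requiring mutual TLS between workloads is a runtime policy change rather than an application change. The namespace name below is hypothetical, but the resource itself is standard Istio configuration:

```
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-app        # hypothetical namespace
spec:
  mtls:
    mode: STRICT           # require mutual TLS for all workload traffic
```

Applying a policy like this flips the namespace from optional to mandatory mTLS without touching a line of application code.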

The point of all this is not necessarily that we should all implement microservices and service mesh. I am instead suggesting that we “lean in” to technologies like these and use them to our advantage to solve trust-related and risk-optimization goals. Each specific technology of course accomplishes this in a different way. The point is that while it can sometimes seem like new application development and implementation strategies do nothing but add to complexity (and hence sometimes lead us to be resistant to them), they are often a blessing in disguise.

In the case of microservices and service mesh, for example, they can contribute directly to an increase in our ability to achieve digital trust goals. Of course, it does require that we understand how and why application developers are employing the strategies that they are. It also requires that we have a two-way trusted and trustworthy relationship with the development community to know where and how they use them. And perhaps most importantly, it requires that we have more than a passing familiarity with them so that we can be educated enough to know how to turn them to our advantage. With this in mind, looking to build awareness of and skill in the approaches used by your development teams is time well spent—as is investing in building relationships with the development teams themselves.

ED MOYLE | CISSP

Is currently chief information security officer for Drake Software. In his years in information security, Moyle has held numerous positions including director of thought leadership and research for ISACA®, application security principal for Adaptive Biotechnologies, senior security strategist with Savvis, senior manager with CTG, and vice president and information security officer for Merrill Lynch Investment Managers. Moyle is co-author of Cryptographic Libraries for Developers and Practical Cybersecurity Architecture and a frequent contributor to the information security industry as an author, public speaker, and analyst.