In this article, we decouple applications from an OAuth authorization server.
With the evolution of 5G networks and the expansion of the “always on” world in which we live, online service providers are experiencing a demand explosion while their customers still expect lightning-fast end-user experiences. To address this, organizations are scaling their application servers and containers to meet the load. However, there comes a point where this tactic yields diminishing returns.
Each application/microservice access involves validating the OAuth access token, which means that for every request the microservice must invoke the OAuth authorization server (OAuth-AS) for token validation. This tightly couples protected applications to the OAuth-AS and can introduce additional issues, such as:
- Latency: Because microservices and the OAuth-AS are usually deployed in different infrastructure networks, there is inherent latency between these components. Invoking the OAuth-AS for every resource access can create bottlenecks on the OAuth-AS (AM) servers.
- Local caching: Some deployments use local caching in edge components, such as a cache inside the Token Validation Microservice. To scale horizontally, these microservice containers are added and removed dynamically, so application requests can be routed to any container behind the load balancer, which makes a per-container local cache inefficient. Note that the latest versions of IG provide WebSocket notifications for token revocations, but because these notifications are sent to all IG instances in the cluster, scaling these microservices leads to inefficient broadcasts to every instance. This mechanism also requires that AM be reachable from each microservice container, which may not be feasible in all cloud deployments.
- Security: Some deployments completely decouple the OAuth resource server (OAuth-RS) from the OAuth-AS by performing local, stateless JWT token validation. Although local validation can include token signature checks, expiry checks, and so on, it cannot determine whether an access token has been revoked. For security reasons this option is not recommended, as users and applications may end up using revoked tokens.
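To make the security limitation concrete, here is a minimal sketch of purely local, stateless JWT validation. It checks only the signature and expiry, so a revoked but unexpired token still passes; the `mint` helper is hypothetical and only stands in for tokens that AM would issue (HS256 is assumed here for brevity):

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(seg: str) -> bytes:
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def mint(payload: dict, secret: bytes) -> str:
    """Demo-only token builder; real tokens come from the OAuth-AS."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{b64url(sig)}"

def validate_locally(jwt_token: str, secret: bytes) -> bool:
    """Stateless checks only: signature and expiry.
    Cannot detect revocation -- a revoked, unexpired token still passes."""
    try:
        header_b64, payload_b64, sig_b64 = jwt_token.split(".")
        expected = hmac.new(
            secret, f"{header_b64}.{payload_b64}".encode(), hashlib.sha256
        ).digest()
        if not hmac.compare_digest(expected, b64url_decode(sig_b64)):
            return False
        payload = json.loads(b64url_decode(payload_b64))
        return payload.get("exp", 0) > time.time()
    except (ValueError, KeyError):
        return False
```

Nothing in this function can know that the OAuth-AS revoked the token a second ago, which is exactly the gap the denylist cache below closes.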
To resolve these issues, a shared denylist cache can be leveraged to securely and efficiently decouple microservices from the OAuth authorization server.
This approach places a shared denylist cache, such as Redis, between the edge components (microservices) and the OAuth-AS. The cache is updated whenever a token is revoked and is checked by the Token Validation Microservice on every request.
Leveraging this Shared Denylist Cache provides these benefits:
- Reduced latency, as the cache sits close to the edge components.
- Scaling flexibility, as the denylist cache is external and shared, so requests can be routed to any microservice container without cache inconsistency.
- Secure token validation, as revoked tokens are added to the denylist cache. Expired tokens are automatically purged from the cache.
Refer to the sequence diagram below for the complete flow:
- Refer to this README for configuring various ForgeRock components.
- Redis needs to be installed so that it can be leveraged as the shared cache. The cache must be reachable from both AM and the Token Validation Microservice.
- IG uses a custom filter to update the cache with revoked tokens.
- The Token Validation Microservice uses a custom access token resolver to check whether a token has been revoked.
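The decision the custom access token resolver makes can be sketched as follows. This is a hypothetical illustration, not the actual resolver API: `is_denylisted` stands in for the Redis `EXISTS` lookup, and the token id claim (`jti`) is assumed to be the denylist key:

```python
import time

def resolve_token(claims: dict, is_denylisted) -> dict:
    """Hypothetical resolver logic: stateless expiry check first,
    then a shared-denylist lookup keyed on the token id ("jti").
    `is_denylisted` stands in for a Redis EXISTS call."""
    if claims.get("exp", 0) <= time.time():
        return {"active": False, "reason": "expired"}
    if is_denylisted(claims.get("jti", "")):
        return {"active": False, "reason": "revoked"}
    return {"active": True, **claims}
```

Checking expiry before the cache lookup avoids a network round trip for tokens that local checks can already reject.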
Testing use cases
Refer to the Postman collection for various REST APIs used for testing the use cases that follow.
- Acquire an access token from IG (Acting as AM Reverse Proxy).
- Use this access token to access microservice-A via IG (Acting as OAuth-RS). Access should be granted and the corresponding response returned.
- (Optional) Introspect this token via Token Validation Microservice.
- Revoke access/revoke token via IG (Acting as AM Reverse Proxy).
- Use this access token to access microservice-A via IG (Acting as OAuth-RS). This returns a 401 Unauthorized error response.
- (Optional) Introspect this token via Token Validation Microservice. This returns an “active”: false response.
Response time is one of the crucial user experience requirements for many web applications. According to various surveys, users typically abandon a website if the response time exceeds a few seconds.
Leveraging a shared denylist cache eliminates the per-request dependency on AM servers, thereby removing the latency between microservices and AM. This allows microservices to scale securely and efficiently, independently of the Access Management infrastructure, improving user response times.