Recently, I started working with a new platform called Apigee Edge. Edge is an API gateway: it mediates API calls between client applications and backend systems. By doing this, Edge can:
- Offer a single point of access to an organization's APIs;
- Enforce consistent security across different APIs;
- Implement monetization, throttling and quota management policies;
- Monitor and analyze API traffic;
- Translate between different protocols and message formats;
- Route dynamically between different endpoints;
- Enrich messages with extra data.
Apigee was bought by Google in 2016, and Edge is now the official API management platform for Google Cloud Platform (GCP). Edge can run either on a private cloud or a public one: GCP, of course, but also AWS and others.
So, how does it work?
To understand how Edge works, you first need to get familiar with a few key concepts:
- A proxy endpoint (PE) is the endpoint that you’ll expose to your client apps. It’s what your clients will see and what they will “understand” as your API. A proxy endpoint is the entry point for messages in Apigee.
- A target endpoint (TE) represents an endpoint that your API connects to: a backend system. Target endpoints are the exit points for messages.
- A policy is a unit of logic that implements some action within Edge, for example, enforcing OAuth authentication, transforming a message from XML to JSON, updating a variable, etc. In other words, a policy is a mediator that processes your message.
- The request/response flow (or simply “flow”) represents the path a message goes through within Edge. Every message is received by a proxy endpoint and may pass through a number of policies before being forwarded to the target endpoint. Any response from the TE will also be processed before being sent back to the client app – those phases are logically known as the request and response phases.
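To make these concepts concrete, here is a minimal sketch of how a proxy endpoint and a target endpoint are defined in Edge. The base path, policy name and backend URL are illustrative, not taken from the scenario in this article:

```xml
<!-- apiproxy/proxies/default.xml: the entry point that clients call -->
<ProxyEndpoint name="default">
  <HTTPProxyConnection>
    <BasePath>/v1/orders</BasePath>
    <VirtualHost>secure</VirtualHost>
  </HTTPProxyConnection>
  <PreFlow name="PreFlow">
    <Request>
      <!-- Policies attached here process every incoming request -->
      <Step><Name>Verify-OAuth-Token</Name></Step>
    </Request>
  </PreFlow>
  <!-- Hand the message over to a target endpoint -->
  <RouteRule name="default">
    <TargetEndpoint>default</TargetEndpoint>
  </RouteRule>
</ProxyEndpoint>

<!-- apiproxy/targets/default.xml: the exit point toward the backend -->
<TargetEndpoint name="default">
  <HTTPTargetConnection>
    <URL>https://backend.example.com/orders</URL>
  </HTTPTargetConnection>
</TargetEndpoint>
```

The proxy endpoint receives the message, runs any attached policies, and the route rule then forwards it to the named target endpoint.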
Figure 1: request/response flow on Edge
The policies that Edge offers can be grouped into four categories:
- Traffic management: control quotas, concurrency, as well as caching.
- Mediation: message parsing, validation and transformation.
- Security: access control with OAuth, API keys, etc.
- Extension: define custom policies, such as service callouts or scripts.
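As an example from the traffic-management category, a Quota policy that limits each client app to a fixed number of calls might look like the sketch below (the policy name, limit and identifier variable are illustrative):

```xml
<!-- Allow each client app, identified by the client_id variable, 100 calls per minute -->
<Quota name="Quota-Per-App">
  <Allow count="100"/>
  <Interval>1</Interval>
  <TimeUnit>minute</TimeUnit>
  <Identifier ref="client_id"/>
</Quota>
```

Attached to the request flow of a proxy, this policy rejects calls that exceed the configured rate before they ever reach the backend.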
Publishing and monitoring your APIs.
Edge also has a powerful monitoring module that lets you check the operation of your APIs, including pre-configured dashboards for the most common metrics, such as traffic, errors and geomapping. You can also define your own KPIs, create reports based on them, and set up alerts on those metrics.
Figure 2: proxy performance dashboard
Edge allows you to easily create a fully customizable Developer Portal, where developers can explore and register to use your APIs. It offers a complete solution for managing access control to your APIs, including OAuth2, SAML and other authentication mechanisms. Finally, Edge has a built-in two-level caching mechanism that operates transparently for the developer.
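For instance, Edge's ResponseCache policy lets you cache backend responses with very little configuration. A sketch, with an illustrative cache key and timeout:

```xml
<!-- Cache responses for 5 minutes, keyed by the request URI.
     The same policy is attached to both the request flow (cache lookup)
     and the response flow (cache population). -->
<ResponseCache name="Cache-Backend-Responses">
  <CacheKey>
    <KeyFragment ref="request.uri"/>
  </CacheKey>
  <ExpirySettings>
    <TimeoutInSec>300</TimeoutInSec>
  </ExpirySettings>
</ResponseCache>
```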
In our experience so far, Edge has proven to be a complex yet easy-to-use platform, making it a powerful tool to build integration scenarios upon.
Getting our hands dirty.
Let’s look at a practical example of Edge’s capabilities. We had a client use case where, for each incoming request, we first had to check whether a BPEL process instance was already running in the backend for that particular transaction. If so, we could reuse it; otherwise, we needed to start a new process.
In practical terms, this meant we first needed to call a specific endpoint to inquire about the running BPEL instances. Depending on the response, we would either direct the request to the appropriate instance or call the “create BPEL process” endpoint. This is an unusual scenario for an API management platform, something that would usually fall within the scope of an Enterprise Service Bus (ESB). However, it was quite straightforward to handle on Edge.
Figure 3: chain of policies (mediators) that process the request message
The above figure shows the chain of policies (mediators) used to process the incoming request of a proxy service. Notice the policies marked with arrows:
(1) a standard policy that enforces OAuth authentication for this proxy;
(2) a Call mediator that makes the request to get the running processes;
(3) a policy that processes the response and, if no process is already running for that particular transaction, triggers…
(4) … another Call to initialize the process.
From there, processing continues normally, performing a few more verifications before forwarding the request to the target endpoint. When the response is received, any policies in the response lane are executed (not shown here).
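As a rough sketch of how steps (2)–(4) can be wired together in Edge, a ServiceCallout policy performs the lookup, and a conditional step starts a new process only when none was found. The URLs, policy names and the `bpel.processId` flow variable are hypothetical:

```xml
<!-- (2) Ask the backend which BPEL instance, if any, handles this transaction -->
<ServiceCallout name="SC-Get-Running-Processes">
  <Request variable="lookupRequest"/>
  <Response>lookupResponse</Response>
  <HTTPTargetConnection>
    <URL>https://bpel.example.com/processes/lookup</URL>
  </HTTPTargetConnection>
</ServiceCallout>

<!-- Request flow: (3) a policy (Extract-Process-Id) parses lookupResponse into
     the bpel.processId variable; (4) SC-Create-Process runs only if it is empty -->
<PreFlow name="PreFlow">
  <Request>
    <Step><Name>Verify-OAuth-Token</Name></Step>
    <Step><Name>SC-Get-Running-Processes</Name></Step>
    <Step><Name>Extract-Process-Id</Name></Step>
    <Step>
      <Name>SC-Create-Process</Name>
      <Condition>bpel.processId = null</Condition>
    </Step>
  </Request>
</PreFlow>
```

The `Condition` element is what makes this kind of conditional orchestration straightforward: a step simply does not execute unless its condition evaluates to true.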
Contact Polarising to know more about our solutions at email@example.com.
System Integrations Specialist