
API Monitoring News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API monitoring conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is monitoring API operations.

A Health Check Response Format for HTTP APIs

My friend Irakli Nadareishvili has published a new health check response format for HTTP APIs that I wanted to make sure was documented as part of my research. The way I do this is to write a blog post, forever sealing this work in time, and adding it to the public record that is my API Evangelist brain. Since I use my blog as a reference when writing white papers, guides, blueprints, policies, and other aspects of my work, I need as many references as I can get to usable standards like this.

I am going to just share the introduction from Irakli’s draft, as it says it all:

The vast majority of modern APIs driving data to web and mobile applications use HTTP [RFC7230] as a transport protocol. The health and uptime of these APIs determine availability of the applications themselves. In distributed systems built with a number of APIs, understanding the health status of the APIs and making corresponding decisions, for failover or circuit-breaking, are essential for providing highly available solutions.

There exists a wide variety of operational software that relies on the ability to read health check response of APIs. There is currently no standard for the health check output response, however, so most applications either rely on the basic level of information included in HTTP status codes [RFC7231] or use task-specific formats.

Usage of task-specific or application-specific formats creates significant challenges, disallowing any meaningful interoperability across different implementations and between different tooling.

Standardizing a format for health checks can provide any of a number of benefits, including:

  • Flexible deployment - since operational tooling and API clients can rely on rich, uniform format, they can be safely combined and substituted as needed.
  • Evolvability - new APIs, conforming to the standard, can safely be introduced in any environment and ecosystem that also conforms to the same standard, without costly coordination and testing requirements.

This document defines a “health check” format using the JSON format [RFC7159] for APIs to use as a standard point for the health information they offer. Having a well-defined format for this purpose promotes good practice and tooling.

Here is an example JSON response, showing the standard in action:
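
A minimal response along these lines, reconstructed from my reading of the draft, looks like the following; treat the field names (status, version, releaseId, details, links) as illustrative and check the latest draft for the authoritative example, served with the application/health+json media type:

```json
{
  "status": "pass",
  "version": "1",
  "releaseId": "1.2.2",
  "notes": [""],
  "output": "",
  "details": {
    "datastore:responseTime": [
      {
        "componentType": "datastore",
        "observedValue": 250,
        "observedUnit": "ms",
        "status": "pass",
        "time": "2018-01-17T03:36:48Z"
      }
    ]
  },
  "links": {
    "about": "http://api.example.com/about/api"
  }
}
```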

I have seen a number of different approaches to providing health checks in APIs, from a single ping path, to proxying the Docker Engine API for a microservice's Docker container. It makes sense to have a standard for this, and I'll reference Irakli's important work from here on out as I'm advising on projects, or implementing my own.


Orchestrating API Integration, Consumption, and Collaboration with the Postman API

You hear me say it all the time: if you are selling services and tooling to the API sector, you should have an API. In support of this way of thinking I like to highlight the API service providers I work with who follow this philosophy, and today's example is from [Postman](https://getpostman.com). If you aren't familiar with Postman, I recommend getting acquainted. It is an indispensable tool for integrating, consuming, and collaborating around the APIs you depend on, and are developing. Postman is essential to working with APIs in 2018, no matter whether you are developing your own APIs, or integrating with 3rd party APIs.

Further amplifying the usefulness of Postman as a client tool, the Postman API reflects the heart of what Postman does as not just a client, but a complete life cycle tool. The Postman API provides five separate APIs, allowing you to orchestrate your API integration, consumption, and collaboration environment.

  • Collections - The /collections endpoint returns a list of all collections that are accessible by you. The list includes your own collections and the collections that you have subscribed to.
  • Environments - The /environments endpoint returns a list of all environments that belong to you. The response contains an array of environments’ information containing the name, id, owner and uid of each environment.
  • Mocks - This endpoint fetches all the mocks that you have created.
  • Monitors - The /monitors endpoint returns a list of all monitors that are accessible by you. The response contains an array of monitors information containing the name, id, owner and uid of each monitor.
  • User - The /me endpoint allows you to fetch relevant information pertaining to the API Key being used.

The user, collections, and environments APIs reflect the heart of the Postman API client, where mocks and monitors reflect its move to be a full API life cycle solution. This stack of APIs, along with Postman as a client tool, reflects how API development, as well as API operations, should be conducted. You should be maintaining collections of APIs that exist within many environments, and you should always be mocking interfaces as you are defining, designing, and developing them. You should then also be monitoring all the APIs you depend on, whether or not the APIs are yours. If you depend on APIs, you should be monitoring them.
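
To give a sense of how simple this orchestration can be, here is a quick sketch of pulling your collections and monitors using the Postman API; I am assuming the https://api.getpostman.com base URL and the X-Api-Key header from their documentation, so double check both against the current docs before using it:

```python
import requests

API_KEY = "YOUR_POSTMAN_API_KEY"  # generated from your Postman account settings
BASE_URL = "https://api.getpostman.com"  # assumed base URL for the Postman API
HEADERS = {"X-Api-Key": API_KEY}  # assumed authentication header

def list_collections():
    """Return the collections accessible to the API key."""
    response = requests.get(f"{BASE_URL}/collections", headers=HEADERS)
    response.raise_for_status()
    return response.json().get("collections", [])

def list_monitors():
    """Return the monitors accessible to the API key."""
    response = requests.get(f"{BASE_URL}/monitors", headers=HEADERS)
    response.raise_for_status()
    return response.json().get("monitors", [])

if __name__ == "__main__":
    for collection in list_collections():
        print("collection:", collection.get("name"), collection.get("uid"))
    for monitor in list_monitors():
        print("monitor:", monitor.get("name"), monitor.get("uid"))
```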

I've long been advocating that someone develop an API environment management solution for API developers, providing a single place we can define, store, and share the configuration, keys, and other aspects of integration with the APIs we depend on. The Postman collections and environments APIs are essentially this, plus you get all the added benefits of the services and tooling that already exist as part of the platform. It demonstrates why, as an API service provider, you want to be following your own advice and having an API, because you never know when the core of your solution, or even one of its features, could become baked into other applications and services, and be the next killer feature developers can't do without.


Connecting Service Level Agreement To API Monitoring

Monitoring your API availability should be standard practice for internal and external APIs. If you have the resources to custom build API monitoring, testing, and performance infrastructure, I am guessing you already have some pretty cool stuff in place. If you don't, then you should not be reinventing the wheel, and you should be leveraging one of the existing API monitoring services on the market. When you are getting started with monitoring your APIs I recommend you begin with uptime and downtime, and once you deliver successfully on that front, I recommend you work on API performance, and the responsiveness of your APIs.

You should begin with making sure you are delivering the service level agreement you have in place with your API consumers. What, you don't have a service level agreement? No better time to start than now. If you don't already have an explicitly stated SLA in place, I recommend creating one internally, and seeing what you can do to live up to it, then once you ensure things are operating at acceptable levels, you can share it with your API consumers. I am guessing they will be pretty pleased to hear that you are taking the initiative to offer an SLA, and are committed enough to your API to work towards such a high bar for API operations.
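
If it helps to make this concrete, here is a rough sketch of the kind of math involved, computing availability from a month of monitoring checks and comparing it against a stated SLA target; the 99.9% figure and the check data structure are just assumptions for illustration:

```python
def availability(checks):
    """Percentage of monitoring checks that succeeded (e.g. returned a 2xx)."""
    if not checks:
        return 0.0
    successes = sum(1 for check in checks if check["ok"])
    return 100.0 * successes / len(checks)

def meets_sla(checks, target=99.9):
    """True if the measured availability meets the SLA target percentage."""
    return availability(checks) >= target

# A month of five-minute checks, each flagged ok/not ok by your monitoring tool.
sample = [{"ok": True}] * 8630 + [{"ok": False}] * 10
print(round(availability(sample), 3), meets_sla(sample))
```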

To help you manage defining, and then ultimately monitoring and living up to your API SLA, I recommend taking a look at APIMetrics, who are obsessively focused on API quality, performance, and reliability. They spend a lot of time monitoring public APIs, and have developed a pretty sophisticated approach to ranking and scoring your API to ensure you meet your SLA. As you can see in the picture for this story, the APIMetrics administrative dashboard provides a pretty robust way for you to measure any API you want, and establish metrics and triggers that let you know if you've met or failed to meet your SLA requirements. As I said, you could start out by monitoring internally if you are nervous about the results, but once you are ready to go prime time you have the tools to help you regularly report internally, as well as externally to your API consumers.

I wish that every stop along the life cycle had a common definition for the specific aspects of service level agreements, something that multiple API providers could measure and report upon, similar to what APIMetrics does for monitoring and performance. I'd like to see API design begin to have a baseline definition that is verifiable through a common set of machine readable API assertions. I'd love for API plans, pricing, and even terms of service to be measurable and reportable in a similar way. These are all things that should be observable through existing outputs, and reflected as part of service level agreements. I'd love to see the concept of the SLA evolve to cover all aspects of the quality of service beyond just availability. APIMetrics provides a good look at how the services we use to manage our APIs can be used to define the level of service we provide, something we should be emulating more across our API operations.


A Couple More Questions For The Equifax CEO About Their Breach

Speaking to the House Energy and Commerce Committee, former Equifax CEO Richard Smith pointed the finger at a single developer who failed to patch the Apache Struts vulnerability, saying that protocol was followed and a single developer was responsible, shifting the blame away from leadership. It sounds like a good answer, but when you operate in the space you understand that this was a systemic failure, and you shouldn't be relying on a single individual, or even a single piece of scanning software, to verify the patch was applied. You really should have many layers in place to help prevent breaches like we saw with Equifax.

If I were interviewing the CEO, I'd have a few other questions for him, getting at some of the other systemic and process failures stemming from his lack of leadership and awareness:

  • API Monitoring & Testing - You say the scanner for the Apache Struts vulnerability failed, but what about other monitoring and testing? The plugin in question was a REST plugin that allowed for API communication with your systems. Due to the vulnerability, extra junk information was allowed to get through. Where were your added API request and response integrity testing and monitoring processes? Sure you were scanning for the vulnerability, but were you keeping an eye on the details of the data being passed back and forth? API monitoring & testing has been around for many years, and service providers like Runscope do this for a living. What other layers of monitoring and testing were in place?
  • API Management - When you expose APIs like you did from Apache Struts, what does the standardized management approach look like? What sort of metering, logging, rate limiting, and analysis occurs on each endpoint, and what verification ensures that only approved clients have access? API management has been standard procedure for over a decade now for exposing APIs like this both internally and externally. Why didn't your API management process stop this sort of breach after only a couple hundred records went out? API management is about awareness regarding access to all your resources. You should have a dashboard, or at least some reports that you view as a CEO on this aspect of operations.

These are just two examples of systems and processes that should have been in place. You should not be depending on a single person, or a single tool, to catch this type of security incident. There should be many layers in place, with security triggers and notifications. Your CTO should be in tune with all of these layers, and you as the CEO should be getting briefed on how they work, and have a hand in making sure they are in place. I'm guessing that your company is doing APIs, but is dramatically behind the times when it comes to commonplace API management practices. This is your fault as the CEO. This is not the fault of a single employee, or any group of employees.

I am guessing that as a CEO you are more concerned with the selling of this data than you are with securing it in storage, or in transit. I'm guessing you are intimately aware of the layers that enable you to generate revenue, but are barely investing in the technology and processes to do this securely, while respecting the privacy of your users. They are just livestock to you. They are just products on a shelf. It shows your lack of leadership to point the finger at a single person, or a single piece of technology. There should have been many layers in place to catch this type of breach beyond a single vulnerability. It demonstrates your lack of knowledge regarding modern trends in how we secure and provide access to data, and you should never have been put in charge of such a large data brokerage company.


When To Build Or Depend On An API Service Provider

I am at that all too familiar place with a project where I am having to decide whether I want to build what I need, or depend on an API service provider. As an engineer it is always easy to think you can just build what you need, but the more experience you have, the more you realize this isn't always the smartest move. I'm at that point with API monitoring. I have a growing number of endpoints that I need to make sure are alive and active, but I also see an endless road map of detailed requests when it comes to the granularity of what “alive and active” actually means.

At first I was just going to use my default cron job service to hit the base URL and API paths defined in my OpenAPI for each project, checking for the expected HTTP status code. Then I thought I better start checking for a valid schema. Then I thought I better start checking for valid data. My API project is an open source solution, and I thought about each of my clients and implementations depending on me for the testing and monitoring of their needs. Then I thought, no way!! I'm just going to use Runscope, and build in documentation and processes so that each of my clients and implementations can also use Runscope to dial in the monitoring and testing of their API on their own terms.
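
That first cron-driven approach really is just a handful of lines. A minimal sketch, assuming an OpenAPI 2.0 style definition with host, basePath, and paths, might look like this:

```python
import json
import requests

def check_api(openapi_file, expected_status=200):
    """Hit each simple GET path defined in an OpenAPI (Swagger 2.0 style) file
    and report any path that does not return the expected HTTP status code."""
    with open(openapi_file) as handle:
        spec = json.load(handle)

    base_url = "https://" + spec["host"] + spec.get("basePath", "")
    failures = []
    for path, methods in spec.get("paths", {}).items():
        if "get" not in methods:
            continue  # only ping GET paths in this sketch
        if "{" in path:
            continue  # skip templated paths that need parameters
        response = requests.get(base_url + path)
        if response.status_code != expected_status:
            failures.append((path, response.status_code))
    return failures

if __name__ == "__main__":
    for path, status in check_api("openapi.json"):
        print(f"FAIL {path} returned {status}")
```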

Since all of my API projects are OpenAPI driven, and Runscope is an OpenAPI driven API service provider (as ALL should be), I can use this as the seed for setting up testing and monitoring. Not all of my API implementations will be using 100% of the microservices I'm defining, or 100% of the API paths available for each of the microservices I'm defining. Each microservice has its core set of paths that deliver the service, but then I'm also bundling in database, server, DNS, logging and other microservice operational level APIs that not all my implementations will care about monitoring (sadly). So it is important for my clients and implementations to be able to easily select which APIs they care about monitoring, something OpenAPI helps make easier. When it comes to exactly what API monitoring and testing means to them, I'll rely on Runscope to do the heavy lifting.

If Runscope didn't have the ability to import an OpenAPI to plant the seeds for API testing and monitoring, I might have opted to just build out a basic solution myself. The manual process of setting up my API monitoring and testing for each client would quickly become more work than just building a solution, even if it was nowhere near as good as Runscope. However, we are increasingly living in an OpenAPI driven API lifecycle where service providers of all shapes and sizes allow for the importing and exporting of common API definition formats like OpenAPI, helping API providers and architects like myself stick to what we do best, and not reinvent the wheel at each stop along the API lifecycle.

Disclosure: Runscope is an API Evangelist partner.


Open Sourcing Your API Like VersionEye

I’m always on the hunt for healthy patterns that I would like to see API providers, and API service providers consider when crafting their own strategies. It’s what I do as the API Evangelist. Find common patterns. Understand the good ones, and the bad ones. Tell stories about both, helping folks understand the possibilities, and what they should be thinking about as they plan their operations.

VersionEye, a very useful service that notifies you about security vulnerabilities, license violations, and out-dated dependencies in your Git repositories, has a nice approach to delivering their API, as well as the other components of their stack. You can either use VersionEye in the cloud, or you can deploy it on-premise.

VersionEye also has their entire stack available as Docker images, ready for deployment anywhere you need them. I wanted to have a single post that I can reference when talking about possible open source, on-premise, continuous integration approaches to delivering API solutions that actually have a sensible business model. VersionEye spans the areas that I think API providers should consider investing in, delivering SaaS or on-premise, while also delivering open source solutions, and generating sensible amounts of revenue.

Many APIs I come across do not have an open source version of their API. They may have open source SDKs, and other tooling on Github, but rarely does an API provider offer up an open source copy of their API, as well as Docker images. VersionEye's approach to operating in the cloud, and on-premise, while leveraging open source and APIs, as well as dovetailing with existing continuous integration flows, is worth bookmarking. I am feeling like this is the future of API deployment and consumption, but don't get nervous, there is still plenty of money to be made via the cloud services.


Understanding Global API Performance At The Multi-Cloud Level

APIMetrics has a pretty addictive map showing the performance of API calls between multiple cloud providers, spanning many global regions. The cloud location latency map “shows relative performance of a standard, reference GET request made to servers running on all the Google locations and via the Google global load balancer. Calls are made from AWS, Azure, IBM and Google clouds and data is stored for all steps of the API call process and the key percentiles under consideration.”

It is interesting to play with the destination of the API calls, changing the region, and visualizing how API calls begin to degrade to different regions. It really sets the stage for how we should start thinking about the deployment, monitoring, and testing of our APIs. Region by region, getting to know where our consumers are, and making sure APIs are deployed within the cloud infrastructure that delivers the best possible performance. It's not just about testing your APIs from many locations, it is also about rethinking where your APIs are deployed, leveraging a multi-cloud reality and using all the top cloud providers, while also making API deployment by region a priority.

I'm a big fan of what APIMetrics is doing with the API performance visualizations and mapping. However, I think their use of HTTPbin is a significant part of this approach to monitoring and visualizing API performance at the multi-cloud level, while also making much of the process and data behind it all public. I want to put some more thought into how they are using HTTPbin behind this approach to multi-cloud API performance monitoring. I feel like there is potential here for applying this beyond just API performance, and thinking about other testing, security, and critical aspects of reliability and doing business online with APIs today.
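
The core of that HTTPbin-driven approach is simple enough to sketch out: make the same reference GET request over and over from wherever you are running, and keep the timing percentiles. This is just an illustration of the concept, not how APIMetrics actually gathers their data:

```python
import statistics
import time
import requests

REFERENCE_URL = "https://httpbin.org/get"  # a standard reference GET request

def sample_latencies(samples=20):
    """Time a series of reference GET requests, returning latencies in milliseconds."""
    latencies = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(REFERENCE_URL, timeout=10)
        latencies.append((time.perf_counter() - start) * 1000)
    return sorted(latencies)

if __name__ == "__main__":
    results = sample_latencies()
    p95_index = max(0, int(round(0.95 * len(results))) - 1)
    print("median ms:", round(statistics.median(results), 1))
    print("95th percentile ms:", round(results[p95_index], 1))
```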

After thinking about where else this HTTPbin approach to data gathering could be applied, I want to think more about how the data behind APIMetrics' cloud location latency map can be injected into other conversations, when it comes to where we are deploying APIs, and running our API tests. Eventually I would like to see this type of multi-cloud API performance data alongside security and privacy compliance data, and even the regulations of each country as they apply to specific industries. Think about a time when we can deploy our APIs exactly where we want them based upon performance, privacy, security, regulations, and other critical aspects of doing business in the Internet age.


Making Sure Your API Service Connects To Other Stops Along The API Lifecycle

I am continuing my integration platform as a service research, and spending a little bit of time trying to understand how API providers are offering up integrations with other APIs. Along the way, I also wanted to look at how API service providers are doing it as well, opening themselves up to other stops along the API lifecycle. To understand how API service providers are allowing their users to easily connect to other services I'm taking a look at how my partners are handling this, starting with connected services at Runscope.

Runscope provides ready to go integration of their API monitoring and testing services with twenty other platforms, delivering a pretty interesting Venn diagram of services along the API lifecycle:

  • Slack - Slack to receive notifications from Runscope API test results and Traffic Alerts.
  • Datadog - Datadog to create events and metrics from Runscope API test results.
  • Splunk Cloud - Splunk Cloud to create events for API test results.
  • PagerDuty - A PagerDuty service to trigger and resolve incidents based on Runscope API test results or Traffic Alerts.
  • Amazon Web Services - Amazon Web Services to import tests from API Gateway definitions.
  • Ghost Inspector - Ghost Inspector to run UI tests from within your Runscope API tests.
  • New Relic Insights - New Relic Insights to create events from Runscope API test results.
  • Microsoft Teams - Microsoft Teams to receive notifications from Runscope API test results.
  • HipChat - HipChat to receive notifications from Runscope API test results and Traffic Alerts.
  • StatusPage.io - StatusPage.io to create metrics from Runscope API test results.
  • Big Panda - Big Panda to create alerts from Runscope API test results.
  • Keen IO - Keen IO to create events from Runscope API test results.
  • VictorOps - A VictorOps service to trigger and resolve incidents based on Runscope API test results or Traffic Alerts.
  • Flowdock - Flowdock to receive notifications from Runscope API test results and Traffic Alerts.
  • AWS CodePipeline - Integrate your Runscope API tests into AWS CodePipeline.
  • Jenkins - Trigger a test run on every build with the Jenkins Runscope plugin.
  • Zapier - integrate with 250+ services like HipChat, Asana, BitBucket, Jira, Trello and more.
  • OpsGenie - OpsGenie to send alerts from Runscope API test results.
  • Grove - Grove to send messages to your IRC channels from Runscope API test results and Traffic Alerts.
  • CircleCI - Run your API tests after a completed CircleCI build.

Anyone can integrate API monitoring and testing into their operations using the Runscope API, but these twenty services are available by default to any user, immediately opening up several important layers of our API operations. Right away you see the messaging, notifications, chat, and other support layers. Then you see the continuous integration / deployment, code, and SDK layers. Then you come across Zapier, which opens up a whole other world of endless integration possibilities. I see Runscope owning the monitoring, testing, and performance stops along the API lifecycle, but their connected services puts other stops like deployment, management, logging, analysis, and many others also within reach.

I am working on a way to track the integrations between API providers, and API service providers. I'd like to be able to visualize the relationships between providers, helping me see the integrations that are most important to different groups of end users. I'm a big advocate for API providers putting iPaaS services like Zapier and DataFire to work, opening up a whole world of integrations to their developers and end users. I also encourage API service providers to work to understand how Zapier can open up other stops along the API lifecycle. Next, everyone should be thinking about deeper integrations like Runscope is doing with their connected services, and always publishing a public page showcasing integrations, making it part of documentation, SDKs, and other aspects of your API service platform.


API SDK Licensing Notifications Using VersionEye

I have been watching VersionEye for a while now. If you aren’t familiar, they provide a service that will notify you of security vulnerabilities, license violations and out-dated dependencies in your Git repositories. I wanted to craft a story specifically about their licensing notification services, which can check all your open source dependencies against a license white list, then notify you of violations, and changes at the SDK licensing level.

The first thing I like here is the notion of an API SDK licensing whitelist. The idea that there is a service that could potentially let you know which API providers have SDKs that are licensed in a way that meets your integration requirements. I think it helps developers who are building applications on top of APIs understand which APIs they should or shouldn't be using based upon SDK licensing, while also providing an incentive for API providers to get their SDKs organized, including the licensing; you'd be surprised at how many API providers do not have their SDK house in order.

VersionEye also provides CI/CD integration, so that you can stop a build based on a licensing violation, injecting the politics of API operations, from an API consumer's perspective, into the application lifecycle. I'm interested in VersionEye's CI/CD integration, as well as the security vulnerabilities, but I wanted to make sure this approach to keeping an eye on SDK licensing was included in my SDK, monitoring, and licensing research, influencing my storytelling across these areas. Some day all API providers will have a wide variety of SDKs available, each complete with clear licensing, published on Github, and indexed as part of an APIs.json. We just aren't quite there yet, and we need more services like VersionEye to help build awareness at the API SDK licensing level to get us closer to this reality.
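
The whitelist idea itself is easy to picture. Here is a rough sketch of the kind of check a CI/CD step could run, with a made-up dependency list and whitelist standing in for what a service like VersionEye would gather for you:

```python
import sys

LICENSE_WHITELIST = {"MIT", "Apache-2.0", "BSD-3-Clause"}  # licenses your org allows

# In practice a service like VersionEye resolves this from your Git repository;
# this static mapping is just a stand-in for illustration.
dependencies = {
    "requests": "Apache-2.0",
    "some-sdk": "GPL-3.0",
}

violations = {name: lic for name, lic in dependencies.items()
              if lic not in LICENSE_WHITELIST}

if violations:
    for name, lic in violations.items():
        print(f"license violation: {name} is licensed {lic}")
    sys.exit(1)  # a non-zero exit stops the CI/CD build
```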


Containerized Microservices Monitoring Driving API Infrastructure Visualizations

While I track on what is going on with visualizations generated from data, I haven't seen much when it comes to API driven visualizations, or specifically visualizations about API infrastructure, that is new and interesting. This week I came across an interesting example in a post from Netsil about mapping microservices so that you can monitor them. It is a pretty basic visualization of each database, API, and DNS element in your stack, but it does provide a solid example of visualizing not just the deployment of database and API resources, but also DNS, and other protocols in your stack.

Netsil's microservices visualization is focused on monitoring, but I can see this type of visualization also being applied to design, deployment, management, logging, testing, and any other stop along the API lifecycle. I can see API lifecycle visualization tooling like this becoming more commonplace, and playing more of a role in making API infrastructure more observable. Visualizations are an important part of the storytelling around API operations that moves things beyond just IT and dev team monitoring, making it more observable by all stakeholders.

I'm glad to see service providers moving the needle with helping visualize API infrastructure. I'd like to see more embeddable solutions deployed to Github emerge as part of API life cycle monitoring. I'd like to see what full life cycle solutions are possible when it comes to my partners, like deployment visualizations from Tyk and Dreamfactory APIs, management visualizations with 3Scale APIs, and monitoring and testing visualizations using Runscope. I'll play around with pulling data from these providers, and publishing it to Github as YAML, which I can then easily make available as JSON or CSV for use in some basic visualizations.
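
The plumbing for that is straightforward. A small sketch, assuming a YAML file of infrastructure data committed to the repo, could convert it to JSON and CSV for whatever visualization library ends up getting used:

```python
import csv
import json
import yaml  # PyYAML

# Load infrastructure data committed to the repo as YAML (a made-up structure,
# expected to be a list of flat dictionaries).
with open("infrastructure.yaml") as handle:
    records = yaml.safe_load(handle)

# Publish the same data as JSON for JavaScript visualizations.
with open("infrastructure.json", "w") as handle:
    json.dump(records, handle, indent=2)

# And as CSV for spreadsheets and simple charting tools.
with open("infrastructure.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=sorted(records[0].keys()))
    writer.writeheader()
    writer.writerows(records)
```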

If you think about it, there really should be a wealth of open source dashboard visualizations that could be embedded on any public or private Github repository, for every API service provider out there. API providers should be able to easily map out their API infrastructure, using any of the API service providers they are already using to operate their APIs. Think of some of the embeddable API status pages we see out there already, and what Netsil is offering for mapping out infrastructure, but something for every stop along the API life cycle, helping deliver visualizations of API infrastructure no matter which stop you find yourself at.


HTTP Status Codes Are An Essential Part Of API Design And Deployment

It takes a lot of work to provide a reliable API that people can depend on. Something your consumers can trust, and that will provide them with consistent, stable, meaningful, and expected behavior. There are a lot of affordances built into the web, allowing us humans to get around, and make sense of the ocean of information on the web today. These affordances aren't always present with APIs, and we need to communicate with our consumers through the design of our API at every turn.

One area I see IT and developer groups often overlook when it comes to API design and deployment is HTTP Status Codes. That standardized list of meaningful responses that come back with every web and API request:

  • 1xx Informational - An informational response indicates that the request was received and understood. It is issued on a provisional basis while request processing continues.
  • 2xx Success - This class of status codes indicates the action requested by the client was received, understood, accepted, and processed successfully.
  • 3xx Redirection - This class of status code indicates the client must take additional action to complete the request. Many of these status codes are used in URL redirection.
  • 4xx Client errors - This class of status code is intended for situations in which the client seems to have errored.
  • 5xx Server error - The server failed to fulfill an apparently valid request.

Without HTTP Status Codes, applications won't ever really know if their API request was successful or not, and even if an application can tell there was a failure, it will never understand why. HTTP Status Codes are fundamental to the web working with browsers, and to APIs working with applications. HTTP Status Codes should never be left on the API development workbench, and API providers should always go beyond just 200 and 500 for every API implementation. Without them, NO API platform will ever scale and support any number of external integrations and applications.
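
Even a simple monitoring client shows why the full range matters. A quick sketch of branching on the status code class when checking an endpoint might look like this (the URL is just a placeholder):

```python
import requests

def check(url):
    """Classify a response by HTTP status code class, without following redirects."""
    response = requests.get(url, allow_redirects=False, timeout=10)
    code = response.status_code
    if 200 <= code < 300:
        return "success", code
    if 300 <= code < 400:
        return "redirect", code          # follow the Location header, don't fail
    if 400 <= code < 500:
        return "client error", code      # bad request, auth, or missing resource
    if 500 <= code < 600:
        return "server error", code      # the provider's problem, retry later
    return "informational or unknown", code

print(check("https://example.com/data.json"))
```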

The most important example of the importance of HTTP Status Codes in my API developer toolbox is from when I was working to assist federal government agencies in becoming compliant with the White House's order for all federal agencies to publish a machine readable index of their public data inventory on their agency website. As agencies got to work publishing JSON and XML (an API) of their data inventory, I got to work building an application that would monitor their progress, indexing the available inventory, and providing a dashboard that the GSA and OMB could use to follow their progress (or lack thereof).

I would monitor the dashboard in real time, but weekly I would also go through many of the top level cabinet agencies, and some of the more prominent sub agencies, and see if there was a page available in my browser. There were numerous agencies who I found had published their machine readable public data inventory, but had returned a variety of HTTP status codes other than 200, resulting in my monitoring application considering the agency not compliant. I wrote several stories about HTTP Status Codes, which the GSA and White House groups circulated with agencies, but ultimately I'd say this stumbling block was one of the main reasons that caused this federated public data API project to stumble early on, and never gain proper momentum, which was a HUGE loss to an open and more observable federal government. ;-(

HTTP Status Codes aren't just a nice-to-have thing when it comes to APIs, they are essential. Without HTTP Status Codes each application will deliver unreliable results, and aggregate or federated solutions that are looking to consume many APIs will become much more difficult and costly to develop. Make sure you prioritize HTTP Status Codes as part of your API design and deployment process. At the very least make sure all five classes of HTTP Status Codes are present in your release. You can always get more precise and meaningful with specific HTTP status codes later on, but ALL APIs should be employing all five classes of HTTP Status Codes by default, to prevent friction and instability in every application that builds on top of your APIs.


API Environment Portability

I was reading the post from Runscope on copying environments using their new API. Looking through the request and response structure for their API, it looks like a pretty good start when it comes to what I'd call API environment portability. I'm talking about allowing us to define, share, replicate, and reuse the definitions for our API environments across the services and tools we are depending on.

If our API environment definitions shared a common schema, like the one the Runscope API provides, I could take my Runscope environment settings, and use them in my Stoplight, Restlet Client, Postman, and other API services and tooling. It would also help me templatize and standardize my development, staging, production, and other environments across the services I use, assisting me in keeping my environment house in order, and giving me something that I can use to audit and turn over my environments to help out with security.

It is just a thought. An API environment API, possessing an evolving but common schema, just seems like one of those things that would make the entire API space work a little smoother. Making our API environments exportable, importable, and portable just seems like it would help us think things through when it comes to setting up, configuring, managing, and evolving our API environments. Who knows, maybe someday we'll have API service providers who help us manage our API environments, dictating how they are used across the growing number of API services we are depending on.
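
To make the idea a little more tangible, here is a purely hypothetical sketch of what a portable environment definition could look like; none of these field names come from an actual standard, they are just one way such a schema might shake out:

```json
{
  "name": "staging",
  "baseUrl": "https://api.staging.example.com",
  "variables": {
    "api_key": "{{STAGING_API_KEY}}",
    "timeout_ms": "5000"
  },
  "headers": {
    "Accept": "application/json"
  },
  "tags": ["staging", "internal"]
}
```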

Disclosure: Runscope and Restlet are API Evangelist partners.


Validating My API Schema As Part of My API Security Practices

I am spending more time thinking about the unknown unknowns when it comes to API security. This means thinking beyond the usual suspects like encryption, API keys, and OAuth. As I monitor the API space I'm keeping an eye out for examples of what might be security concerns that not every API provider is thinking about. I found one recently in Ars Technica, about the Federal Communications Commission (FCC) leaking, through the FCC API, the email addresses of anyone who submitted feedback on issues like the recent Net Neutrality discussion.

It sounds like the breach with the FCC API was unintentional, but it provides a pretty interesting example of a security risk that could probably be mitigated with some basic API testing and monitoring, using common services like Runscope, or Restlet Client. Adding a testing and monitoring layer to your API operations helps you look beyond just an API being up or down. You should be validating that each endpoint is returning the intended/expected schema. Just this little step of setting up a more detailed monitor can give you that brief moment to think a little more deeply about your schema, and the little things like whether or not you should be sharing the email addresses of thousands, or even millions of users.

I'm working on a JSON Schema for my Open Referral Human Services API right now. I want to be able to easily validate any API as human services compliant, but I also want to be able to set up testing and monitoring, as well as security checkups, by validating the schema. When it comes to human services data I want to be able to validate every field present, ensuring only what is required gets out via the API. I am validating primarily to ensure an API and the resulting schema are compliant with HSDS/A standards, but seeing this breach at the FCC has reminded me that taking the time to validate the schema for our APIs can also contribute to API security, for those attacks that don't come from outside, but from within.
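
Here is a minimal sketch of that kind of schema check using the Python jsonschema library, with a made-up human services record; the key detail is "additionalProperties": false, which is what would flag an unexpected field like an email address leaking through:

```python
from jsonschema import validate, ValidationError

# A simplified, made-up schema for a human services location record.
schema = {
    "type": "object",
    "properties": {
        "id": {"type": "string"},
        "name": {"type": "string"},
        "phone": {"type": "string"},
    },
    "required": ["id", "name"],
    "additionalProperties": False,  # reject fields that should not be exposed
}

record = {
    "id": "loc-123",
    "name": "Downtown Service Center",
    "email": "person@example.com",  # an unexpected field the API should not leak
}

try:
    validate(instance=record, schema=schema)
    print("response matches the expected schema")
except ValidationError as error:
    print("schema violation:", error.message)
```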

Disclosure: Restlet Client and Runscope are API Evangelist partners.


APIs For Monitoring The Performance Of Your APIs

I am a big fan of API service providers who also have APIs. It may sound silly to say, but you would be surprised how many companies are selling services to API providers and do not actually have an API themselves. So, anytime I find a good example of API service providers launching new APIs that help API providers be more successful, I'm all over it with a story.

Today's example is from my friends over at Runscope with their API Metrics API that lets you “retrieve your API tests performance metrics for each individual test, keep a pulse on your API's performance over time, and create custom internal or external dashboards with it”. You can filter the request using three different parameters:

  • region - The service region you’re using to run your tests (e.g. us1, us2, eu1, etc.)
  • timeframe - Hour, day, week, or month. Depending on the timeframe you use, the interval between the response times will be different.
  • environment_uuid - Filter by a specific environment, such as test, production, etc.

That is a pretty healthy example of everything that is API for me: an API that helps you make sure your APIs are performing as expected. You can not only understand how well your API responds, you can also dial that in by region, and paint a clear picture of how well you are doing over time. I like that you can create internal dashboards for communicating this with your organization, but I also like their approach to providing external API performance dashboards so much that I am going to add it to my list of building blocks I track on as part of my API performance research.
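
Here is a rough sketch of what pulling those metrics might look like; I am assuming the api.runscope.com host, a bearer token, and a per-test metrics path, so treat the exact URL as an assumption and check the Runscope documentation for the authoritative details:

```python
import requests

ACCESS_TOKEN = "YOUR_RUNSCOPE_TOKEN"
BUCKET_KEY = "your-bucket-key"
TEST_ID = "your-test-id"

# Assumed path for the per-test metrics resource; confirm against the Runscope docs.
url = f"https://api.runscope.com/buckets/{BUCKET_KEY}/tests/{TEST_ID}/metrics"

params = {
    "region": "us1",          # the service region the tests run from
    "timeframe": "week",      # hour, day, week, or month
    # "environment_uuid": "...",  # optionally filter to a single environment
}

response = requests.get(url, params=params,
                        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
response.raise_for_status()
print(response.json())
```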

Aight. That concludes today's showcase of an API service provider making sure they are practicing what they preach and providing APIs for their valuable services. Honestly, I find this to be a fascinating layer of the API sector, the API layer that can orchestrate APIs. I enjoy thinking about what is possible when your APIs have APIs. It makes something like API performance much more obtainable and scalable, and as Runscope does it, something you can easily communicate with your internal stakeholders and your API community.


The Depth And Dimensions Of Monitoring API Operations

When I play with my Hitch service I am always left thinking about the many dimensions of API monitoring. When you talk about API monitoring in the tech sector, conversations almost always start with the API providers and the technical details of monitoring individual APIs. Hopefully, these discussions also focus on API monitoring from the API consumer's point of view, but I wanted to also shine a light on companies like Hitch who are adding an additional dimension from the API service provider view of things, which is closer to my vantage point as an analyst.

I am an advisor to Hitch because they are a different breed of API monitoring service, one that isn't just focused on the APIs. Hitch brings in the wider view of monitoring the entire operations of an API: if documentation changes, an SDK is updated on Github, there is an update via Twitter, or a pricing change occurs, you get alerted. As a developer I enjoy being made aware of what is going on across operations, keeping me in tune with not just the technical, but also the business and politics of API platform operations.

Another reason I like Hitch, and really the reason behind me writing this post, is that they are helping API providers think about the bigger picture of API monitoring. Helping them think deeply, as well as getting their shit together when it comes to regularly sending out the critical signals we API consumers are tuning into. When you are down in the trenches of operating an API at a large company, it is easy to get caught up in the internal vacuum, forgetting to properly communicate and support your community. Hitch helps keep this bubble from forming, assisting you in keeping an external focus on your community.

If you are just embarking on your API journey I recommend tuning into API Evangelist first. ;-) However, if you are unsure of how to properly communicate and support your community I recommend you talk to the Hitch team. They’ll help get you up to speed on the best practices when it comes to API operations, and understand how to send the right signals to your community–something that will make or break your API efforts, so please don’t ignore it.

Disclosure: I am an advisor for Hitch, and they are my friends.


Its Not Just The Technology: API Monitoring Means You Care

I was just messing around with a friend online about monitoring our monitoring tools, where I said that I have a monitor set up to monitor whether or not I care about monitoring. I was half joking, but in reality, giving a shit is actually a pretty critical component of monitoring when you think about it. Nobody monitors something they don't care about. While monitoring in the world of APIs might mean a variety of things, I'm guessing that caring about those resources is a piece of every single monitoring configuration.

This has come up before in conversation with my friend Dave O'Neill of APIMetrics, where he tells stories of developers signing up for their service, running the reports they need to satisfy management or customers, then turning off the service. I think this type of behavior exists at all levels, with many reasons why someone truly doesn't care about a service actually performing as promised, and doing what it takes to rise to the occasion, resulting in the instability and unreliability of APIs that gets touted in the tech blogosphere.

There are many reasons management or developers will not truly care when it comes to monitoring the availability, reliability, and security of an API, demonstrating yet another aspect of the API space that is more business and politics than it is ever technical. We are seeing this play out online with the flakiness of websites, applications, devices, and the networks we depend on daily, and the waves of breaches, vulnerabilities, and general cyber(in)security. This is a human problem, not a technical one, but there are many services and tools that can help mitigate people not caring.


A Ranking Score to Determine If Your API Was SLA Compliant

I talked about Google's shift towards providing an SLA across their cloud services last week, and this week I'd like to highlight APIMetrics' Cloud API Service Consistency (CASC) score, and how it can be applied to determine if an API is meeting their SLA or not. APIMetrics came up with the CASC score as part of their APImetrics Insights analytics package, and has been very open with the API ranking system, as well as the data behind it.

The CASC score provides us with an API reliability and stability ranking for us to apply across our APIs, providing one very important layer of a comprehensive API rating system that we can use across the industries being impacted by APIs. I said in my story about Google's SLAs that companies should have an SLA present for their APIs. They will also need to ensure that 3rd party API service providers like APIMetrics are gathering the data, and providing us with a CASC score for all the APIs we depend on in our businesses.

I know that external monitoring service providers like APIMetrics, and API ranking systems like the CASC score, make some API providers nervous, but if you want to sell to the enterprise, government, and other critical infrastructure, you will need to get over it. I also recommend that you work hard to reduce barriers for API service providers to get at the data they need, as well as get out of their way when it comes to publishing the data publicly, and sharing or aggregating it as part of industry level ratings for groups of APIs.

If we want this API economy that we keep talking about to scale, we are going to have to make sure we have SLAs in place, and ensure we are meeting them throughout our operations. SLA monitoring helps us meet our contractual engagements with our customers, but it is also beginning to contribute to an overall ranking system being used across the industries we operate in. This is something that is only going to grow and expand, so the sooner we get the infrastructure in place to determine our service level benchmarks, and monitor and aggregate the corresponding data, the better off we are going to be in a fast-growing API economy.


Look Across My API Monitoring API Methods By Grouping Them Using Tags

Last week I was playing with defining API monitoring APIs so I can map them to each stop along the API life cycle. I took three of the API monitoring services I use (APIMetrics, API Science, and Runscope), and like I do for other areas along the API life cycle, and for common API stacks, I profiled their APIs using the OpenAPI Spec. This is standard operating procedure for any of my research areas, in that as part of profiling each company's operations, I profile the API surface area in detail.

For each of my research projects, I will include this listing of each API endpoint available as part of the work. As I was adding one for my API monitoring research, I had a thought: I wanted to reorganize the endpoints across the three API monitoring service providers, and group them by tag. So I started playing with a new way to look at the APIs available in any given APIs.json driven collection.

This is a listing of API resources available in this project's APIs.json, organized by tag.

Account

Auth

  • Delete an Authentication Setting - (DELETE) - /auth/{id}/
  • Get an existing Authentication Setting - (GET) - /auth/{id}/
  • List Authentication Settings - (GET) - /auth/
  • Update an existing Authentication Setting - (PUT) - /auth/{id}/

Buckets

Calls

Checks

Contacts

Deployments

Messages

Monitors

Reports

Shared Environments

Tags

  • List All Tags - (GET) - /tags

Templates

Test Environments

Test Steps

Tests

Tokens

Workflows

If you mouse over each actual endpoint, it will tell you the host of the API it is for. I am just playing around. I have no idea what value this would present for anyone, except for just helping provide a new dimension for viewing the APIs involved. For me, this particular one helps me understand API resources across many providers, while also encouraging me to think more critically about how I tag the APIs I define using OpenAPI Spec.
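
Generating a view like this is mostly a matter of walking each OpenAPI definition and bucketing operations by their tags. Here is a rough sketch of that grouping, assuming a handful of Swagger / OpenAPI 2.0 style JSON files (the filenames are just placeholders):

```python
import json
from collections import defaultdict

def group_by_tag(openapi_files):
    """Group every operation across multiple OpenAPI definitions by tag."""
    grouped = defaultdict(list)
    for filename in openapi_files:
        with open(filename) as handle:
            spec = json.load(handle)
        host = spec.get("host", filename)
        for path, methods in spec.get("paths", {}).items():
            for verb, operation in methods.items():
                if verb.lower() not in {"get", "post", "put", "patch", "delete"}:
                    continue  # skip path-level parameters and extensions
                for tag in operation.get("tags", ["untagged"]):
                    grouped[tag].append({
                        "summary": operation.get("summary", ""),
                        "method": verb.upper(),
                        "path": path,
                        "host": host,
                    })
    return grouped

if __name__ == "__main__":
    listing = group_by_tag(["apimetrics.json", "apiscience.json", "runscope.json"])
    for tag, operations in sorted(listing.items()):
        print(tag)
        for op in operations:
            print(f"  {op['summary']} - ({op['method']}) - {op['path']} [{op['host']}]")
```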

You can view the listing by provider, as well as the listing by tag, for my API monitoring research. I will be adding these two views to all of my core research areas, and the API stacks I define, as I have time, but I thought it would be interesting to add to my own API stack, which is probably the most defined of all of my stacks; here is the listing by provider, and the listing by tag, for my API Evangelist stack.

We'll see how this plays out as I roll out for more of my research. I am sure I will learn a lot along the way, by adding new APIs.json driven dimensions like these. I'd like to eventually have a whole toolbox of these types of views, and even some APIs.json and OpenAPI Spec driven visualizations.


Defining API Monitoring APIs So I Can Map To Each Stop Along The API Life Cycle

I am going through each of the 35+ areas of the API space that I monitor, working to bring alive the over 900 stops along the API life cycle that I have identified through my research. I'm still working through prototypes for my life cycle explorer, but the current version has organizations, tools, links, and questions, along with the title and description of each stop of the life cycle journey I am trying to bring into focus.

Part of my approach in identifying the different lines, areas, and stops along this life cycle involves taking a look at the approach of leading API providers, as well as the services being offered by companies selling their solutions to these API providers, giving me two sides of the API life cycle coin. In the last couple months I have also found another way to identify potential building blocks, and round off the ones I have, through the API definitions of leading API providers.

All I do is craft an OADF file for each of the API service providers I track on, within each area of my research. I'm spending time tonight working on my API monitoring research, so I am looking at three of the service providers I track on who have APIs. The OADF specs are not complete, but provide me a baseline definition for the surface area of each API, something I'll round out with more use. Here are the endpoints I have from each provider so far.

API Science Monitors API (oadf)

  • Get All Contacts - (GET) - /contacts.json
  • Create a Contact - (POST) - /contacts.json
  • Delete a Contact - (DELETE) - /contacts/{id}.json
  • Get a Specific Contact - (GET) - /contacts/{id}.json
  • Update a Contact - (PATCH) - /contacts/{id}.json
  • Get All Monitors - (GET) - /monitors
  • Create a Monitor - (POST) - /monitors
  • Apply Actions to Multiple Monitors - (PUT) - /monitors
  • Get a Specific Monitor - (GET) - /monitors/{id}
  • Get Checks For A Monitor - (GET) - /monitors/{id}/checks.json
  • Performance Report - (GET) - /monitors/{id}/performance
  • Show a Monitors Templates - (GET) - /monitors/{id}/templates
  • Get a Template - (GET) - /monitors/{id}/templates/{templates}
  • Create a Template - (POST) - /monitors/{id}/templates/{templates}
  • Testing your Monitor - (GET) - /monitors/{id}/test
  • Uptime Report - (GET) - /monitors/{id}/uptime.json
  • List All Tags - (GET) - /tags

Runscope API (oadf)

  • Account Resource - (GET) - /account
  • Teams Resource - (GET) - /teams/{teamId}/people
  • Team integrations list - (GET) - /teams/{teamId}/integrations
  • Returns a list of buckets. - (GET) - /buckets
  • Create a new bucket - (POST) - /buckets
  • Returns a single bucket resource. - (GET) - /buckets/{bucketKey}
  • Delete a single bucket resource. - (DELETE) - /buckets/{bucketKey}
  • Retrieve a list of messages in a bucket - (GET) - /buckets/{bucketKey}/messages
  • Clear a bucket (remove all messages). - (DELETE) - /buckets/{bucketKey}/messages
  • Create a message - (POST) - /buckets/{bucketKey}/messages
  • Retrieve a list of error messages in a bucket - (GET) - /buckets/{bucketKey}/errors
  • Retrieve the details for a single message. - (GET) - /buckets/{bucketKey}/messages/{messageId}
  • Returns a list of tests. - (GET) - /buckets/{bucketKey}/tests
  • Create a test. - (POST) - /buckets/{bucketKey}/tests
  • Delete a single test. - (DELETE) - /buckets/{bucketKey}/tests/{testId}
  • List test steps for a test. - (GET) - /buckets/{bucketKey}/tests/{testId}/steps
  • Add new test step. - (POST) - /buckets/{bucketKey}/tests/{testId}/steps
  • Update the details of a single test step. - (PUT) - /buckets/{bucketKey}/tests/{testId}/steps/{stepId}
  • Delete a step from a test. - (DELETE) - /buckets/{bucketKey}/tests/{testId}/steps/{stepId}
  • Return details of the test's environments (only those that belong to the specified test) - (GET) - /buckets/{bucketKey}/tests/{testId}/environments
  • Create new test environment. - (POST) - /buckets/{bucketKey}/tests/{testId}/environments
  • Update the details of a test environment. - (PUT) - /buckets/{bucketKey}/tests/{testId}/environments/{environmentId}
  • Returns list of shared environments for a specified bucket. - (GET) - /buckets/{bucketKey}/environments
  • Create new shared environment. - (POST) - /buckets/{bucketKey}/environments
  • Update the details of a test environment. - (PUT) - /buckets/{bucketKey}/environments/{environmentId}

APIMetrics Auth API (oadf)

  • List Authentication Settings - (GET) - /auth/
  • Delete an Authentication Setting - (DELETE) - /auth/{id}/
  • Get an existing Authentication Setting - (GET) - /auth/{id}/
  • Update an existing Authentication Setting - (PUT) - /auth/{id}/

APIMetrics Calls API (oadf)

  • List API Calls - (GET) - /calls/
  • Create new API Call - (POST) - /calls/
  • List API Calls by Authentication - (GET) - /calls/auth/{auth_id}/
  • Delete an API Call - (DELETE) - /calls/{id}/
  • Get an existing API Call - (GET) - /calls/{id}/
  • Update an existing API Call - (PUT) - /calls/{id}/
  • Trigger an API Call to run - (POST) - /calls/{id}/run
  • List Stats from before a date for an API Call - (GET) - /calls/{id}/stats/before
  • List Stats since a date for an API Call - (GET) - /calls/{id}/stats/since

APIMetrics Deployments API (oadf)

  • List all Deployment - (GET) - /deployments/
  • Create a new Deployment - (POST) - /deployments/
  • Get all Deployments for an API Call - (GET) - /deployments/call/{call_id}/
  • Get all Deployments for a Workflow - (GET) - /deployments/workflow/{workflow_id}
  • Delete a Deployment - (DELETE) - /deployments/{id}/
  • Get an existing Deployment - (GET) - /deployments/{id}/
  • Update an existing Deployment - (PUT) - /deployments/{id}/

APIMetrics Reports API (oadf)

  • List all Reports - (GET) - /reports/
  • Create a new Report - (POST) - /reports/
  • Delete a Report - (DELETE) - /reports/{id}/
  • Get an existing Report - (GET) - /reports/{id}/
  • Update an existing Report - (PUT) - /reports/{id}/

APIMetrics Tokens API (oadf)

  • List Auth Tokens - (GET) - /tokens/
  • Create a new Auth Token - (POST) - /tokens/
  • Get all tokens for an Authentication Setting - (GET) - /tokens/auth/{auth_id}/
  • Delete an Auth Token - (DELETE) - /tokens/{id}/
  • Get an existing Auth Token - (GET) - /tokens/{id}/
  • Update an Auth Token - (PUT) - /tokens/{id}/

APIMetrics Workflows API (oadf)

  • List all Workflows - (GET) - /workflows/
  • Create new Authentication Settings - (POST) - /workflows/
  • Delete a Workflow - (DELETE) - /workflows/{id}/
  • Get an existing Workflow - (GET) - /workflows/{id}/
  • Trigger a Workflow to run now - (POST) - /workflows/{id}/
  • Create a new Workflow - (PUT) - /workflows/{id}/

When you compare the definitions for these API service providers, you are comparing apples to oranges, even though they exist in the same layer of the API space. To me, having them defined will allow me to slowly weave them into my master list of common building blocks for API monitoring.

What really excites me is that for each potential stop along the API monitoring line, I might be able to actually link to specific API endpoints, and even down to the verb level. For example, I could link to the endpoint for creating a new test for API Science, APIMetrics, and Runscope, with a single button or widget, as sketched below.
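
To make that idea more concrete, here is a minimal sketch of the kind of cross-provider map such a button or widget could be driven by. The APIMetrics path comes from the listing above; the API Science and Runscope entries are placeholders I would pull from their actual definitions.

```python
# A minimal sketch of a machine-readable map from provider to the endpoint
# used for creating a new test, which a button or widget could link to.
# The APIMetrics entry comes from the listing above; the API Science and
# Runscope entries are placeholders, not their documented paths.
CREATE_TEST_ENDPOINTS = {
    "apimetrics": {"method": "POST", "path": "/calls/"},
    "apiscience": {"method": "POST", "path": "/monitors/"},                 # placeholder
    "runscope":   {"method": "POST", "path": "/buckets/{bucketKey}/tests"}, # placeholder
}

def create_test_link(provider: str) -> str:
    """Render a simple 'create test' link label for a given provider."""
    entry = CREATE_TEST_ENDPOINTS[provider]
    return f"{entry['method']} {entry['path']}"

if __name__ == "__main__":
    for name in CREATE_TEST_ENDPOINTS:
        print(name, "->", create_test_link(name))
```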

I find the API definitions for API service providers to be more interesting than some of the features they showcase via their sites. I will be continuing to identify the API service providers that I track on who have APIs, and defining them using OADF. You can find the APIs for my API monitoring research available in the project's APIs.json file, as well as the individual APIs.json and OADF files listed on the API monitoring service providers page.


Evolving My API Stack To Be A Public Repo For Sharing API Discovery, Monitoring, And Rating Information

My API Stack began as a news site, and evolved into a directory of the APIs that I monitor in the space. I published APIs.json indexes for the almost 1,000 companies I am tracking on, with almost 400 OADF files for some of the APIs I've profiled in more detail. My mission around the project so far has been to create an open source, machine readable repo for the API space.

I have had two recent developments pushing me to expand on my API Stack work. First, other entities want to contribute monitoring data and other elements I would like to see collected, but haven't had time to gather myself. Second, I have started spidering the URLs of the API portals I track on, and need a central place to store the indexes so that others can access them.

Ultimately I'd like to see the API Stack act as a public repo, where anyone can grab the data they need to discover, evaluate, integrate with, and stay in tune with what APIs are doing, or not doing. In addition to finding OADF, API Blueprint, and RAML files by crawling and indexing API portals, and publishing them in a public repo, I want to build out the other building blocks that I index with APIs.json, like pricing and TOS changes, and potentially monitoring, testing, and performance data.
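
To give a sense of what one entry in that repo might look like, here is a minimal sketch of an APIs.json-style index record, written as a Python dictionary so it can be dumped straight to JSON. The provider name and URLs are made up, and the "X-" prefixed property types for pricing, terms of service, and monitoring are illustrative placeholders rather than official property types.

```python
import json

# A minimal sketch of one APIs.json-style entry for the public repo. The
# provider, URLs, and "X-" prefixed property types are hypothetical; only the
# overall shape (name, apis, properties) follows the APIs.json pattern.
entry = {
    "name": "Example API Provider",
    "url": "https://example.com/apis.json",
    "apis": [
        {
            "name": "Example API",
            "humanURL": "https://example.com/developers",
            "baseURL": "https://api.example.com",
            "properties": [
                {"type": "Swagger", "url": "https://example.com/openapi.json"},
                {"type": "X-pricing", "url": "https://example.com/pricing.json"},
                {"type": "X-terms-of-service", "url": "https://example.com/tos.json"},
                {"type": "X-monitoring", "url": "https://example.com/monitoring.json"},
            ],
        }
    ],
}

print(json.dumps(entry, indent=2))
```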

Next I will publish some pricing, monitoring, and portal site crawl indexes to the repo for some of the top APIs out there, and start playing with the best way to store the JSON and other files, and provide an easy way to explore and play with the data. If you have any data that you are collecting and would like to contribute, or have a specific need you'd like to see tracked, let me know, and I'll add it to the roadmap.

My goal is to go for quality and completeness of data there before I look to scale and expand the quantity of information and tooling available. Let me know if you have any thoughts or feedback.


API Monitoring Should Be Baked Into Your API Strategy By Default

I've written several posts on the recent Amazon API Gateway release, and one of the side things I noted about the API solution from AWS was that API monitoring is baked in by default. As stated on the AWS API Gateway page:

After your API is deployed, Amazon API Gateway provides you with a dashboard to visually monitor calls to your services using Amazon CloudWatch, so you see performance metrics and information on API calls, data latency, and error rates.

This may seem like common sense to many people who have been in the API space for a while now, but for many API designers, architects, developers, and business folk who are just getting going, API monitoring may not be the default for all implementations.

To me, AWS baking in API monitoring by default demonstrates that the world of API monitoring has matured, marking an important milestone that all API providers should pay attention to. I've been watching this space grow over the last couple of years, and similar to the API management space, the AWS release reflects the overall health of the API sector.
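
For anyone wanting to pull that baked-in monitoring data out programmatically, here is a rough sketch using boto3 against CloudWatch. The namespace, metric, and dimension names are my assumptions about the standard API Gateway metrics, and the API name is a placeholder, so treat this as a starting point rather than a definitive recipe.

```python
from datetime import datetime, timedelta, timezone

import boto3

# A rough sketch of pulling the monitoring data API Gateway publishes to
# CloudWatch. The namespace, metric, and dimension names below are assumptions
# about the standard API Gateway metrics; check the CloudWatch console for the
# exact names exposed for your API.
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApiGateway",
    MetricName="Latency",
    Dimensions=[{"Name": "ApiName", "Value": "my-example-api"}],  # placeholder API name
    StartTime=start,
    EndTime=end,
    Period=300,                # 5 minute buckets
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "ms")
```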

If you are operating any API in 2015, monitoring should be standard operating procedure, alongside your API documentation.


Growth Of Bug Bounties, Importance of 3rd Party Monitoring, and Operating On The Open Internet


Going Beyond Just API Status And Providing An Official API Monitoring Service(s) With Your API

I've long advocated that an API Status page should be a required building block for any API operation. As I work on monitoring for my own master API stack, and as I read stories like Pingometer Keeps Your Uptime In The 9’s With Twilio SMS, I'm thinking we need more than just a simple status report for our APIs.

From an API provider standpoint, we should have a more nuanced view of our API availability, beyond just up or down--something popular API integration service providers like Runscope and API Science have been saying for a while. If our APIs are publicly available, I'm even going to suggest providers start actively sharing their API monitoring strategy and resulting data with the ecosystem--something I'm also going to be advocating for inclusion in the APIs.json index.

From an API consumer standpoint, you are going to want real-time awareness of the availability of all the APIs you depend on, and much like the provider standpoint, this view needs to be more nuanced than just whether a service is up or down. Developers need to be making tactical, run-time decisions based upon API monitoring, as well as be able to make longer term strategic decisions about which APIs they depend on, based upon API monitoring exhaust.
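
As a rough illustration of the run-time side of this, here is a minimal sketch of a consumer checking a health endpoint before routing a call, and failing over to a backup when it doesn't pass. The provider URLs, the /health path, and the {"status": "pass"} response shape are all hypothetical, not any specific provider's API.

```python
import requests

# A minimal sketch of a run-time failover decision driven by monitoring data.
# The provider URLs, health path, and {"status": "pass"} response shape are
# hypothetical placeholders, not any specific provider's actual API.
PROVIDERS = [
    {"name": "primary",  "base": "https://api.primary.example.com"},
    {"name": "fallback", "base": "https://api.fallback.example.com"},
]

def is_healthy(base_url: str) -> bool:
    """Return True if the provider's health endpoint reports a passing status."""
    try:
        resp = requests.get(f"{base_url}/health", timeout=2)
        return resp.ok and resp.json().get("status") == "pass"
    except requests.RequestException:
        return False

def pick_provider() -> str:
    """Choose the first provider whose health check passes."""
    for provider in PROVIDERS:
        if is_healthy(provider["base"]):
            return provider["base"]
    raise RuntimeError("no healthy provider available")

if __name__ == "__main__":
    print("routing calls to", pick_provider())
```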

I like Twilio's style for showcasing Pingometer as a solution. I'm thinking every API provider should cozy up with an API monitoring service provider for their own needs, while also establishing an approach they can share with their API consumers to meet their needs as well. I'm currently adding API Science and Runscope as default building blocks for my API Stack, something you will find indexed as part of each of my services' APIs.json files.

I'm working to coherently separate out the benefits API service providers like API Science and Runscope bring to the table for both API providers and API consumers, in a way that aligns with a common set of building blocks that everyone involved can benefit from throughout the API life-cycle.


Updating My WalkSensor Monitoring Network Profile For The Month

I am updating my WalkSensor profile for the month. I like to take a look at my profile each month, stay up to date on the providers that I contribute data to, and consider any possible new additions to my profile. Right now I gather data for 28 separate companies, organizations, and the city of Portland, OR, where I live. When I walk to work each day, which is about 1.4 miles, I scan about 93 separate sensors for all 28 of the companies, organizations, and government agencies.

I started walking to work each day for my health, and I kept walking because of WalkSensor. I get to gather data from water, electrical, weather, and other sensors placed around my neighborhood and the larger city. Rather than companies connecting each sensor directly to the Internet for information gathering, it was more cost effective and secure to give each sensor the ability to transmit a data signal within a small radius, say 30 feet. These signals can be picked up by any regular mobile smartphone, but it requires the WalkSensor app to actually accept, decrypt, and store each signal's message.

When I get to work or get back home, whichever is the first place I connect to a secure wifi connection, the gathered data is transmitted to each company's WalkSensor server. The process is a win for everyone involved--each company, organization, or government agency is able to deploy low-cost sensors, and I am able to generate some revenue, which pushes me to walk to work rather than drive. I get to choose who I gather data for, and I only work for them if I have an affinity with their mission and data collection goals.
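
For the technically curious, here is a rough, imagined sketch of that store-and-forward flow: readings queue up locally on the phone, then get flushed to each organization's server once a trusted wifi connection is available. Every name, path, and URL here is hypothetical, since WalkSensor is a scenario rather than a real API.

```python
import json
from pathlib import Path

import requests

# A rough, imagined sketch of the store-and-forward pattern described above:
# decrypted sensor readings are queued locally, then uploaded to each
# organization's server once a trusted wifi connection is available.
# Every name and URL is hypothetical; WalkSensor is a scenario, not a real API.
QUEUE_FILE = Path("walksensor_queue.json")

def queue_reading(org: str, sensor_id: str, payload: dict) -> None:
    """Store a decrypted sensor reading locally until it can be uploaded."""
    queue = json.loads(QUEUE_FILE.read_text()) if QUEUE_FILE.exists() else []
    queue.append({"org": org, "sensor_id": sensor_id, "payload": payload})
    QUEUE_FILE.write_text(json.dumps(queue))

def flush_queue(on_trusted_wifi: bool) -> None:
    """Upload queued readings to each organization's server, then clear the queue."""
    if not on_trusted_wifi or not QUEUE_FILE.exists():
        return
    for reading in json.loads(QUEUE_FILE.read_text()):
        url = f"https://{reading['org']}.example.com/walksensor/readings"  # hypothetical
        requests.post(url, json=reading, timeout=5)
    QUEUE_FILE.unlink()
```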

I do not make much money, about $50.00 a month, but it is worth it. I can earn extra money by taking walks in the evening or on weekends. Companies have specific data gathering needs, and my WalkSensor app will aggregate potential opportunities for me and plan a trip, encouraging me to go for a walk. I like that I am supporting a different vision of the Internet of Things, one that doesn’t involve everything being connected to the Internet, and allows for a much more sensible and controlled way to connect devices to a central online platform.


If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a GitHub issue. Even though I do this full time, I'm still a one-person show, and I miss quite a bit, and depend on my network to help me know what is going on.