API Monitoring News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API monitoring conversation and that I wanted to include in my research. I'm using all of these links to better understand how the space is monitoring their API operations.

Helping The Federal Government Get In Tune With Their API Uptime And Availability

Nobody likes to be told that their APIs are unreliable, or unavailable on a regular basis. However, it is one of those pills that ALL API providers have to swallow, and EVERY API provider should be paying for an external monitoring service to tell us when our APIs are up or down. Having a monitoring service to tell us when our APIs are having problems, complete with a status dashboard, and a history of our API’s availability, are essential building blocks of any API provider. If you expect consumers to use your API, and bake it into their systems and applications, you should commit to a certain level of availability, and offer a service level agreement if possible.

My friends over at APImetrics monitor APIs across multiple industries, but we’ve been partnering to keep an eye on federal government APIs, in support of my work in DC. They’ve recently shared an informative dashboard tracking the performance of federal government APIs, providing an interesting view of the government API landscape, and the overall reliability of the APIs they provide.

They continue by breaking down the performance of federal government APIs, including how the APIs perform from multiple North American regions across four of the leading cloud providers:

Helping us visualize the availability of federal government APIs for the last seven days, by applying their APImetrics CASC score:

I know it sucks being labeled as one of the worst performing APIs, but you also have the opportunity to be named one of the best performing APIs. ;-) This is a subject that many private sector companies struggle with, and the federal government has an extremely poor track record for monitoring their APIs, let alone sharing the information publicly. Facing up to this stuff sucks, and you are forced to answer some difficult questions about your operations, but it is also something that can’t be ignored away when you have a public API.

You can view the US Government API Performance Dashboard for July 2018 over at APImetrics. If you work for any of these agencies and would like to have a conversation about your API monitoring, testing, and performance strategy, I am happy to talk. I know the APImetrics team is happy to help too, so don’t stay in denial about your API performance and availability. Don’t be embarrassed. Tackle the problem head on, improve your overall quality of service, and then having an API monitoring and performance dashboard publicly available like this won’t hurt nearly as much–it will just be a normal part of operating an API that anyone can depend on.


Business Concerns Around APIs Monitoring Their API Consumption

I heard an interesting thing from a financial group the other day. They won’t use APIs because they know that API providers are logging and tracking their every API request, and they don’t want any 3rd party providers to know what they are doing. It is the business version of privacy and security concerns, where businesses are starting to get nervous about the whole app, big data, and API economy. It exposes another way in which bad behavior in the space will continue to come back and hurt the wider community–all because we refuse to rein in the most badly behaved among us, and because we all want to be able to exploit and capture value via our APIs–sometimes at all costs.

I’ve been a big advocate of keying up valuable data and content, requiring developers to authenticate before they get access to resources, allowing API providers to log and analyze all API traffic. When done right, this can help API providers better understand how consumers are using their resources, but it can also quickly become more about surveillance than about awareness building, and it will run developers off when they become aware of what is happening. This is one of the reasons I have surveillance as a stop along the API lifecycle, so that I can better understand this shift as it occurs, and track the common ways in which APIs are used to surveil businesses and individuals.

As surveillance, and the extraction of individual, business, and government value continues at the API layer, the positive views of APIs will continue to erode. They will eventually be seen as only about giving away your digital assets for free, and as a mechanism for surveillance and exploitation. While I miss the glory days of the API space, I don’t miss the naivety of the times. I do not have any delusions that technology is inherently good, or even neutral anymore. I expect the business and political factions of the web to use APIs for bad. I’ll keep preaching that they can be used for good, but I’ll also be calling out the ways in which they are burning bridges, and running people off, as the badly behaved tech giants and venture backed startups keep mining and surveilling their way to new revenue streams.

When APIs fall out of popularity, and companies, organizations, institutions, and government agencies fall back to more proprietary, closed-doors approaches to making digital resources available, I won’t be surprised. I’ll know that it wasn’t due to any technical fault of APIs, it was because of the business and political motivations of API providers. Profits at all costs. Not actually caring about the privacy and security of consumers. Believing they aren’t the worst behaved, and looking the other way when partners and other actors are badly behaved. It is just business in the end. It isn’t personal. I’m just hoping that individuals begin to wake up to the monitoring and exploitation that is occurring, in the same way businesses are–otherwise, we are all truly screwed.


Working To Keep Programming At Edges Of The API Conversation

I’m fascinated by the dominating power of programming languages. There are many ideological forces at play in the technology sector, but the dogma that exists within each programming language community continues to amaze me. The relative absence of programming language dogma within the world of APIs is one of the reasons I feel it has been successful, but alas, other forms of dogma tend to creep in around specific API approaches and philosophies, making API evangelism and adoption always a challenge.

The absence of programming languages in the API design, management, and testing discussion is why those disciplines have been so successful. People working in them have been focused on the language agnostic aspects of just doing business with APIs. It is also one of the reasons the API deployment conversation is still so fragmented, with so many ways of getting things done. When it comes to API deployment, everyone likes to bring their programming language beliefs to the table, and lets it affect how we actually deliver our APIs, which in my opinion is why API gateways have the potential to make a comeback, and even excel when it comes to playing the role of API intermediary, proxy, and gateway.

Programming language dogma is why many groups have so much trouble going API first. They see APIs as code, and have trouble transcending the constraints of their development environment. I’ve seen many web or HTTP APIs called Java APIs or Python APIs, or otherwise reflect a specific language style. It is hard for developers to transcend their primary programming language, learn multiple languages, or think in a language agnostic way. It is not easy for us to think outside of our boxes, consider external views, and empathize with people who operate within other programming or platform dimensions. It is just easier to see the world through our own lenses, making the world of APIs either illogical, or something we need to bend to our way of doing things.

I’m in the process of evolving from my PHP and Node.js realm to a Go reality. I’m not abandoning the PHP world, because many of my government and institutional clients still operate in this world, and I’m using Node.js for a lot of the serverless API stuff I’m doing. However, I can’t ignore the Go buzz I keep stumbling upon. I also feel like it is time for a paradigm shift, forcing me out of my comfort zone and pushing me to think in a new language. This is something I like to do every five years, shifting my reality, keeping me on my toes, and forcing me to not get too comfortable. I find that this keeps me humble and thinking across programming languages, which is something that helps me realize the power of APIs, and how they transcend programming languages, and make data, content, algorithms, and other resources more accessible via the web.


A Microprocurement Package For API Monitoring And Testing

I’m kicking off a microprocurement project with the Department of Veterans Affairs (VA) this week. I’m going to be conducting one of my API low hanging fruit campaigns for them, where I help them identify the best possible data sets available across their public websites for turning into APIs. The project is one of many small projects the federal agency is putting out there to begin working on their agency wide API platform they are calling Lighthouse, laying the groundwork for better serving veterans through a robust stack of microservices that each do one thing really well, but can be used in concert to deliver the applications the agency needs to meet its mission.

While not specifically a project to develop a microservice, the landscape analysis project is an API focused research project that is testing a new procurement model called microprocurement. At my government consulting partnership Skylight Digital, my partner in crime Chris Cairns has been pushing for a shift in how government tackles technology projects, pushing agencies to do them in smaller chunks that can essentially be put on the credit card, in less than 10K increments. So far we’ve been doing consulting, research, and training related projects, like how to create a bug bounty program, and internal evangelism strategies. Now we are kicking our campaign into high gear and pushing more agencies to think about microprocurement for microservices–the VA was the first agency to buy into this new way of thinking about how government IT gets delivered.

I am working with all of my partners to help craft potential services that would fit into the definition of a microprocurement project. I’ve been working to educate people about the process so that more API experts are on hand to respond when agencies publish microprocurement RFPs on Github like the VA did, but I also want to make sure there is a suite of microprocurement packages that federal agencies can choose from as well. I’ve been working with my partner Dave O’Neill over at APImetrics to provide me with some detail on an API testing microprocurement package that federal agencies could buy into for less than 10K, providing the following value:

  • API discovery and creation of a Postman collection
  • Annual monitoring for up to 10 APIs including the following:
    • Weekly and monthly emailed quality statements – CASC scores and SLOs/SLA attainment
    • Interface to named operations tooling for alerts
    • Public dashboards for sharing API status and SLA with all stakeholders.

This is a pretty compelling annual API monitoring package that government agencies could put on the credit card, helping ensure the quality of service provided across federal agencies when it comes to providing APIs. It is something that is desperately needed across federal agencies, which in my experience are mostly conducting “API nonitoring”, where they are not monitoring anything. Ideally, ALL federal agencies would provide an SLA with the public and private APIs they are serving up, and rely on outside entities like APImetrics to do the monitoring from an external perspective, and potentially even from different regions of the US–ensuring everyone has access to government services.

I’ll publish another story after I kick off my work with the VA. I’ll also keep beating the drum about microprocurement opportunities like this one with APImetrics. If you are an API service provider who has an interesting package you think federal agencies would be interested in–let me know. Also, if you are a government agency and would like to learn more about how to conduct microprocurement projects, feel free to drop me a line, or reach out to Chris Cairns (@cscairns) over at Skylight Digital–he can set you up. We are just getting going with these types of projects, but I’m pretty optimistic about the potential they can bring to the table when it comes to delivering IT projects–an approach that reflects the API way of doing things in small, meaningful chunks, and won’t break the bank.


OpenAPI Is The Contract For Your Microservice

I’ve talked about how generating an OpenAPI (fka Swagger) definition from code is still the dominant way that microservice owners are producing this artifact. This is a by-product of developers seeing it as just another JSON artifact in the pipeline, and of it being primarily used to create API documentation, oftentimes using Swagger UI–which is also why it is still called Swagger, and not OpenAPI. I’m continuing my campaign to help the projects I’m consulting on be more successful with their overall microservices strategy by helping them better understand how their services can work in concert, by focusing in on OpenAPI, and realizing that it is the central contract for their service.

Each Service Begins With An OpenAPI Contract There is no reason that microservices should start with writing code. It is expensive, rigid, and time consuming. The contract that a service provides to clients can be hammered out using OpenAPI, and made available to consumers as a machine readable artifact (JSON or YAML), as well as visualized using documentation like Swagger UI, Redoc, and other open source tooling. This means that teams need to put down their IDEs, and begin either handwriting their OpenAPI definitions, or begin using an open source editor like Swagger Editor, Apicurio, API GUI, or even the Postman development environment. The entire surface area of a service can be defined using OpenAPI, and then provided as a mocked version of the service, with documentation for usage by UI and other application developers. All before code has to be written, making microservices development much more agile, flexible, iterative, and cost effective.
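
To make this more concrete, here is a minimal, hand-authored OpenAPI contract for a hypothetical image service; every path, parameter, and schema below is a placeholder rather than anything from a real project:

```yaml
openapi: 3.0.0
info:
  title: Placeholder Image Service
  version: 1.0.0
paths:
  /images:
    get:
      summary: List images
      parameters:
        - name: limit
          in: query
          schema:
            type: integer
      responses:
        '200':
          description: A list of images
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: '#/components/schemas/Image'
components:
  schemas:
    Image:
      type: object
      properties:
        id:
          type: string
        url:
          type: string
```

Nothing about this artifact requires any code to exist yet, which is the whole point.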

Mocking Of Each Microservice To Hammer Out The Contract Each OpenAPI can be used to generate a mock representation of the service using Postman, Stoplight.io, or another OpenAPI-driven mocking solution. There are a number of services and tools available that take an OpenAPI and generate a mock API, as well as the resulting data. Each service should have the ability to be deployed locally as a mock service by any stakeholder, published and shared with other team members as a mock service, and shared as a demonstration of what the service does, or will do. Mock representations of services will minimize builds, the writing of code, and refactoring to accommodate rapid changes during the API development process. Code shouldn’t be generated or crafted until the surface area of an API has been worked out, and reflects the contract that each service will provide.
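
As a quick illustration, and assuming a recent version of Stoplight's open source Prism CLI is installed, standing up a local mock of the hypothetical contract above takes little more than a single command, with the mock answering on port 4010 by default:

```bash
# Serve a mock API generated from the hypothetical contract above.
prism mock openapi.yaml

# Exercise the mocked endpoint from another terminal.
curl http://localhost:4010/images
```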

OpenAPI Documentation Always Available In Repository Each microservice should be self-contained, and always documented. Swagger UI, Redoc, and other API documentation generated from OpenAPI have changed how we deliver API documentation. OpenAPI generated documentation should be available by default within each service’s repository, linked from the README, and readily available for running using static website solutions like Github Pages, or running locally through localhost. API documentation isn’t just for the microservice owner / steward to use, it is meant for other stakeholders, and potential consumers. API documentation for a service should be always on, always available, and not something that needs to be generated, built, or deployed. API documentation is a default tool that should be present for EVERY microservice, and treated as a first class citizen as part of its evolution.
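
As a sketch of what that always on documentation can look like, the static page below is roughly the standard Redoc embed, pointing at an openapi.yaml sitting in the same repository; the CDN path is an assumption and may vary by Redoc version, but a page like this can be committed alongside the service and served from Github Pages as-is:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Service API Documentation</title>
    <meta charset="utf-8" />
  </head>
  <body>
    <!-- Redoc renders the OpenAPI definition checked into this repository. -->
    <redoc spec-url="openapi.yaml"></redoc>
    <script src="https://cdn.jsdelivr.net/npm/redoc@latest/bundles/redoc.standalone.js"></script>
  </body>
</html>
```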

Bringing An API To Life Using Its OpenAPI Contract Once an OpenAPI contract has been defined, designed, and iterated upon by the service owner / steward, as well as a handful of potential consumers and clients, it is ready for development. A finished (enough) OpenAPI can be used to generate server side code using a popular language framework, be built out as part of an API gateway solution, or be delivered via common proxy services and tooling. In some cases the resulting build will be a finished API ready for use, but most of the time it will take some further connecting, refinement, and polishing before it is a production ready API. Regardless, there is no reason for an API to be developed, generated, or built until the OpenAPI contract is ready, providing the required business value each microservice is being designed to deliver. Writing code while an API will still change is an inefficient use of time in a virtualized API design lifecycle.

OpenAPI-Driven Monitoring, Testing, and Performance A ready-to-go OpenAPI contract can be used to generate API tests, monitors, and performance checks to ensure that services are meeting their business service level agreements. The details of the OpenAPI contract become the assertions of each test, which can be executed against an API on a regular basis to measure not just the overall availability of an API, but whether or not it is actually meeting the specific, granular business use cases articulated within the OpenAPI contract. Every detail of the OpenAPI becomes the contract for ensuring each microservice is doing what has been promised, and something that can be articulated and shared with humans via documentation, as well as programmatically by other systems, services, and tooling employed to monitor and test according to a wider strategy.

Empowering Security To Be Directed By The OpenAPI Contract An OpenAPI provides the full details of the surface area of an API. In addition to being used to generate tests, monitors, and performance checks, it can be used to inform security scanning, fuzzing, and other vital security practices. There are a growing number of services and tools emerging that allow for building models, policies, and executing security audits based upon OpenAPI contracts, taking the paths, parameters, definitions, security, and authentication, and using them as actionable details for ensuring security across not just an individual service, but potentially hundreds, or thousands of services being developed across many different teams. OpenAPI is quickly becoming not just the technical and business contract, but also the political contract for how you do business on the web in a secure way.

OpenAPI Provides API Discovery By Default An OpenAPI describes the entire surface area of the request and response of each API, providing 100% coverage of all the interfaces a service will possess. While this OpenAPI definition will be used to generate mocks, code, documentation, testing, monitoring, security, and serve other stops along the lifecycle, it also provides much needed discovery across groups, and by consumers. Anytime a new application is being developed, teams can search across the team Github, Gitlab, Bitbucket, or Team Foundation Server (TFS), and see what services already exist before they begin planning any new services. Service catalogs, directories, search engines, and other discovery mechanisms can use the OpenAPIs across services to index them, and make them available to other systems, applications, and most importantly to other humans who are looking for services that will help them solve problems.

OpenAPI Delivers The Integration Contract For Clients OpenAPI definitions can be imported into Postman, Stoplight, and other API design, development, and client tooling, allowing for quick setup of environments, and collaboration around integration across teams. OpenAPIs are also used to generate SDKs, and deploy them using existing continuous integration (CI) pipelines, by companies like APIMATIC. OpenAPIs deliver the client contract we need to learn about an API, get to work developing a new web or mobile application, or manage updates and version changes as part of our existing CI pipelines. OpenAPIs deliver the integration contract needed for all levels of clients, helping teams go from discovery to integration with as little friction as possible. Without this contract in place, on-boarding with one service is time consuming, and doing it across tens, or hundreds of services becomes impossible.

OpenAPI Delivers Governance At Scale Across Teams Delivering consistent APIs within a single team takes discipline. Delivering consistent APIs across many teams takes governance. OpenAPI provides the building blocks to ensure APIs are defined, designed, mocked, deployed, documented, tested, monitored, performant, secured, discovered, and integrated with consistently. The OpenAPI contract is an artifact that governs every stop along the lifecycle, and then becomes the measure for how well each service is delivering at scale, across not just tens, but hundreds, or thousands of services, spread across many groups. Without the OpenAPI contract, API governance is non-existent, or at best extremely cumbersome. The OpenAPI contract is not just top down governance telling teams what they should be doing, it is also the bottom up contract that lets the service owners / stewards who are delivering quality services on the ground inform governance, and lead efforts across many teams.

I can’t articulate enough the importance of the OpenAPI contract to each microservice, as well as to the overall organizational and project microservice strategy. I know that many folks will dismiss the role that OpenAPI plays, but look at the list of members who govern the specification. Consider that Amazon, Google, and Azure ALL have baked OpenAPI into their microservice delivery services and tooling. OpenAPI isn’t a WSDL. An OpenAPI contract is how you will articulate what your microservice will do from inception to deprecation. Make it a priority, and don’t treat it as just an output from your legacy way of producing code. Roll up your sleeves, spend time editing it by hand, and load it into 3rd party services to see the contract for your microservice in different ways, through different lenses. Eventually you will begin to see it is much more than just another JSON artifact laying around in your repository.


Aggregating Multiple Github Account RSS Feeds Into Single JSON API Feed

Github is the number one signal in my API world. The activity that occurs via Github is more important than anything I find across Twitter, Facebook, LinkedIn, and other social channels. Commits to repositories, and the other social activity that occurs around coding projects, are infinitely more valuable and telling regarding what a company is up to than the deliberate social media signals blasted out via other channels. I’m always working to dial in my monitoring of Github using the Github API, but also via the RSS feeds that are present on the public side of the platform.

I feel RSS is often overlooked as an API data source, but I find that RSS is not only alive and well in 2018, it is something that is actively used on many platforms. The problem with RSS for me is that the XML isn’t always conducive to working with in many of my JavaScript enabled applications, and I also tend to want to aggregate, filter, and translate RSS feeds into more meaningful JSON. To help me accomplish this for Github, I crafted a simple PHP RSS aggregator and converter script which I can run in a variety of situations. I published the basic script to Github as a Gist, for easy reference.

The simple PHP script just takes an array of Github users, loops through them, pulls their RSS feeds, and then aggregates them into a single array, sorts by date, and then outputs as JSON. It is a pretty crude JSON API, but it provides me with what I need to be able to use these RSS feeds in a variety of other applications. I’m going to be mining the feeds for a variety of signals, including repo and user information, which I can then use within other applications. The best part is this type of data mining doesn’t require a Github API key, and is publicly available, allowing me to scale up much further than I could with the Github API alone.
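
The Gist remains the place to grab the working version, but a rough sketch of the approach looks something like the following; the usernames are placeholders, and the script leans on the public Atom feed Github exposes for each user at github.com/{user}.atom:

```php
<?php
// Hypothetical list of Github users to aggregate -- swap in the accounts you monitor.
$users = ['example-user-one', 'example-user-two'];

$items = [];
foreach ($users as $user) {
    // Github publishes each user's public activity as an Atom feed.
    $feed = simplexml_load_file("https://github.com/{$user}.atom");
    if ($feed === false) {
        continue; // Skip any user whose feed could not be fetched.
    }
    foreach ($feed->entry as $entry) {
        $items[] = [
            'user'  => $user,
            'title' => (string) $entry->title,
            'date'  => (string) $entry->updated,
        ];
    }
}

// Sort the combined feed by date, newest first.
usort($items, function ($a, $b) {
    return strtotime($b['date']) - strtotime($a['date']);
});

// Output the aggregate as a crude JSON feed.
header('Content-Type: application/json');
echo json_encode($items);
```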

Next, I have a couple of implementations in mind. I’m going to be creating a Github user leaderboard, where I stream the updates using Streamdata.io to a dashboard. Before I do that, I will have to aggregate users and repos, incrementing each commit made, and publishing it as a separate JSON feed. I want to be able to see the raw updates, but also just the most changed repositories, and most active users across different segments of the API space. Streamdata.io allows me to take these JSON feeds and stream them to the dashboard using Server-Sent Events (SSE), and then apply each update using JSON Patch, making for a pretty efficient way to put Github to work as part of my monitoring of activity across the API space.
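
For reference, those incremental updates arrive as standard JSON Patch (RFC 6902) documents, so a new Github event being prepended to the aggregated feed would look something like this hypothetical patch, with all the values being placeholders:

```json
[
  {
    "op": "add",
    "path": "/0",
    "value": {
      "user": "example-user-one",
      "title": "example-user-one pushed to master in example-org/example-repo",
      "date": "2018-08-01T12:00:00Z"
    }
  }
]
```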


More Outputs Are Better When It Comes To Establishing An API Observability Ranking

I’ve been evolving an observability ranking for the APIs I track on for a couple years now. I’ve been using the phrase to describe my API profiling and measurement approach since I first learned about the concept from Stripe. There are many perspectives floating around the space about what observability means in the context of technology, however mine is focused completely on APIs, and is more about communicating with external stakeholders than it is just about monitoring systems. To recap, the Wikipedia definition for observability is:

Formally, a system is said to be observable if, for any possible sequence of state and control vectors, the current state can be determined in finite time using only the outputs (this definition is slanted towards the state space representation). Less formally, this means that from the system’s outputs it is possible to determine the behavior of the entire system. If a system is not observable, this means the current values of some of its states cannot be determined through output sensors.

Most of the conversations occurring in the tech sector are focused on monitoring operations, and while this is a component of my definition, I lean more heavily on observation occurring beyond just internal groups, with observability being about helping keep partners, consumers, regulators, and other stakeholders more aware regarding how complex systems work, or do not work. I feel that observability is critical to the future of algorithms, and making sense of how technology is impacting our world, and APIs will play a critical role in ensuring that platforms have the external outputs required for delivering meaningful observability.

When it comes to quantifying the observability of platforms and algorithms, the more outputs available the better. Everything should have APIs for determining the inputs and outputs of any algorithm, or other system, but there should also be operational level APIs that give insight into the underlying compute, storage, logging, DNS, and other layers of delivering technological solutions. There should also be higher level business layer APIs surrounding communication via blog RSS, Twitter feeds, and support channels like email, ticketing, and other systems. The more outputs around platform operations there are, the more we can measure, quantify, and determine how observable a platform is using the outputs that already exist for ALL the systems in use across operations.

To truly measure the observability of a platform I need to be able to measure the technology, business, and politics surrounding its operation. If communication and support exist in the shadows, a platform is not observable, even if there are direct platform APIs. If you can’t get at the operational layer like logging, or possibly the Github repositories used as part of continuous integration or deployment pipelines, observability is diminished. Of course, not all of these outputs should be publicly available by default, but in many cases, there really is no reason they can’t be. At a minimum there should be API access, with some sort of API management layer in place, allowing 3rd party auditors, and analysts like me, to get at some, or all of the existing outputs, allowing us to weigh in on an overall platform observability workflow.

As I continue to develop my API observability ranking algorithm, the first set of values I calculate is the number of existing outputs an API has, taking into consideration the scope of core and operational APIs, but also whether I can get at operations via Twitter, Github, LinkedIn, and other 3rd party APIs. I find many of these channels more valuable for understanding platform operations than the direct APIs themselves. Chatter by employees, and commits via Github, can provide more telling signals about what is happening than the intentional signals emitted directly by the platform itself. Overall, the more outputs available the better. API observability is all about leveraging existing outputs, and when companies, organizations, institutions, and government agencies are using solutions that have existing APIs, they are more observable by default, which can have a pretty significant impact in helping us understand the impact a technological solution is having on everyone involved.
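
Purely as an illustration, and not my actual ranking formula, that first pass amounts to little more than tallying which outputs exist for a platform; the channels and weights below are hypothetical:

```php
<?php
// Hypothetical inventory of outputs discovered for a single platform.
$outputs = [
    'core_apis'        => true,
    'operational_apis' => false,
    'status_page'      => false,
    'blog_rss'         => true,
    'github'           => true,
    'twitter'          => true,
    'support_channel'  => true,
];

// Hypothetical weights -- operational outputs count for more than social ones.
$weights = [
    'core_apis'        => 3,
    'operational_apis' => 3,
    'status_page'      => 2,
    'blog_rss'         => 1,
    'github'           => 2,
    'twitter'          => 1,
    'support_channel'  => 1,
];

// Tally the score for the outputs that actually exist.
$score = 0;
foreach ($outputs as $channel => $exists) {
    if ($exists) {
        $score += $weights[$channel];
    }
}

echo "Observability outputs score: {$score} / " . array_sum($weights) . "\n";
```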


A Health Check Response Format for HTTP APIs

My friend Irakli Nadareishvili has published a new health check response format for HTTP APIs that I wanted to make sure was documented as part of my research. The way I do this is to write a blog post, forever sealing this work in time, and adding it to the public record that is my API Evangelist brain. Since I use my blog as a reference when writing white papers, guides, blueprints, policies, and other aspects of my work, I need as many references to usable standards like this as I can get.

I am going to just share the introduction from Irakli’s draft, as it says it all:

The vast majority of modern APIs driving data to web and mobile applications use HTTP [RFC7230] as a transport protocol. The health and uptime of these APIs determine availability of the applications themselves. In distributed systems built with a number of APIs, understanding the health status of the APIs and making corresponding decisions, for failover or circuit-breaking, are essential for providing highly available solutions.

There exists a wide variety of operational software that relies on the ability to read health check response of APIs. There is currently no standard for the health check output response, however, so most applications either rely on the basic level of information included in HTTP status codes [RFC7231] or use task-specific formats.

Usage of task-specific or application-specific formats creates significant challenges, disallowing any meaningful interoperability across different implementations and between different tooling.

Standardizing a format for health checks can provide any of a number of benefits, including:

  • Flexible deployment - since operational tooling and API clients can rely on rich, uniform format, they can be safely combined and substituted as needed.
  • Evolvability - new APIs, conforming to the standard, can safely be introduced in any environment and ecosystem that also conforms to the same standard, without costly coordination and testing requirements.

This document defines a “health check” format using the JSON format [RFC7159] for APIs to use as a standard point for the health information they offer. Having a well-defined format for this purpose promotes good practice and tooling.

Here is an example JSON response, showing the standard in action:
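
The values below are placeholders rather than anything pulled from the draft itself, but they follow the shape it describes, with status being the only strictly required member:

```json
{
  "status": "pass",
  "version": "1",
  "releaseId": "1.2.2",
  "description": "health of an example service",
  "checks": {
    "uptime": [
      {
        "componentType": "system",
        "observedValue": 1209600,
        "observedUnit": "s",
        "status": "pass",
        "time": "2018-01-17T03:36:48Z"
      }
    ]
  }
}
```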

I have seen a number of different approaches to providing health checks in APIs, from a single ping path, to proxying of the Docker Engine API for the microservices Docker container. It makes sense to have a standard for this, and I’ll reference Irakli’s important work from here on out as I’m advising on projects, or implementing my own.


Orchestrating API Integration, Consumption, and Collaboration with the Postman API

You hear me say it all the time–if you are selling services and tooling to the API sector, you should have an API. In support of this way of thinking, I like to highlight the API service providers I work with who follow this philosophy, and today’s example is from [Postman](https://getpostman.com). If you aren’t familiar with Postman, I recommend getting acquainted. It is an indispensable tool for integrating, consuming, and collaborating around the APIs you depend on, and are developing. Postman is essential to working with APIs in 2018, no matter whether you are developing them, or integrating with 3rd party APIs.

Further amplifying the usefulness of Postman as a client tool, the Postman API reflects the heart of what Postman does as not just a client, but a complete life cycle tool. The Postman API provides five separate APIs, allowing you to orchestrate your API integration, consumption, and collaboration environment:

  • Collections - The /collections endpoint returns a list of all collections that are accessible by you. The list includes your own collections and the collections that you have subscribed to.
  • Environments - The /environments endpoint returns a list of all environments that belong to you. The response contains an array of environments’ information containing the name, id, owner and uid of each environment.
  • Mocks - This endpoint fetches all the mocks that you have created.
  • Monitors - The /monitors endpoint returns a list of all monitors that are accessible by you. The response contains an array of monitors information containing the name, id, owner and uid of each monitor.
  • User - The /me endpoint allows you to fetch relevant information pertaining to the API Key being used.
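
To give a feel for how approachable this orchestration layer is, here is a rough sketch of pulling your collections with PHP and curl; authentication is handled with the X-Api-Key header, and the key value below is obviously a placeholder:

```php
<?php
// Placeholder key -- generate a real API key from your Postman account settings.
$apiKey = 'YOUR-POSTMAN-API-KEY';

// Ask the Postman API for the collections this key can access.
$ch = curl_init('https://api.getpostman.com/collections');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['X-Api-Key: ' . $apiKey]);
$response = curl_exec($ch);
curl_close($ch);

// Print the name and uid of each collection returned.
$collections = json_decode($response, true);
foreach ($collections['collections'] as $collection) {
    echo $collection['name'] . ' (' . $collection['uid'] . ')' . "\n";
}
```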

The user, collections, and environments APIs reflect the heart of the Postman API client, where the mocks and monitors APIs reflect its move to be a full API life cycle solution. This stack of APIs, and Postman as a client tool, reflect how API development, as well as API operations, should be conducted. You should be maintaining collections of APIs that exist within many environments, and you should always be mocking interfaces as you are defining, designing, and developing them. You should then also be monitoring all the APIs you depend on–whether or not the APIs are yours. If you depend on APIs, you should be monitoring them.

I’ve long been advocating that someone develop an API environment management solution for API developers, providing a single place where we can define, store, and share the configuration, keys, and other aspects of integration with the APIs we depend on. The Postman collections and environments APIs are essentially this, plus you get all the added benefits of the services and tooling that already exist as part of the platform. It demonstrates why, as an API service provider, you want to be following your own advice and having an API, because you never know when the core of your solution, or even one of its features, could potentially become baked into other applications and services, and be the next killer feature developers can’t do without.


Connecting Service Level Agreement To API Monitoring

Monitoring your API availability should be standard practice for internal and external APIs. If you have the resources to custom build API monitoring, testing, and performance infrastructure, I am guessing you already have some pretty cool stuff in place. If you don’t, then you should not be reinventing the wheel, and you should be leveraging one of the existing API monitoring services on the market. When you are getting started with monitoring your APIs, I recommend you begin with uptime and downtime, and once you deliver successfully on that front, I recommend you work on API performance, and the responsiveness of your APIs.

You should begin by making sure you are delivering the service level agreement you have in place with your API consumers. What, you don’t have a service level agreement? No better time to start than now. If you don’t already have an explicitly stated SLA in place, I recommend creating one internally, and seeing what you can do to live up to your API SLA, then once you ensure things are operating at acceptable levels, you can share it with your API consumers. I am guessing they will be pretty pleased to hear that you are taking the initiative to offer an SLA, and are committed enough to your API to work towards such a high bar for API operations.

To help you manage defining, and then ultimately monitoring and living up to your API SLA, I recommend taking a look at APIMetrics, which is obsessively focused on API quality, performance, and reliability. They spend a lot of time monitoring public APIs, and have developed a pretty sophisticated approach to ranking and scoring your API to ensure you meet your SLA. As you can see in the picture for this story, the APIMetrics administrative dashboard provides a pretty robust way for you to measure any API you want, and establish metrics and triggers that let you know if you’ve met or failed to meet your SLA requirements. As I said, you could start out by monitoring internally if you are nervous about the results, but once you are ready to go prime time, you have the tools to help you regularly report internally, as well as externally to your API consumers.

I wish that every stop along the life cycle had a common definition for defining a specific aspect of service level agreements, and was something that multiple API providers could measure and report upon, similar to what APIMetrics does for monitoring and performance. I’d like to see API design begin to have a baseline definition that is verifiable through a common set of machine readable API assertions. I’d love for API plans, pricing, and even terms of service to be measurable, and reportable, in a similar way. These are all things that should be observable through existing outputs, and reflected as part of service level agreements. I’d love to see the concept of the SLA evolve to cover all aspects of the quality of service beyond just availability. APIMetrics provides a good look at how the services we use to manage our APIs can be used to define the level of service we provide, something that we can be emulating more across our API operations.


A Couple More Questions For The Equifax CEO About Their Breach

Speaking to the House Energy and Commerce Committee, former Equifax CEO Richard Smith pointed the finger at a single developer who failed to patch the Apache Struts vulnerability, saying that protocol was followed, and that a single developer was responsible, shifting the blame away from leadership. It sounds like a good answer, but when you operate in the space you understand that this was a systemic failure, and you shouldn’t be relying on a single individual, or even a single piece of scanning software, to verify the patch was applied. You really should have many layers in place to help prevent breaches like we saw with Equifax.

If I was interviewing the CEO, I’d have a few other questions for him, getting at some of the other systemic and process failures based upon his lack of leadership, and awareness:

  • API Monitoring & Testing - You say the scanner for the Apache Struts vulnerability failed, but what about other monitoring and testing? The plugin in question was a REST plugin that allowed for API communication with your systems. Due to the vulnerability, extra junk information was allowed to get through. Where were your added API request and response integrity testing and monitoring processes? Sure, you were scanning for the vulnerability, but were you keeping an eye on the details of the data being passed back and forth? API monitoring & testing has been around for many years, and service providers like Runscope do this for a living. What other layers of monitoring and testing were in place?
  • API Management - When you expose APIs like you did from Apache Struts, what does the standardized management approach look like? What sort of metering, logging, rate limiting, and analysis occurs on each endpoint, and what verification occurs, ensuring that only approved clients have access? API management has been standard procedure for over a decade now for exposing APIs like this, both internally and externally. Why didn’t your API management process stop this sort of breach after only a couple hundred records went out? API management is about awareness regarding access to all your resources. You should have a dashboard, or at least some reports, that you view as a CEO on this aspect of operations.

These are just two examples of systems and processes that should have been in place. You should not be depending on a single person, or a single tool, to catch this type of security incident. There should be many layers in place, with security triggers and notifications in place. Your CTO should be in tune with all of these layers, and you as the CEO should be getting briefed on how they work, and have a hand in making sure they are in place. I’m guessing that your company is doing APIs, but is dramatically behind the times when it comes to commonplace API management practices. This is your fault as the CEO. This is not the fault of a single employee, or any group of employees.

I am guessing that as a CEO you are more concerned with the selling of this data than you are with securing it in storage, or in transit. I’m guessing you are intimately aware of the layers that enable you to generate revenue, but you are barely investing in the technology and processes to do this securely, while respecting the privacy of your users. They are just livestock to you. They are just products on a shelf. It shows your lack of leadership to point the finger at a single person, or a single piece of technology. There should have been many layers in place to catch this type of breach beyond a single vulnerability. It demonstrates your lack of knowledge regarding modern trends in how we secure and provide access to data, and you should never have been put in charge of such a large data brokerage company.


When To Build Or Depend On An API Service Provider

I am at that all too familiar place with a project where I am having to decide whether I want to build what I need, or depend on an API service provider. As an engineer it is always easy to think you can just build what you need, but the more experience you have, the more you realize this isn’t always the smartest move. I’m at that point with API monitoring. I have a growing number of endpoints that I need to make sure are alive and active, but I also see an endless road map of detailed requests when it comes to the granularity of what “alive and active” actually means.

At first I was just going to use my default cron job service to hit the base url and API paths defined in my OpenAPI for each project, checking for the expected HTTP status code. Then I thought I better start checking for a valid schema. Then I thought I better start checking for valid data. My API project is an open source solution, and I thought about each of my clients and implementations asking me for testing and monitoring to meet their needs. Then I thought, no way!! I’m just going to use Runscope, and build in documentation and processes so that each of my clients and implementations can also use Runscope to dial in monitoring and testing of their API on their own terms.
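
That first, crude version of the idea is easy enough to sketch out as something a cron job could run, walking the GET paths in an OpenAPI (JSON) definition and checking each one for the expected HTTP status code; the filename, base URL, and expected status below are all simplified placeholders:

```php
<?php
// Placeholder locations -- point these at a real OpenAPI (JSON) file and base URL.
$definition = json_decode(file_get_contents('openapi.json'), true);
$baseUrl    = 'https://api.example.com';

$results = [];
foreach ($definition['paths'] as $path => $operations) {
    // This crude first pass only checks simple GET paths without parameters.
    if (!isset($operations['get']) || strpos($path, '{') !== false) {
        continue;
    }

    $ch = curl_init($baseUrl . $path);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    $results[] = [
        'path'   => $path,
        'status' => $status,
        'ok'     => ($status === 200),
    ];
}

echo json_encode($results, JSON_PRETTY_PRINT) . "\n";
```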

Since all of my API projects are OpenAPI driven, and Runscope is an OpenAPI driven API service provider (as ALL should be), I can use this as the seed for setting up testing and monitoring. Not all of my API implementations will be using 100% of the microservices I’m defining, or 100% of the API paths available for each of the microservices I’m defining. Each microservice has its core set of paths that deliver the service, but then I’m also bundling in database, server, DNS, logging and other microservice operational level APIs that not all my implementations will care about monitoring (sadly). So it is important for my clients and implementations to be able to easily select which APIs they care about monitoring, which OpenAPI will help do the heavy lifting for. When it comes to exactly what API monitoring and testing means to them, I’ll rely on Runscope to do the heavy lifting.

If Runscope didn’t have the ability to import an OpenAPI to plant the seeds for API testing and monitoring, I might have opted to just build out a basic solution myself. The manual process of setting up API monitoring and testing for each client would quickly become more work than just building a solution–even if it was nowhere near as good as Runscope. However, we are increasingly living in an OpenAPI driven API lifecycle, where service providers of all shapes and sizes allow for the importing and exporting of common API definition formats like OpenAPI, helping API providers and architects like myself stick to what we do best, and not reinvent the wheel at each stop along the API lifecycle.

Disclosure: Runscope is an API Evangelist partner.


Open Sourcing Your API Like VersionEye

I’m always on the hunt for healthy patterns that I would like to see API providers, and API service providers consider when crafting their own strategies. It’s what I do as the API Evangelist. Find common patterns. Understand the good ones, and the bad ones. Tell stories about both, helping folks understand the possibilities, and what they should be thinking about as they plan their operations.

One very useful API, VersionEye, notifies you about security vulnerabilities, license violations, and out-dated dependencies in your Git repositories, and has a nice approach to delivering their API, as well as the other components of their stack. You can either use VersionEye in the cloud, or you can deploy it on-premise:

VersionEye also has their entire stack available as Docker images, ready for deployment anywhere you need them. I wanted to have a single post that I can reference when talking about possible open source, on-premise, continuous integration approaches to delivering API solutions that actually have a sensible business model. VersionEye spans the areas that I think API providers should consider investing in, delivering SaaS or on-premise, while also delivering open source solutions, and generating sensible amounts of revenue.

Many APIs I come across do not have an open source version of their API. They may have open source SDKs, and other tooling on Github, but rarely does an API provider offer up an open source copy of their API, as well as Docker images. VersionEye’s approach to operating in the cloud, and on-premise, while leveraging open source and APIs, as well as dovetailing with existing continuous integration flows, is worth bookmarking. I am feeling like this is the future of API deployment and consumption, but don’t get nervous, there is still plenty of money to be made via the cloud services.


Understanding Global API Performance At The Multi-Cloud Level

APIMetrics has a pretty addictive map showing the performance of API calls between multiple cloud providers, spanning many global regions. The cloud location latency map “shows relative performance of a standard, reference GET request made to servers running on all the Google locations and via the Google global load balancer. Calls are made from AWS, Azure, IBM and Google clouds and data is stored for all steps of the API call process and the key percentiles under consideration.”

It is interesting to play with the destination of the API calls, changing the region, and visualizing how API calls begin to degrade to different regions. It really sets the stage for how we should start thinking about the deployment, monitoring, and testing of our APIs. Region by region, getting to know where our consumers are, and making sure APIs are deployed within the cloud infrastructure that delivers the best possible performance. It’s not just about testing your APIs in a single location from many locations, it is also about rethinking where your APIs are deployed, leveraging a multi-cloud reality, using all the top cloud providers, and making API deployment by region a priority.

I’m a big fan of what APIMetrics is doing with the API performance visualizations and mapping. However, I think their use of HTTPbin is a significant part of this approach to monitoring and visualizing API performance at the multi-cloud level, while also making much of the process and data behind it all public. I want to put some more thought into how they are using HTTPbin behind this approach to multi-cloud API performance monitoring. I feel like there is potential here for applying this beyond just API performance, and thinking about other testing, security, and critical aspects of reliability and doing business online with APIs today.

After thinking about where else this HTTPbin approach to data gathering could be applied, I want to think more about how the data behind APIMetrics’ cloud location latency map can be injected into other conversations, when it comes to where we are deploying APIs, and running our API tests. Eventually I would like to see this type of multi-cloud API performance data alongside security and privacy compliance data, and even the regulations of each country as they apply to specific industries. Think about a time when we can deploy our APIs exactly where we want them based upon performance, privacy, security, regulations, and other critical aspects of doing business in the Internet age.


Making Sure Your API Service Connects To Other Stops Along The API Lifecycle

I am continuing my integration platform as a service research, and spending a little bit of time trying to understand how API providers are offering up integrations with other APIs. Along the way, I also wanted to look at how API service providers are doing it as well, opening themselves up to other stops along the API lifecycle. To understand how API service providers are allowing their users to easily connect to other services, I’m taking a look at how my partners are handling this, starting with connected services at Runscope.

Runscope provides ready to go integration of their API monitoring and testing services with twenty other platforms, delivering a pretty interesting Venn diagram of services along the API lifecycle:

  • Slack - Slack to receive notifications from Runscope API test results and Traffic Alerts.
  • Datadog - Datadog to create events and metrics from Runscope API test results.
  • Splunk Cloud - Splunk Cloud to create events for API test results.
  • PagerDuty - A PagerDuty service to trigger and resolve incidents based on Runscope API test results or Traffic Alerts.
  • Amazon Web Services - Amazon Web Services to import tests from API Gateway definitions.
  • Ghost Inspector - Ghost Inspector to run UI tests from within your Runscope API tests.
  • New Relic Insights - New Relic Insights to create events from Runscope API test results.
  • Microsoft Teams - Microsoft Teams to receive notifications from Runscope API test results.
  • HipChat - HipChat to receive notifications from Runscope API test results and Traffic Alerts.
  • StatusPage.io - StatusPage.io to create metrics from Runscope API test results.
  • Big Panda - Big Panda to create alerts from Runscope API test results.
  • Keen IO - Keen IO to create events from Runscope API test results.
  • VictorOps - A VictorOps service to trigger and resolve incidents based on Runscope API test results or Traffic Alerts.
  • Flowdock - Flowdock to receive notifications from Runscope API test results and Traffic Alerts.
  • AWS CodePipeline - Integrate your Runscope API tests into AWS CodePipeline.
  • Jenkins - Trigger a test run on every build with the Jenkins Runscope plugin.
  • Zapier - integrate with 250+ services like HipChat, Asana, BitBucket, Jira, Trello and more.
  • OpsGenie - OpsGenie to send alerts from Runscope API test results.
  • Grove - Grove to send messages to your IRC channels from Runscope API test results and Traffic Alerts.
  • CircleCI - Run your API tests after a completed CircleCI build.

Anyone can integrate API monitoring and testing into their operations using the Runscope API, but these twenty services are available by default to any user, immediately opening up several important layers of our API operations. Immediately you see the messaging, notifications, chat, and other support layers. Then you see the continuous integration / deployment, code, and SDK layers. Then you come across Zapier, which opens up a whole other world of endless integration possibilities. I see Runscope owning the monitoring, testing, and performance stops along the API lifecycle, but their connected services put other stops like deployment, management, logging, analysis, and many others also within reach.

I am working on a way to track the integrations between API providers, and API service providers. I’d like to be able to visualize the relationships between providers, helping me see the integrations that are most important to different groups of end users. I’m a big advocate for API providers to put iPaaS services like Zapier and DataFire to work, opening up a whole world of integrations to their developers and end users. I also encourage API service providers to work to understand how Zapier can open up other stops along the API lifecycle. Next, everyone should be thinking about deeper integrations like Runscope is doing with their connected services, and make sure you always publish a public page showcasing integrations, making it part of documentation, SDKs, and other aspects of your API service platform.


API SDK Licensing Notifications Using VersionEye

I have been watching VersionEye for a while now. If you aren’t familiar, they provide a service that will notify you of security vulnerabilities, license violations and out-dated dependencies in your Git repositories. I wanted to craft a story specifically about their licensing notification services, which can check all your open source dependencies against a license white list, then notify you of violations, and changes at the SDK licensing level.

The first thing I like here, is the notion of an API SDK licensing whitelist. The idea that there is a service that could potentially let you know which API providers have SDKs that are licensed in a way that meets your integration requirements. I think it helps developers who are building applications on top of APIs understand which APIs they should or shouldn’t be using based upon SDK licensing, while also providing an incentive for API providers to get their SDKs organized, including the licensing–you’d be surprised at how many API providers do not have their SDK house in order.

VersionEye also provides CI/CD integration, so that you can stop a build based on a licensing violation, injecting the politics of API operations, from an API consumer’s perspective, into the application lifecycle. I’m interested in VersionEye’s CI/CD integration, as well as their security vulnerability monitoring, but I wanted to make sure this approach to keeping an eye on SDK licensing was included in my SDK, monitoring, and licensing research, influencing my storytelling across these areas. Some day all API providers will have a wide variety of SDKs available, each complete with clear licensing, published on Github, and indexed as part of an APIs.json. We just aren’t quite there yet, and we need more services like VersionEye to help build awareness at the API SDK licensing level to get us closer to this reality.
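
To illustrate the underlying idea, here is a minimal sketch of a license whitelist check, the kind of thing a CI/CD pipeline could run against a dependency manifest. This is just an illustration of the concept, not VersionEye's actual API or tooling.

```python
# A minimal sketch of the license whitelist concept.
APPROVED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

# Hypothetical dependency manifest, e.g. pulled from a Git repository.
dependencies = [
    {"name": "some-api-sdk", "license": "MIT"},
    {"name": "another-api-sdk", "license": "GPL-3.0"},
    {"name": "mystery-sdk", "license": None},  # no license published at all
]

violations = [dep for dep in dependencies if dep["license"] not in APPROVED_LICENSES]

if violations:
    for dep in violations:
        print(f"License violation: {dep['name']} ({dep['license']})")
    # In a CI/CD pipeline, this non-zero exit is what would stop the build.
    raise SystemExit(1)
```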


Containerized Microservices Monitoring Driving API Infrastructure Visualizations

While I track on what is going on with visualizations generated from data, I haven’t seen much when it comes to API driven visualizations, or specifically visualizations of API infrastructure, that is new and interesting. This week I came across an interesting example in a post from Netsil about mapping microservices so that you can monitor them. It is a pretty basic visualization of each database, API, and DNS element in your stack, but it does provide a solid example of visualizing not just the deployment of database and API resources, but also DNS, and other protocols in your stack.

Netsil’s microservices visualization is focused on monitoring, but I can see this type of visualization also being applied to design, deployment, management, logging, testing, and any other stop along the API lifecycle. I can see API lifecycle visualization tooling like this becoming more commonplace, and playing more of a role in making API infrastructure more observable. Visualizations are an important part of the storytelling around API operations that moves things beyond just IT and dev team monitoring, making it more observable by all stakeholders.

I’m glad to see service providers moving the needle when it comes to helping visualize API infrastructure. I’d like to see more embeddable solutions deployed to Github emerge as part of API life cycle monitoring. I’d like to see what full life cycle solutions are possible when it comes to my partners, like deployment visualizations from Tyk and Dreamfactory APIs, management visualizations with 3Scale APIs, and monitoring and testing visualizations using Runscope. I’ll play around with pulling data from these providers, and publishing it to Github as YAML, which I can then easily make available as JSON or CSV for use in some basic visualizations.
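
To show the kind of workflow I have in mind, here is a quick sketch of converting an infrastructure inventory from YAML into JSON and CSV. The file names and fields are placeholders I made up for illustration.

```python
# A rough sketch of the YAML -> JSON / CSV conversion, using made-up field
# names for the infrastructure data I'd pull from providers.
import csv
import json
import yaml  # pip install pyyaml

# Assumes the YAML file contains a list of infrastructure elements.
with open("api-infrastructure.yaml") as handle:
    infrastructure = yaml.safe_load(handle)

# JSON for d3.js or other browser-based visualization libraries.
with open("api-infrastructure.json", "w") as handle:
    json.dump(infrastructure, handle, indent=2)

# CSV for spreadsheets and simpler charting tools.
with open("api-infrastructure.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=["name", "type", "status"])
    writer.writeheader()
    for element in infrastructure:
        writer.writerow({key: element.get(key) for key in ("name", "type", "status")})
```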

If you think about it, there really should be a wealth of open source dashboard visualizations that could be embedded on any public or private Github repository, for every API service provider out there. API providers should be able to easily map out their API infrastructure, using any of the API service providers they are already using to operate their APIs. Think of some of the embeddable API status pages we see out there already, and what Netsil is offering for mapping out infrastructure, but something for every stop along the API life cycle, helping deliver visualizations of API infrastructure no matter which stop you find yourself at.


HTTP Status Codes Are An Essential Part Of API Design And Deployment

It takes a lot of work to provide a reliable API that people can depend on. Something your consumers can trust, and that will provide them with consistent, stable, meaningful, and expected behavior. There are a lot of affordances built into the web, allowing us humans to get around, and make sense of the ocean of information on the web today. These affordances aren’t always present with APIs, and we need to communicate with our consumers through the design of our APIs at every turn.

One area I see IT and developer groups often overlook when it comes to API design and deployment is HTTP Status Codes. That standardized list of meaningful responses that come back with every web and API request:

  • 1xx Informational - An informational response indicates that the request was received and understood. It is issued on a provisional basis while request processing continues.
  • 2xx Success - This class of status codes indicates the action requested by the client was received, understood, accepted, and processed successfully.
  • 3xx Redirection - This class of status code indicates the client must take additional action to complete the request. Many of these status codes are used in URL redirection.
  • 4xx Client errors - This class of status code is intended for situations in which the client seems to have errored.
  • 5xx Server error - The server failed to fulfill an apparently valid request.

Without HTTP Status Codes, applications won’t ever really know if their API requests were successful or not, and even if an application can tell there was a failure, it will never understand why. HTTP Status Codes are fundamental to the web working with browsers, and APIs working with applications. HTTP Status Codes should never be left on the API development workbench, and API providers should always go beyond just 200 and 500 for every API implementation. Without them, NO API platform will ever scale, and support any number of external integrations and applications.
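
To make that concrete, here is a small sketch of a client handling each class of status code, using a placeholder endpoint.

```python
# A quick sketch of why status codes matter to an application consuming an API.
# The URL is a placeholder for your own endpoint.
import requests

# allow_redirects=False so the 3xx branch below is actually reachable.
response = requests.get("https://api.example.com/resources", allow_redirects=False)

if 200 <= response.status_code < 300:
    data = response.json()  # success, safe to process the payload
elif 300 <= response.status_code < 400:
    # redirection, follow the Location header or update the stored URL
    print("Redirected to:", response.headers.get("Location"))
elif 400 <= response.status_code < 500:
    print("Client error, check the request:", response.status_code)
else:
    print("Server error, retry later:", response.status_code)
```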

The most important example of the importance of HTTP Status Codes I have in my API developer toolbox is when I was working to assist federal government agencies in becoming compliant with the White House’s order for all federal agencies to publish a machine readable index of the public data inventory on their agency website. As agencies got to work publishing JSON and XML (an API) of their data inventory, I got to work building an application that would monitor their progress, indexing the available inventory, and providing a dashboard that the GSA and OMB could use to follow their progress (or lack thereof).

I would monitor the dashboard in real time, but weekly I would also go through many of the top level cabinet agencies, and some of the more prominent sub agencies, and see if the page was available in my browser. There were numerous agencies who I found had published their machine readable public data inventory, but returned a variety of HTTP status codes other than 200, resulting in my monitoring application considering the agency not compliant. I wrote several stories about HTTP Status Codes, which the GSA and White House groups circulated with agencies, but ultimately I’d say this stumbling block was one of the main reasons that caused this federated public data API project to stumble early on, and never gain proper momentum–a HUGE loss to an open and more observable federal government. ;-(
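
Here is a simplified version of the kind of compliance check my monitoring application was doing. The agency URLs below are placeholders, and the real application did quite a bit more than this.

```python
# A simplified sketch of checking whether an agency's data inventory responds
# with a 200. The URLs here are placeholders, not the actual inventory I maintained.
import requests

agency_data_json_urls = [
    "https://www.example-agency-one.gov/data.json",
    "https://www.example-agency-two.gov/data.json",
]

for url in agency_data_json_urls:
    try:
        response = requests.get(url, timeout=30)
    except requests.RequestException as error:
        print(f"NOT COMPLIANT {url} ({error})")
        continue
    # Anything other than a 200 gets flagged, even if the file is actually there.
    if response.status_code == 200:
        print(f"COMPLIANT {url}")
    else:
        print(f"NOT COMPLIANT {url} (HTTP {response.status_code})")
```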

HTTP Status Codes aren’t just a nice to have when it comes to APIs, they are essential. Without HTTP Status Codes each application will deliver unreliable results, and aggregate or federated solutions that are looking to consume many APIs will become much more difficult and costly to develop. Make sure you prioritize HTTP Status Codes as part of your API design and deployment process. At the very least make sure all five classes of HTTP Status Codes are present in your release. You can always get more precise and meaningful with specific HTTP status codes later on, but ALL APIs should be employing all five classes of HTTP Status Codes by default, to prevent friction and instability in every application that builds on top of your APIs.


API Environment Portability

I was reading the post from Runscope on copying environments using their new API. Looking through the request and response structure for their API, it looks like a pretty good start when it comes to what I’d call API environment portability. I’m talking about allowing us to define, share, replicate, and reuse the definitions for our API environments across the services and tools we are depending on.

If our API environment definitions shared a common schema, and an API like Runscope provides, I could take my Runscope environment settings and use them in my Stoplight, Restlet Client, Postman, and other API services and tooling. It would also help me templatize and standardize my development, staging, production, and other environments across the services I use, assisting me in keeping my environment house in order, and giving me something I can use to audit and turn over my environments to help out with security.

It is just a thought. An API environment API, possessing an evolving but common schema, just seems like one of those things that would make the entire API space work a little smoother. Making our API environments exportable, importable, and portable just seems like it would help us think things through when it comes to setting up, configuring, managing, and evolving our API environments–who knows, maybe someday we’ll have API service providers who help us manage our API environments, dictating how they are used across the growing number of API services we are depending on.
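
To make the idea a little more tangible, here is a sketch of what a portable environment definition might look like. The schema below is entirely hypothetical, not Runscope's actual structure or anyone else's.

```python
# A hypothetical, portable API environment definition. Every field name here is
# a placeholder, sketching the kind of common schema the post is talking about.
import json

environment = {
    "name": "staging",
    "base_url": "https://staging.api.example.com",
    "initial_variables": {
        "api_key": "{{STAGING_API_KEY}}",  # reference a secret, never the value
        "account_id": "1234",
    },
    "headers": {"Accept": "application/json"},
    "regions": ["us1", "eu1"],
}

# With a shared schema like this, the same definition could in theory be imported
# into Runscope, Postman, Restlet Client, or any other tooling that supports it.
print(json.dumps(environment, indent=2))
```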

Disclosure: Runscope and Restlet are API Evangelist partners.


Validating My API Schema As Part of My API Security Practices

I am spending more time thinking about the unknown unknowns when it comes to API security. This means thinking beyond the usual suspects like encryption, API keys, and OAuth. As I monitor the API space I’m keeping an eye out for examples of security concerns that not every API provider is thinking about. I found one recently in Ars Technica, about the Federal Communications Commission (FCC) leaking the email addresses, through the FCC API, of anyone who submitted feedback on issues like the recent net neutrality discussion.

It sounds like the breach with the FCC API was unintentional, but it provides a pretty interesting example of a security risk that could probably be mitigated with some basic API testing and monitoring, using common services like Runscope, or Restlet Client. Adding a testing and monitoring layer to your API operations helps you look beyond just an API being up or down. You should be validating that each endpoint is returning the intended/expected schema. Just this little step of setting up a more detailed monitor can give you that brief moment to think a little more deeply about your schema–the little things like whether or not you should be sharing the email addresses of thousands, or even millions of users.

I’m working on a JSON Schema for my Open Referral Human Services API right now. I want to be able to easily validate any API as human services compliant, but I also want to be able to set up testing and monitoring, as well as security checkups, by validating the schema. When it comes to human services data I want to be able to validate every field present, ensuring only what is required gets out via the API. I am validating primarily to ensure an API and the resulting schema are compliant with HSDS/A standards, but seeing this breach at the FCC has reminded me that taking the time to validate the schema for our APIs can also contribute to API security–for those attacks that don’t come from outside, but from within.
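
Here is a minimal sketch of what schema validation as a security check can look like, using the Python jsonschema library. The schema fragment and endpoint are made up for illustration, not the actual HSDS/A definition.

```python
# A minimal sketch of validating an API response against a JSON Schema.
# The schema and URL below are illustrative placeholders only.
import requests
from jsonschema import validate, ValidationError  # pip install jsonschema

contact_schema = {
    "type": "object",
    "properties": {
        "id": {"type": "string"},
        "name": {"type": "string"},
    },
    "required": ["id", "name"],
    "additionalProperties": False,  # anything extra (like an email) fails validation
}

response = requests.get("https://api.example.com/contacts/1")
try:
    validate(instance=response.json(), schema=contact_schema)
    print("Response matches the expected schema.")
except ValidationError as error:
    print("Schema violation, possible data leak:", error.message)
```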

Disclosure: Restlet Client and Runscope are API Evangelist partners.


APIs For Monitoring The Performance Of Your APIs

I am a big fan of API providers who also have APIs. It may sound silly to say, but you would be surprised how many companies are selling services to API providers and do not actually have an API themselves. So, anytime I find a good example of API service providers launching new APIs that help API providers be more successful, I’m all over it with a story.

Today’s example is from my friends over at Runscope with their API Metrics API that lets you “retrieve your API tests performance metrics for each individual test, keep a pulse on your API’s performance over time, and create custom internal or external dashboards with it”. You can filter the request by using 3 different parameters:

  • region - The service region you’re using to run your tests (e.g. us1, us2, eu1, etc.)
  • timeframe - Hour, day, week, or month. Depending on the timeframe you use, the interval between the response times will be different.
  • environment_uuid - Filter by a specific environment, such as test, production, etc.

That is a pretty healthy example of everything that is API for me–an API that helps you make sure your APIs are performing as expected. You can not just understand how well your API responds; you can dial that in by region, and paint a clear picture of how well you are doing over time. I like that you can create internal dashboards for communicating this with your organization, but I also like their approach to providing external API performance dashboards so much that I am going to add it to my list of building blocks I track on as part of my API performance research.
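
To give a rough sense of what working with a metrics API like this looks like, here is a sketch of a request using those three filters. The endpoint URL and authentication details are placeholders, so check the Runscope API documentation for the real thing.

```python
# A sketch of pulling performance metrics for a single test, filtered by the
# three parameters described above. The URL and auth scheme are placeholders.
import requests

API_TOKEN = "your-access-token"   # placeholder credential
TEST_ID = "your-test-uuid"        # the test you want metrics for

response = requests.get(
    f"https://api.example.com/tests/{TEST_ID}/metrics",  # placeholder endpoint
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    params={
        "region": "us1",             # service region used to run the tests
        "timeframe": "week",         # hour, day, week, or month
        "environment_uuid": "your-environment-uuid",
    },
)
print(response.json())
```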

Aight. That concludes today’s showcase of an API service provider making sure they are practicing what they preach, and providing APIs for their valuable services. Honestly, I find this to be a fascinating layer of the API sector–the API layer that can orchestrate APIs. I enjoy thinking about what is possible when your APIs have APIs–it makes something like API performance much more obtainable and scalable, and, as Runscope does it, something you can easily communicate with your internal stakeholders and your API community.


The Depth And Dimensions Of Monitoring API Operations

When I play with my Hitch service I am always left thinking about the many dimensions of API monitoring. When you talk about API monitoring in the tech sector, conversations almost always start with API providers and the technical details of monitoring individual APIs. Hopefully, these discussions also focus on API monitoring from the API consumer’s point of view, but I wanted to also shine a light on companies like Hitch who are adding an additional dimension from the API service provider view of things–which is closer to my vantage point as an analyst.

I am an advisor to Hitch because they are a different breed of API monitoring service, one that isn’t just focused on the APIs. Hitch brings in the wider view of monitoring the entire operations of an API–if documentation changes, an SDK is updated on Github, an update goes out via Twitter, or pricing changes, you get alerted. As a developer I enjoy being made aware of what is going on across operations, keeping me in tune with not just the technical, but also the business and politics of API platform operations.

Another reason I like Hitch, and really the reason behind me writing this post, is that they are helping API providers think about the bigger picture of API monitoring. Helping them think deeply, as well as getting their shit together when it comes to regularly sending out the critical signals us API consumers are tuning into. When you are down in the trenches of operating an API at a large company, it is easy to get caught up in the internal vacuum, forgetting to properly communicate and support your community–Hitch helps keep this bubble from forming, assisting you in keeping an external focus on your community.

If you are just embarking on your API journey I recommend tuning into API Evangelist first. ;-) However, if you are unsure of how to properly communicate and support your community I recommend you talk to the Hitch team. They’ll help get you up to speed on the best practices when it comes to API operations, and understand how to send the right signals to your community–something that will make or break your API efforts, so please don’t ignore it.

Disclosure: I am an advisor for Hitch, and they are my friends.


It’s Not Just The Technology: API Monitoring Means You Care

I was just messing around with a friend online about monitoring our monitoring tools, where I said that I have a monitor setup to monitor whether or not I care about monitoring. I was half joking, but in reality, giving a shit is actually a pretty critical component of monitoring when you think about it. Nobody monitors something they don’t care about. While monitoring in the world of APIs might mean a variety of things, I’m guessing that caring about those resources is a piece of every single monitoring configuration.

This has come up before in conversation with my friend Dave O’Neill of APImetrics, where he tells stories of developers signing up for their service, running the reports they need to satisfy management or customers, and then turning off the service. I think this type of behavior exists at all levels, with many reasons why someone truly doesn’t care about a service actually performing as promised, and doing what it takes to rise to the occasion–resulting in the instability and unreliability of APIs that gets touted in the tech blogosphere.

There are many reasons management or developers will not truly care when it comes to monitoring the availability, reliability, and security of an API, demonstrating yet another aspect of the API space that is more business and politics than it is ever technical. We are seeing this play out online with the flakiness of websites, applications, devices, and the networks we depend on daily, and the waves of breaches, vulnerabilities, and general cyber(in)security. This is a human problem, not a technical one, but there are many services and tools that can help mitigate people not caring.


A Ranking Score to Determine If Your API Was SLA Compliant

I talked about Google's shift towards providing an SLA across their cloud services last week, and this week I'd like to highlight APImetrics' Cloud API Service Consistency (CASC) score, and how it can be applied to determine whether an API is meeting its SLA or not. APImetrics came up with the CASC score as part of their Insights analytics package, and has been very open with the API ranking system, as well as the data behind it.

The CASC score provides us with an API reliability and stability ranking to apply across our APIs, providing one very important layer of a comprehensive API rating system that we can use across the industries being impacted by APIs. I said in my story about Google's SLAs that companies should have an SLA present for their APIs. They will also need to ensure that 3rd party API service providers like APImetrics are gathering the data, and providing us with a CASC score for all the APIs we depend on in our businesses.

I know that external monitoring service providers like APImetrics, and API ranking systems like the CASC score, make some API providers nervous, but if you want to sell to the enterprise, government, and other critical infrastructure, you will need to get over it. I also recommend that you work hard to reduce the barriers for API service providers to get at the data they need, as well as get out of their way when it comes to publishing the data publicly, and sharing or aggregating it as part of industry-level ratings for groups of APIs.

If we want this API economy that we keep talking about to scale, we are going to have to make sure we have SLAs in place, and ensure we are meeting them throughout our operations. SLA monitoring helps us meet our contractual engagements with our customers, but it is also beginning to contribute to an overall ranking system being used across the industries we operate in. This is something that is only going to grow and expand, so the sooner we get the infrastructure in place to determine our service level benchmarks, and monitor and aggregate the corresponding data, the better off we are going to be in a fast-growing API economy.
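
For anyone just getting started, here is a back-of-the-napkin sketch of checking monitoring results against an availability target. This is a generic calculation, not the CASC score or any other proprietary ranking.

```python
# A generic sketch of comparing monitoring results to an SLA availability target.
# The check data below is made up for illustration.
checks = [
    {"timestamp": "2017-06-01T00:00:00Z", "success": True},
    {"timestamp": "2017-06-01T00:05:00Z", "success": True},
    {"timestamp": "2017-06-01T00:10:00Z", "success": False},
    # ...one entry per monitoring check over the reporting period
]

sla_target = 99.9  # percent availability promised in the SLA

successful = sum(1 for check in checks if check["success"])
availability = (successful / len(checks)) * 100

print(f"Availability: {availability:.3f}% (SLA target {sla_target}%)")
print("SLA met" if availability >= sla_target else "SLA violated")
```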


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.