
API Monitoring News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API monitoring conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is monitoring their API operations.

Connecting Service Level Agreement To API Monitoring

Monitoring your API availability should be standard practice for internal and external APIs. If you have the resources to custom build API monitoring, testing, and performance infrastructure, I am guessing you already have some pretty cool stuff in place. If you don't, you should not be reinventing the wheel, and should be leveraging one of the existing API monitoring services on the market. When you are getting started with monitoring your APIs, I recommend you begin with uptime and downtime, and once you deliver successfully on that front, work on API performance and the responsiveness of your APIs.

You should begin by making sure you are delivering on the service level agreement you have in place with your API consumers. What, you don't have a service level agreement? No better time to start than now. If you don't already have an explicitly stated SLA in place, I recommend creating one internally, seeing what you can do to live up to it, and then, once you ensure things are operating at acceptable levels, sharing it with your API consumers. I am guessing they will be pretty pleased to hear that you are taking the initiative to offer an SLA, and are committed enough to your API to work towards such a high bar for API operations.

To help you manage defining, and then ultimately monitoring and living up to your API SLA, I recommend taking a look at APIMetrics, which is obsessively focused on API quality, performance, and reliability. They spend a lot of time monitoring public APIs, and have developed a pretty sophisticated approach to ranking and scoring your API to ensure you meet your SLA. As you can see in the picture for this story, the APIMetrics administrative dashboard provides a pretty robust way for you to measure any API you want, and establish metrics and triggers that let you know if you've met or failed to meet your SLA requirements. As I said, you could start out by monitoring internally if you are nervous about the results, but once you are ready to go prime time you have the tools to help you report regularly, both internally and externally to your API consumers.
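
To make measuring against an SLA a little more concrete, here is a minimal sketch of the kind of internal uptime check you might start with. The endpoint, the 99.9% target, and the check interval are all hypothetical, and a service like APIMetrics handles far more nuance than this.

```python
import requests

SLA_TARGET = 99.9  # hypothetical availability target, in percent

def check_api(url, timeout=5):
    """Return True if the API responds with a 2xx status within the timeout."""
    try:
        response = requests.get(url, timeout=timeout)
        return 200 <= response.status_code < 300
    except requests.RequestException:
        return False

def sla_report(results):
    """Summarize a list of boolean check results against the SLA target."""
    uptime = 100.0 * sum(results) / len(results)
    return {
        "checks": len(results),
        "uptime_percent": round(uptime, 3),
        "sla_met": uptime >= SLA_TARGET,
    }

# Example: results gathered by a scheduler (cron, etc.) over a reporting period
results = [check_api("https://api.example.com/status") for _ in range(10)]
print(sla_report(results))
```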

I wish that every stop along the life cycle had a common definition for a specific aspect of service level agreements, something that multiple API providers could measure and report upon, similar to what APIMetrics does for monitoring and performance. I'd like to see API design begin to have a baseline definition that is verifiable through a common set of machine readable API assertions. I'd love for API plans, pricing, and even terms of service to be measurable and reportable in a similar way. These are all things that should be observable through existing outputs, and reflected as part of service level agreements. I'd love to see the concept of the SLA evolve to cover all aspects of the quality of service beyond just availability. APIMetrics provides a good look at how the services we use to manage our APIs can be used to define the level of service we provide, something we could be emulating more across our API operations.


A Couple More Questions For The Equifax CEO About Their Breach

Speaking to the House Energy and Commerce Committee, former Equifax CEO Richard Smith pointed the finger at a single developer who failed to patch the Apache Struts vulnerability, saying that protocol was followed and that a single developer was responsible, shifting the blame away from leadership. It sounds like a good answer, but when you operate in the space you understand that this was a systemic failure, and you shouldn't be relying on a single individual, or even a single piece of scanning software, to verify the patch was applied. You really should have many layers in place to help prevent breaches like we saw with Equifax.

If I were interviewing the CEO, I'd have a few other questions for him, getting at some of the other systemic and process failures that stem from his lack of leadership and awareness:

  • API Monitoring & Testing - You say the scanner for the Apache Struts vulnerability failed, but what about other monitoring and testing? The plugin in question was a REST plugin that allowed for API communication with your systems. Due to the vulnerability, extra junk information was allowed to get through. Where were your additional API request and response integrity testing and monitoring processes? Sure, you were scanning for the vulnerability, but were you keeping an eye on the details of the data being passed back and forth? A rough sketch of this kind of check follows this list. API monitoring and testing has been around for many years, and service providers like Runscope do this for a living. What other layers of monitoring and testing were in place?
  • API Management - When you expose APIs like you did from Apache Struts, what does the standardized management approach look like? What sort of metering, logging, rate limiting, and analysis occurs on each endpoint, and what verification occurs to ensure that only approved clients have access? API management has been standard procedure for over a decade now for exposing APIs like this, both internally and externally. Why didn't your API management process stop this sort of breach after only a couple hundred records went out? API management is about awareness regarding access to all your resources. You should have a dashboard, or at least some reports that you view as a CEO, on this aspect of operations.
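
As a rough illustration of the response integrity check mentioned above, here is a minimal sketch that validates an API response against an expected set of fields and flags unusually large result sets. The field names, thresholds, and endpoint are hypothetical, and a real deployment would live in the monitoring or API management layer rather than a one-off script.

```python
import requests

EXPECTED_FIELDS = {"id", "status", "created_at"}   # hypothetical response contract
MAX_RECORDS_PER_RESPONSE = 100                     # hypothetical volume trigger

def check_response_integrity(url):
    """Flag responses that leak unexpected fields or return too many records."""
    records = requests.get(url, timeout=5).json()
    problems = []

    if len(records) > MAX_RECORDS_PER_RESPONSE:
        problems.append(f"{len(records)} records returned, expected <= {MAX_RECORDS_PER_RESPONSE}")

    for record in records:
        unexpected = set(record.keys()) - EXPECTED_FIELDS
        if unexpected:
            problems.append(f"unexpected fields in response: {sorted(unexpected)}")
            break

    return problems  # an empty list means the response looked as expected

print(check_response_integrity("https://api.example.com/records"))
```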

These are just two examples of systems and processes that should have been in place. You should not be depending on a single person, or a single tool, to catch this type of security incident. There should be many layers, with security triggers and notifications in place. Your CTO should be in tune with all of these layers, and you as the CEO should be getting briefed on how they work, and have a hand in making sure they are in place. I'm guessing that your company is doing APIs, but is dramatically behind the times when it comes to commonplace API management practices. This is your fault as the CEO. This is not the fault of a single employee, or any group of employees.

I am guessing that as a CEO you are more concerned with the selling of this data than you are with securing it in storage or transit. I'm guessing you are intimately aware of the layers that enable you to generate revenue, but are barely investing in the technology and processes to do this securely, while respecting the privacy of your users. They are just livestock to you. They are just products on a shelf. It shows your lack of leadership to point the finger at a single person, or a single piece of technology. There should have been many layers in place to catch this type of breach beyond a single vulnerability. It demonstrates your lack of knowledge regarding modern trends in how we secure and provide access to data, and you should never have been put in charge of such a large data brokerage company.


When To Build Or Depend On An API Service Provider

I am at that all too familiar place with a project where I am having to decide whether I want to build what I need, or depend on an API service provider. As an engineer it is always easy to think you can just build what you need, but the more experience you have, the more you realize this isn't always the smartest move. I'm at that point with API monitoring. I have a growing number of endpoints that I need to make sure are alive and active, but I also see an endless road map of detailed requests when it comes to the granularity of what "alive and active" actually means.

At first I was just going to use my default cron job service to hit the base URL and API paths defined in my OpenAPI for each project, checking for the expected HTTP status code. Then I thought I better start checking for a valid schema. Then I thought I better start checking for valid data. My API project is an open source solution, and I thought about each of my clients and implementations looking to me for the testing and monitoring of their needs. Then I thought, no way!! I'm just going to use Runscope, and build in documentation and processes so that each of my clients and implementations can also use Runscope to dial in the monitoring and testing of their API on their own terms.
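
For reference, the basic cron-driven check I had in mind looked something like the sketch below: load an OpenAPI definition, hit each GET path, and compare the status code against what is documented. The file name and the assumption of an OpenAPI 3 style servers block are mine, and it ignores parameters, auth, and schema validation entirely, which is exactly why this road gets long quickly.

```python
import yaml       # pip install pyyaml
import requests

def check_openapi_paths(openapi_file):
    """Hit each GET path in an OpenAPI definition and report unexpected status codes."""
    with open(openapi_file) as f:
        spec = yaml.safe_load(f)

    # Assumes a single server/base URL and no required parameters on GET paths
    base_url = spec.get("servers", [{"url": ""}])[0]["url"]
    failures = []

    for path, methods in spec.get("paths", {}).items():
        if "get" not in methods:
            continue
        expected = {str(code) for code in methods["get"].get("responses", {}) if str(code).startswith("2")}
        response = requests.get(base_url + path, timeout=10)
        if str(response.status_code) not in expected:
            failures.append((path, response.status_code))

    return failures

# Intended to run on a schedule, e.g. every 15 minutes from cron
print(check_openapi_paths("openapi.yaml"))
```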

Since all of my API projects are OpenAPI driven, and Runscope is an OpenAPI driven API service provider (as ALL should be), I can use this as the seed for setting up testing and monitoring. Not all of my API implementations will be using 100% of the microservices I'm defining, or 100% of the API paths available for each of the microservices I'm defining. Each microservice has its core set of paths that deliver the service, but then I'm also bundling in database, server, DNS, logging, and other microservice operational level APIs that not all my implementations will care about monitoring (sadly). So it is important for my clients and implementations to be able to easily select which APIs they care about monitoring, which is where OpenAPI will do the heavy lifting. When it comes to exactly what API monitoring and testing means to them, I'll rely on Runscope to do the heavy lifting.

If Runscope didn't have the ability to import an OpenAPI to plant the seeds for API testing and monitoring, I might have opted to just build out a basic solution myself. The manual process of setting up API monitoring and testing for each client would quickly become more work than just building a solution, even if it was nowhere near as good as Runscope. However, we are increasingly living in an OpenAPI driven API lifecycle where service providers of all shapes and sizes allow for the importing and exporting of common API definition formats like OpenAPI, helping API providers and architects like myself stick to what we do best, instead of reinventing the wheel at each stop along the API lifecycle.

Disclosure: Runscope is an API Evangelist partner.


Open Sourcing Your API Like VersionEye

I’m always on the hunt for healthy patterns that I would like to see API providers, and API service providers consider when crafting their own strategies. It’s what I do as the API Evangelist. Find common patterns. Understand the good ones, and the bad ones. Tell stories about both, helping folks understand the possibilities, and what they should be thinking about as they plan their operations.

VersionEye, a very useful service that notifies you about security vulnerabilities, license violations, and out-dated dependencies in your Git repositories, has a nice approach to delivering their API, as well as the other components of their stack. You can either use VersionEye in the cloud, or you can deploy it on-premise:

VersionEye also has their entire stack available as Docker images, ready for deployment anywhere you need them. I wanted to have a single post that I can reference when talking about possible open source, on-premise, continuous integration approaches to delivering API solutions that actually have a sensible business model. VersionEye spans the areas that I think API providers should consider investing in, delivering SaaS or on-premise, while also delivering open source solutions, and generating sensible amounts of revenue.

Many APIs I come across do not have an open source version of their API. They may have open source SDKs, and other tooling on Github, but rarely does an API provider offer up an open source copy of their API, as well as Docker images. VersionEye's approach to operating in the cloud and on-premise, while leveraging open source and APIs, as well as dovetailing with existing continuous integration flows, is worth bookmarking. I am feeling like this is the future of API deployment and consumption, but don't get nervous, there is still plenty of money to be made via the cloud services.



Understanding Global API Performance At The Multi-Cloud Level

APIMetrics has a pretty addictive map showing the performance of API calls between multiple cloud providers, spanning many global regions. The cloud location latency map “shows relative performance of a standard, reference GET request made to servers running on all the Google locations and via the Google global load balancer. Calls are made from AWS, Azure, IBM and Google clouds and data is stored for all steps of the API call process and the key percentiles under consideration.”

It is interesting to play with the destination of the API calls, changing the region, and visualizing how API calls begin to degrade to different regions. It really sets the stage for how we should start thinking about the deployment, monitoring, and testing of our APIs. Region by region, getting to know where our consumers are, and making sure APIs are deployed within the cloud infrastructure that delivers the best possible performance. It's not just about testing your APIs in a single location from many locations; it is also about rethinking where your APIs are deployed, leveraging a multi-cloud reality and using all the top cloud providers, while also making API deployment by region a priority.

I'm a big fan of what APIMetrics is doing with the API performance visualizations and mapping. However, I think their use of HTTPbin is a significant part of this approach to monitoring and visualizing API performance at the multi-cloud level, while also making much of the process and data behind it all public. I want to put some more thought into how they are using HTTPbin behind this approach to multi-cloud API performance monitoring. I feel like there is potential here for applying this beyond just API performance, and thinking about testing, security, and other critical aspects of reliability and doing business online with APIs today.
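
To give a sense of what sits underneath this kind of latency map, here is a minimal sketch of timing a reference GET request against httpbin and reporting a few percentiles. It only measures from wherever the script happens to run, so the multi-cloud, multi-region view APIMetrics provides would mean running something like this from agents in each cloud and region.

```python
import time
import statistics
import requests

def measure_latency(url, samples=20):
    """Time repeated GET requests and return latency percentiles in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return {
        "p50_ms": round(statistics.median(timings), 1),
        "p95_ms": round(timings[int(len(timings) * 0.95) - 1], 1),
        "max_ms": round(max(timings), 1),
    }

# A reference GET request, similar in spirit to the calls behind the latency map
print(measure_latency("https://httpbin.org/get"))
```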

After thinking about where else this HTTPbin approach to data gathering could be applied, I want to think more about how the data behind APIMetrics' cloud location latency map can be injected into other conversations, when it comes to where we are deploying APIs, and running our API tests. Eventually I would like to see this type of multi-cloud API performance data alongside security and privacy compliance data, and even the regulations of each country as they apply to specific industries. Think about a time when we can deploy our APIs exactly where we want them based upon performance, privacy, security, regulations, and other critical aspects of doing business in the Internet age.


Making Sure Your API Service Connects To Other Stops Along The API Lifecycle

I am continuing my integration platform as a service research, and spending a little bit of time trying to understand how API providers are offering up integrations with other APIs. Along the way, I also wanted to look at how API service providers are doing it as well, opening themselves up to other stops along an API lifecycle. To understand how API service providers are allowing their users to easily connect to other services, I'm taking a look at how my partners are handling this, starting with connected services at Runscope.

Runscope provides ready to go integration of their API monitoring and testing services with twenty other platforms, delivering a pretty interesting Venn diagram of services along the API lifecycle:

  • Slack - Slack to receive notifications from Runscope API test results and Traffic Alerts.
  • Datadog - Datadog to create events and metrics from Runscope API test results.
  • Splunk Cloud - Splunk Cloud to create events for API test results.
  • PagerDuty - A PagerDuty service to trigger and resolve incidents based on Runscope API test results or Traffic Alerts.
  • Amazon Web Services - Amazon Web Services to import tests from API Gateway definitions.
  • Ghost Inspector - Ghost Inspector to run UI tests from within your Runscope API tests.
  • New Relic Insights - New Relic Insights to create events from Runscope API test results.
  • Microsoft Teams - Microsoft Teams to receive notifications from Runscope API test results.
  • HipChat - HipChat to receive notifications from Runscope API test results and Traffic Alerts.
  • StatusPage.io - StatusPage.io to create metrics from Runscope API test results.
  • Big Panda - Big Panda to create alerts from Runscope API test results.
  • Keen IO - Keen IO to create events from Runscope API test results.
  • VictorOps - A VictorOps service to trigger and resolve incidents based on Runscope API test results or Traffic Alerts.
  • Flowdock - Flowdock to receive notifications from Runscope API test results and Traffic Alerts.
  • AWS CodePipeline - Integrate your Runscope API tests into AWS CodePipeline.
  • Jenkins - Trigger a test run on every build with the Jenkins Runscope plugin.
  • Zapier - integrate with 250+ services like HipChat, Asana, BitBucket, Jira, Trello and more.
  • OpsGenie - OpsGenie to send alerts from Runscope API test results.
  • Grove - Grove to send messages to your IRC channels from Runscope API test results and Traffic Alerts.
  • CircleCI - Run your API tests after a completed CircleCI build.

Anyone can integrate API monitoring and testing into their operations using the Runscope API, but these twenty services are available by default to any user, immediately opening up several important layers of our API operations. You see the messaging, notifications, chat, and other support layers. Then you see the continuous integration / deployment, code, and SDK layers. Then you come across Zapier, which opens up a whole other world of endless integration possibilities. I see Runscope owning the monitoring, testing, and performance stops along the API lifecycle, but their connected services puts other stops like deployment, management, logging, analysis, and many others also within reach.

I am working on a way to track the integrations between API providers, and API service providers. I’d like to be able to visualize the relationships between providers, helping me see the integrations that are most important to different groups of end users. I’m a big advocate for API providers to put iPaaS services like Zapier and DataFire to work, opening up a whole world of integrations to their developers and end users. I also encourage API service providers to work to understand how Zapier can open up other stops along the API lifecycle. Next, everyone should be thinking about deeper integrations like Runscope is doing with their connected services, and make sure you always publish a public page showcasing integrations, making it part of documentation, SDKs, and other aspects of your API service platform.


API SDK Licensing Notifications Using VersionEye

I have been watching VersionEye for a while now. If you aren’t familiar, they provide a service that will notify you of security vulnerabilities, license violations and out-dated dependencies in your Git repositories. I wanted to craft a story specifically about their licensing notification services, which can check all your open source dependencies against a license white list, then notify you of violations, and changes at the SDK licensing level.

The first thing I like here is the notion of an API SDK licensing whitelist: the idea that there is a service that could potentially let you know which API providers have SDKs that are licensed in a way that meets your integration requirements. I think it helps developers who are building applications on top of APIs understand which APIs they should or shouldn't be using based upon SDK licensing, while also providing an incentive for API providers to get their SDKs organized, including the licensing--you'd be surprised at how many API providers do not have their SDK house in order.

VersionEye also provides CI/CD integration, so that you can stop a build based on a licensing violation, injecting the politics of API operations, from an API consumer's perspective, into the application lifecycle. I'm interested in VersionEye's CI/CD integration, as well as their security vulnerability monitoring, but I wanted to make sure this approach to keeping an eye on SDK licensing was included in my SDK, monitoring, and licensing research, influencing my storytelling across these areas. Some day all API providers will have a wide variety of SDKs available, each complete with clear licensing, published on Github, and indexed as part of an APIs.json. We just aren't quite there yet, and we need more services like VersionEye to help build awareness at the API SDK licensing level to get us closer to this reality.
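
To illustrate the whitelist idea, here is a minimal sketch of a check that could fail a CI build when a dependency's license is not on an approved list. The dependency inventory format and the whitelist itself are made up for the example; VersionEye's actual service and CI/CD integration handle this discovery and enforcement for you.

```python
import json
import sys

LICENSE_WHITELIST = {"MIT", "Apache-2.0", "BSD-3-Clause"}  # hypothetical approved licenses

def check_licenses(inventory_file):
    """Return dependencies whose license is not on the whitelist."""
    with open(inventory_file) as f:
        dependencies = json.load(f)  # e.g. [{"name": "some-sdk", "license": "GPL-3.0"}, ...]

    return [d for d in dependencies if d.get("license") not in LICENSE_WHITELIST]

if __name__ == "__main__":
    violations = check_licenses("dependencies.json")
    for dep in violations:
        print(f"license violation: {dep['name']} ({dep.get('license')})")
    sys.exit(1 if violations else 0)  # a non-zero exit stops the CI build
```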


Containerized Microservices Monitoring Driving API Infrastructure Visualizations

While I track on what is going on with visualizations generated from data, I haven't seen much that is new and interesting when it comes to API driven visualizations, or specifically visualizations of API infrastructure. This week I came across an interesting example in a post from Netsil about mapping microservices so that you can monitor them. It is a pretty basic visualization of each database, API, and DNS element in your stack, but it does provide a solid example of visualizing not just the deployment of database and API resources, but also DNS, and other protocols in your stack.

Netsil's microservices visualization is focused on monitoring, but I can see this type of visualization also being applied to design, deployment, management, logging, testing, and any other stop along the API lifecycle. I can see API lifecycle visualization tooling like this becoming more commonplace, and playing more of a role in making API infrastructure more observable. Visualizations are an important part of the storytelling around API operations that moves things beyond just IT and dev team monitoring, making it more observable by all stakeholders.

I'm glad to see service providers moving the needle when it comes to helping visualize API infrastructure. I'd like to see more embeddable solutions deployed to Github emerge as part of API life cycle monitoring. I'd like to see what full life cycle solutions are possible when it comes to my partners, like deployment visualizations from Tyk and Dreamfactory APIs, management visualizations with 3Scale APIs, and monitoring and testing visualizations using Runscope. I'll play around with pulling data from these providers, and publishing it to Github as YAML, which I can then easily make available as JSON or CSV for use in some basic visualizations.
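
The publish-to-Github step I describe is simple enough to sketch: store some pulled monitoring data as YAML in the repository, then convert it to JSON and CSV for whatever visualization ends up consuming it. The field names here are hypothetical placeholders for whatever a given provider's API actually returns.

```python
import csv
import json
import yaml  # pip install pyyaml

# Hypothetical monitoring results pulled from a provider's API
results = [
    {"endpoint": "/users", "status": 200, "response_ms": 142},
    {"endpoint": "/orders", "status": 500, "response_ms": 1830},
]

# Store as YAML in the Github repository
with open("monitoring.yaml", "w") as f:
    yaml.safe_dump(results, f)

# Publish JSON and CSV copies for use in visualizations
with open("monitoring.json", "w") as f:
    json.dump(results, f, indent=2)

with open("monitoring.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["endpoint", "status", "response_ms"])
    writer.writeheader()
    writer.writerows(results)
```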

If you think about it, there really should be a wealth of open source dashboard visualizations that could be embedded on any public or private Github repository, for every API service provider out there. API providers should be able to easily map out their API infrastructure, using any of the API service providers they are already using to operate their APIs. Think of some of the embeddable API status pages we see out there already, and what Netsil is offering for mapping out infrastructure, but something for every stop along the API life cycle, helping deliver visualizations of API infrastructure no matter which stop you find yourself at.


HTTP Status Codes Are An Essential Part Of API Design And Deployment

It takes a lot of work to provide a reliable API that people can depend on. Something your consumers can trust, and that will provide them with consistent, stable, meaningful, and expected behavior. There are a lot of affordances built into the web, allowing us humans to get around, and make sense of the ocean of information on the web today. These affordances aren't always present with APIs, and we need to communicate with our consumers through the design of our API at every turn.

One area I see IT and developer groups often overlook when it comes to API design and deployment is HTTP Status Codes. That standardized list of meaningful responses that comes back with every web and API request:

  • 1xx Informational - An informational response indicates that the request was received and understood. It is issued on a provisional basis while request processing continues.
  • 2xx Success - This class of status codes indicates the action requested by the client was received, understood, accepted, and processed successfully.
  • 3xx Redirection - This class of status code indicates the client must take additional action to complete the request. Many of these status codes are used in URL redirection.
  • 4xx Client errors - This class of status code is intended for situations in which the client seems to have errored.
  • 5xx Server error - The server failed to fulfill an apparently valid request.

Without HTTP Status Codes, applications won't ever really know if their API requests were successful or not, and even if an application can tell there was a failure, it will never understand why. HTTP Status Codes are fundamental to the web working with browsers, and APIs working with applications. HTTP Status Codes should never be left on the API development workbench, and API providers should always go beyond just 200 and 500 for every API implementation. Without them, NO API platform will ever scale and support any number of external integrations and applications.
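
Here is a minimal sketch of how an application consuming an API can branch on the status code classes listed above, rather than treating anything that isn't an exception as success. The redirect and retry handling is deliberately simplified for illustration.

```python
import requests

def fetch(url):
    """Handle a response according to its HTTP status code class."""
    response = requests.get(url, timeout=10, allow_redirects=False)
    status = response.status_code

    if 200 <= status < 300:       # 2xx: success, safe to use the payload
        return response.json()
    elif 300 <= status < 400:     # 3xx: follow the redirect the API is signaling
        return fetch(response.headers["Location"])
    elif 400 <= status < 500:     # 4xx: the client did something wrong; fix the request
        raise ValueError(f"client error {status}: {response.text}")
    elif 500 <= status < 600:     # 5xx: the server failed; retry later or alert
        raise RuntimeError(f"server error {status}, try again later")
    else:                         # 1xx or anything unexpected
        raise RuntimeError(f"unexpected status code {status}")
```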

The most important example of the importance of HTTP Status Codes in my API developer toolbox is from when I was working to assist federal government agencies in becoming compliant with the White House's order for all federal agencies to publish a machine readable index of their public data inventory on their agency website. As agencies got to work publishing JSON and XML (an API) of their data inventory, I got to work building an application that would monitor their progress, indexing the available inventory, and providing a dashboard that the GSA and OMB could use to follow their progress (or lack of).

I would monitor the dashboard in real time, but weekly I would also go through many of the top level cabinet agencies, and some of the more prominent sub agencies, and see if there was a page available in my browser. There were numerous agencies who I found had published their machine readable public data inventory, but had returned a variety of HTTP status codes other than 200, resulting in my monitoring application considering the agency not compliant. I wrote several stories about HTTP Status Codes, which the GSA and White House groups circulated with agencies, but ultimately I'd say this stumbling block was one of the main reasons that caused this federated public data API project to stumble early on, and never gain proper momentum--a HUGE loss to an open and more observable federal government. ;-(

HTTP Status Codes aren’t just a nice to have thing when it comes to APIs, they are essential. Without HTTP Status Codes each application will deliver unreliable results, and aggregate or federated solutions that are looking to consume many APIs will become much more difficult and costly to develop. Make sure you prioritize HTTP Status Codes as part of your API design and deployment process. At the very least make sure all five layers of HTTP Status Codes are present in your release. You can always get more precise and meaningful with specific series HTTP status codes later on, but ALL APIs should be employing all five layers of HTTP Status Codes by default, to prevent friction and instability in every application that builds on top of your APIs.


API Environment Portability

I was reading the post from Runscope on copying environments using their new API. I was looking through the request and response structure for their API, and it looks like a pretty good start when it comes to what I'd call API environment portability. I'm talking about allowing us to define, share, replicate, and reuse the definitions for our API environments across the services and tools we are depending on.

If our API environment definitions shared a common schema, and an API like the one Runscope provides, I could take my Runscope environment settings and use them in my Stoplight, Restlet Client, Postman, and other API services and tooling. It would also help me templatize and standardize my development, staging, production, and other environments across the services I use, assisting me in keeping my environment house in order, and giving me something that I can use to audit and turn over my environments to help out with security.

It is just a thought. An API environment API, possessing an evolving but common schema, just seems like one of those things that would make the entire API space work a little smoother. Making our API environments exportable, importable, and portable just seems like it would help us think things through when it comes to setting up, configuring, managing, and evolving our API environments--who knows, maybe someday we'll have API service providers who help us manage our API environments, dictating how they are used across the growing number of API services we are depending on.
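
A common environment schema is easier to picture with an example. The sketch below shows a hypothetical, tool-neutral environment definition and a small function that reshapes it for a specific service; the keys and the target format are my assumptions, not any provider's actual schema.

```python
# A hypothetical, tool-neutral API environment definition
environment = {
    "name": "staging",
    "base_url": "https://staging.api.example.com",
    "variables": {"account_id": "12345"},
    "headers": {"Authorization": "Bearer {{staging_token}}"},
}

def to_provider_format(env):
    """Reshape the shared environment definition for one (hypothetical) provider."""
    return {
        "environment_name": env["name"],
        "initial_variables": {**env["variables"], "base_url": env["base_url"]},
        "default_headers": env["headers"],
    }

print(to_provider_format(environment))
```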

Disclosure: Runscope and Restlet are API Evangelist partners.


Validating My API Schema As Part of My API Security Practices

I am spending more time thinking about the unknown unknowns when it comes to API security. This means thinking beyond the usual suspects like encryption, API keys, and OAuth. As I monitor the API space I'm keeping an eye out for examples of what might be security concerns that not every API provider is thinking about. I found one recently in Ars Technica, about the Federal Communications Commission (FCC) leaking the email addresses, through the FCC API, of anyone who submitted feedback on issues like the recent Net Neutrality discussion.

It sounds like the breach with the FCC API was unintentional, but it provides a pretty interesting example of a security risk that could probably be mitigated with some basic API testing and monitoring, using common services like Runscope or Restlet Client. Adding a testing and monitoring layer to your API operations helps you look beyond just an API being up or down. You should be validating that each endpoint is returning the intended/expected schema. Just this little step of setting up a more detailed monitor can give you that brief moment to think a little more deeply about your schema–the little things, like whether or not you should be sharing the email addresses of thousands, or even millions, of users.

I'm working on a JSON Schema for my Open Referral Human Services API right now. I want to be able to easily validate any API as human services compliant, but I also want to be able to set up testing and monitoring, as well as security checkups, by validating the schema. When it comes to human services data I want to be able to validate every field present, ensuring only what is required gets out via the API. I am validating primarily to ensure an API and the resulting schema are compliant with HSDS/A standards, but seeing this breach at the FCC has reminded me that taking the time to validate the schema for our APIs can also contribute to API security–for those attacks that don't come from outside, but from within.
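
Here is a minimal sketch of the kind of schema validation I am describing, using the Python jsonschema library. Setting additionalProperties to false is what catches fields, like an email address, that should never have made it into the response; the schema shown is a simplified stand-in, not the actual HSDS/A definition.

```python
from jsonschema import validate, ValidationError  # pip install jsonschema

# A simplified stand-in schema; additionalProperties=False rejects unexpected fields
schema = {
    "type": "object",
    "properties": {
        "id": {"type": "string"},
        "name": {"type": "string"},
    },
    "required": ["id", "name"],
    "additionalProperties": False,
}

response_body = {"id": "123", "name": "Food Pantry", "email": "someone@example.com"}

try:
    validate(instance=response_body, schema=schema)
    print("response matches the expected schema")
except ValidationError as error:
    # The stray email field triggers this path, flagging a potential data leak
    print(f"schema violation: {error.message}")
```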

Disclosure: Restlet Client and Runscope are API Evangelist partners.


APIs For Monitoring The Performance Of Your APIs

I am a big fan of API service providers who also have APIs. It may sound silly to say, but you would be surprised how many companies are selling services to API providers and do not actually have an API themselves. So, anytime I find a good example of an API service provider launching new APIs that help API providers be more successful, I'm all over it with a story.

Today's example is from my friends over at Runscope, with their API Metrics API that lets you "retrieve your API tests performance metrics for each individual test, keep a pulse on your API's performance over time, and create custom internal or external dashboards with it". You can filter the request using three different parameters:

  • region - The service region you’re using to run your tests (e.g. us1, us2, eu1, etc.)
  • timeframe - Hour, day, week, or month. Depending on the timeframe you use, the interval between the response times will be different.
  • environment_uuid - Filter by a specific environment, such as test, production, etc.

That is a pretty healthy example of everything that is API for me–an API that helps you make sure your APIs are performing as expected. You can not just understand how well your API responds, you can also dial that in by region, and paint a clear picture of how well you are doing over time. I like that you can create internal dashboards for communicating this with your organization, but I also like their approach to providing external API performance dashboards so much that I am going to add it to my list of building blocks I track on as part of my API performance research.
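
As a rough sketch of pulling this kind of performance data into your own dashboard, the request below uses the three filters described above. The exact URL path, placeholder identifiers, and response fields are assumptions for illustration; consult the Runscope API documentation for the real ones.

```python
import requests

RUNSCOPE_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder credential
TEST_ID = "your-test-uuid"             # placeholder test identifier

# Hypothetical metrics URL; check the Runscope docs for the actual path
url = f"https://api.runscope.com/buckets/your-bucket/tests/{TEST_ID}/metrics"

response = requests.get(
    url,
    headers={"Authorization": f"Bearer {RUNSCOPE_TOKEN}"},
    params={
        "region": "us1",                       # the service region running the tests
        "timeframe": "day",                    # hour, day, week, or month
        "environment_uuid": "your-env-uuid",   # filter to a specific environment
    },
    timeout=10,
)

print(response.status_code, response.json())
```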

Aight. That concludes today’s showcase of an API service provider making sure they are practicing what they preach and providing APIs for their valuable services. Honestly, I find this to be a fascinating layer of the API sector–the API layer that can orchestrate APIs. I enjoy thinking about what is possible when your APIs have APIs–it makes something like API performance a much more obtainable, scalable, and as Runscope does it, something you can easily communicate with your internal stakeholders and your API community.


The Depth And Dimensions Of Monitoring API Operations

When I play with my Hitch service I am always left thinking about the many dimensions of API monitoring. When you talk about API monitoring in the tech sector, conversations almost always start with the API providers and the technical details of monitoring individual APIs. Hopefully, these discussions also focus on API monitoring from the API consumer's point of view, but I wanted to also shine a light on companies like Hitch who are adding an additional dimension, the API service provider view of things–which is closer to my vantage point as an analyst.

I am an advisor to Hitch because they are a different breed of API monitoring service, one that isn't just focused on the APIs. Hitch brings in the wider view of monitoring the entire operations of an API–if documentation changes, an SDK is updated on Github, there is an update via Twitter, or a pricing change, you get alerted. As a developer I enjoy being made aware of what is going on across operations, keeping me in tune with not just the technical, but also the business and politics of API platform operations.

Another reason I like Hitch, and really the reason behind me writing this post, is that they are helping API providers think about the bigger picture of API monitoring. Helping them think deeply, as well as getting their shit together when it comes to regularly sending out the critical signals us API consumers are tuning into. When you are down in the trenches of operating an API at a large company, it is easy to get caught up in the internal vacuum, forgetting to properly communicate and support your community–Hitch helps keep this bubble from forming, assisting you in keeping an external focus on your community.

If you are just embarking on your API journey I recommend tuning into API Evangelist first. ;-) However, if you are unsure of how to properly communicate and support your community I recommend you talk to the Hitch team. They’ll help get you up to speed on the best practices when it comes to API operations, and understand how to send the right signals to your community–something that will make or break your API efforts, so please don’t ignore it.

Disclosure: I am an advisor for Hitch, and they are my friends.


It's Not Just The Technology: API Monitoring Means You Care

I was just messing around with a friend online about monitoring of our monitoring tools, where I said that I have a monitor setup to monitor whether or not I care about monitoring. I was half joking, but in reality, giving a shit is actually a pretty critical component of monitoring when you think about it. Nobody monitors something they don't care about. While monitoring in the world of APIs might mean a variety of things, I'm guessing that caring about those resources is a piece of every single monitoring configuration.

This has come up before in conversation with my friend Dave O'Neill of APIMetrics, where he tells stories of developers signing up for their service, running the reports they need to satisfy management or customers, and then turning off the service. I think this type of behavior exists at all levels, with many reasons why someone truly doesn't care about a service actually performing as promised, and doing what it takes to rise to the occasion–resulting in the instability and unreliability of APIs that gets touted in the tech blogosphere.

There are many reasons management or developers will not truly care when it comes to monitoring the availability, reliability, and security of an API, demonstrating yet another aspect of the API space that is more about business and politics than it is ever technical. We are seeing this play out online with the flakiness of websites, applications, devices, and the networks we depend on daily, and the waves of breaches, vulnerabilities, and general cyber(in)security. This is a human problem, not a technical one, but there are many services and tools that can help mitigate people not caring.


A Ranking Score to Determine If Your API Was SLA Compliant

I talked about Google's shift towards providing an SLA across their cloud services last week, and this week I'd like to highlight APIMetrics' Cloud API Service Consistency (CASC) score, and how it can be applied to determine if an API is meeting its SLA or not. APIMetrics came up with the CASC score as part of their APImetrics Insights analytics package, and has been very open with the API ranking system, as well as the data behind it.

The CASC score provides us with an API reliability and stability ranking that we can apply across our APIs, providing one very important layer of a comprehensive API rating system that we can use across the industries being impacted by APIs. I said in my story about Google's SLAs that companies should have an SLA present for their APIs. They will also need to ensure that 3rd party API service providers like APIMetrics are gathering the data, and providing us with a CASC score for all the APIs we depend on in our businesses.

I know that external monitoring service providers like APIMetrics, and API ranking systems like the CASC score, make some API providers nervous, but if you want to sell to the enterprise, government, and other critical infrastructure, you will need to get over it. I also recommend that you work hard to reduce barriers for API service providers to get at the data they need, as well as get out of their way when it comes to publishing the data publicly, and sharing or aggregating it as part of industry level ratings for groups of APIs.

If we want this API economy that we keep talking about to scale, we are going to have to make sure we have SLAs, and ensure we are meeting them throughout our operations. SLA monitoring helps us meet our contractual engagements with our customers, but it is also beginning to contribute to an overall ranking system being used across the industries we operate in. This is something that is only going to grow and expand, so the sooner we get the infrastructure in place to determine our service level benchmarks, and monitor and aggregate the corresponding data, the better off we are going to be in a fast-growing API economy.


Look Across My API Monitoring API Methods By Grouping Them Using Tags

Last week I was playing with defining API monitoring APIs so I can map to each stop along the API life cycle. I took three of the API monitoring services I use (APIMetrics, API Science, and Runscope), and like I do for other areas along the API life cycle, and for common API stacks, I profiled their APIs using the OpenAPI Spec. This is standard operating procedure for any of my research areas, in that as part of profiling each company's operations, I profile the API surface area in detail.

For each of my research projects, I will include this listing of each API endpoint available as part of the work. As I was adding one for my API monitoring research, I had a thought--I wanted to reorganize the endpoints, across the three API monitoring service providers, and group them by tag. So I started playing with a new way to look at the APIs available in any given APIs.json driven collection.

This is a listing of API resources available in this project's APIs.json, organized by tag.

Account

Auth

  • Delete an Authentication Setting - (DELETE) - /auth/{id}/
  • Get an existing Authentication Setting - (GET) - /auth/{id}/
  • List Authentication Settings - (GET) - /auth/
  • Update an existing Authentication Setting - (PUT) - /auth/{id}/

Buckets

Calls

Checks

Contacts

Deployments

Messages

Monitors

Reports

Shared Environments

Tags

  • List All Tags - (GET) - /tags

Templates

Test Environments

Test Steps

Tests

Tokens

Workflows

If you mouse over each actual endpoint, it will tell you the host of the API it is for. I am just playing around. I have no idea what value this would present for anyone, except for just helping provide a new dimension for viewing the APIs involved. For me, this particular one helps me understand API resources across many providers, while also encouraging me to think more critically about how I tag the APIs I define using OpenAPI Spec.
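
The grouping itself is a small exercise once each provider's API is profiled. Here is a minimal sketch of reading several OpenAPI files and organizing their operations by tag, which is roughly how a listing like this gets assembled; the file names are placeholders, and the host fallback is an assumption.

```python
from collections import defaultdict
import yaml  # pip install pyyaml

HTTP_VERBS = {"get", "post", "put", "patch", "delete"}

def group_operations_by_tag(openapi_files):
    """Collect (summary, method, path, host) tuples from several OpenAPI files, keyed by tag."""
    grouped = defaultdict(list)
    for filename in openapi_files:
        with open(filename) as f:
            spec = yaml.safe_load(f)
        host = spec.get("host", filename)  # fall back to the file name if no host is set
        for path, methods in spec.get("paths", {}).items():
            for method, operation in methods.items():
                if method.lower() not in HTTP_VERBS:
                    continue
                for tag in operation.get("tags", ["untagged"]):
                    grouped[tag].append((operation.get("summary", ""), method.upper(), path, host))
    return grouped

# Placeholder file names for the three providers profiled in this project
grouped = group_operations_by_tag(["apimetrics.yaml", "apiscience.yaml", "runscope.yaml"])
for tag, operations in sorted(grouped.items()):
    print(tag)
    for summary, method, path, host in operations:
        print(f"  {summary} - ({method}) - {path} [{host}]")
```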

You can view the listing by provider, as well as the listing by tag, for my API monitoring research. I will be adding these two views to all of my core research areas, and the API stacks I define, as I have time, but I thought it would be interesting to add to my own API stack, which is probably the most defined of all of my stacks--here is the listing by provider, and the listing by tag, for my API Evangelist stack.

We'll see how this plays out as I roll this out for more of my research. I am sure I will learn a lot along the way, by adding new APIs.json driven dimensions like these. I'd like to eventually have a whole toolbox of these types of views, and even some APIs.json and OpenAPI Spec driven visualizations.


Defining API Monitoring APIs So I Can Map To Each Stop Along The API Life Cycle

I am going through each of the 35+ areas of the API space that I monitor, working to bring alive the over 900 stops along the API life cycle that I have identified through my research. I'm still working through prototypes for my life cycle explorer, but the current version has organizations, tools, links, and questions, along with the title and description of each stop of the life cycle journey I am trying to bring into focus.

Part of my approach in identifying the different lines, areas, and stops along this life cycle involves taking a look at the approach of leading API providers, as well as the services being offered by companies selling their solutions to these API providers--giving me two sides of the API life cycle coin. In the last couple months I have also found another way to identify potential building blocks, and round off the ones I have, through the API definitions of leading API providers.

All I do is craft an OADF file for each of the API service providers I track on, within each area of my research. I'm spending time tonight working on my API monitoring research, so I am looking at three of the service providers I track on who have APIs. The OADF specs are not complete, but they provide me a baseline definition for the surface area of each API, something I'll round out with more use. Here are the endpoints I have from each provider so far.

API Science Monitors API (oadf)

  • Get All Contacts - (GET) - /contacts.json
  • Create a Contact - (POST) - /contacts.json
  • Delete a Contact - (DELETE) - /contacts/{id}.json
  • Get a Specific Contact - (GET) - /contacts/{id}.json
  • Update a Contact - (PATCH) - /contacts/{id}.json
  • Get All Monitors - (GET) - /monitors
  • Create a Monitor - (POST) - /monitors
  • Apply Actions to Multiple Monitors - (PUT) - /monitors
  • Get a Specific Monitor - (GET) - /monitors/{id}
  • Get Checks For A Monitor - (GET) - /monitors/{id}/checks.json
  • Performance Report - (GET) - /monitors/{id}/performance
  • Show a Monitors Templates - (GET) - /monitors/{id}/templates
  • Get a Template - (GET) - /monitors/{id}/templates/{templates}
  • Create a Template - (POST) - /monitors/{id}/templates/{templates}
  • Testing your Monitor - (GET) - /monitors/{id}/test
  • Uptime Report - (GET) - /monitors/{id}/uptime.json
  • List All Tags - (GET) - /tags

Runscope API (oadf)

  • Account Resource - (GET) - /account
  • Teams Resource - (GET) - /teams/{teamId}/people
  • Team integrations list - (GET) - /teams/{teamId}/integrations
  • Returns a list of buckets. - (GET) - /buckets
  • Create a new bucket - (POST) - /buckets
  • Returns a single bucket resource. - (GET) - /buckets/{bucketKey}
  • Delete a single bucket resource. - (DELETE) - /buckets/{bucketKey}
  • Retrieve a list of messages in a bucket - (GET) - /buckets/{bucketKey}/messages
  • Clear a bucket (remove all messages). - (DELETE) - /buckets/{bucketKey}/messages
  • Create a message - (POST) - /buckets/{bucketKey}/messages
  • Retrieve a list of error messages in a bucket - (GET) - /buckets/{bucketKey}/errors
  • Retrieve the details for a single message. - (GET) - /buckets/{bucketKey}/messages/{messageId}
  • Returns a list of tests. - (GET) - /buckets/{bucketKey}/tests
  • Create a test. - (POST) - /buckets/{bucketKey}/tests
  • Delete a single test. - (DELETE) - /buckets/{bucketKey}/tests/{testId}
  • List test steps for a test. - (GET) - /buckets/{bucketKey}/tests/{testId}/steps
  • Add new test step. - (POST) - /buckets/{bucketKey}/tests/{testId}/steps
  • Update the details of a single test step. - (PUT) - /buckets/{bucketKey}/tests/{testId}/steps/{stepId}
  • Delete a step from a test. - (DELETE) - /buckets/{bucketKey}/tests/{testId}/steps/{stepId}
  • Return details of the test's environments (only those that belong to the specified test) - (GET) - /buckets/{bucketKey}/tests/{testId}/environments
  • Create new test environment. - (POST) - /buckets/{bucketKey}/tests/{testId}/environments
  • Update the details of a test environment. - (PUT) - /buckets/{bucketKey}/tests/{testId}/environments/{environmentId}
  • Returns list of shared environments for a specified bucket. - (GET) - /buckets/{bucketKey}/environments
  • Create new shared environment. - (POST) - /buckets/{bucketKey}/environments
  • Update the details of a test environment. - (PUT) - /buckets/{bucketKey}/environments/{environmentId}

APIMetrics Auth API (oadf)

  • List Authentication Settings - (GET) - /auth/
  • Delete an Authentication Setting - (DELETE) - /auth/{id}/
  • Get an existing Authentication Setting - (GET) - /auth/{id}/
  • Update an existing Authentication Setting - (PUT) - /auth/{id}/

APIMetrics Calls API (oadf)

  • List API Calls - (GET) - /calls/
  • Create new API Call - (POST) - /calls/
  • List API Calls by Authentication - (GET) - /calls/auth/{auth_id}/
  • Delete an API Call - (DELETE) - /calls/{id}/
  • Get an existing API Call - (GET) - /calls/{id}/
  • Update an existing API Call - (PUT) - /calls/{id}/
  • Trigger an API Call to run - (POST) - /calls/{id}/run
  • List Stats from before a date for an API Call - (GET) - /calls/{id}/stats/before
  • List Stats since a date for an API Call - (GET) - /calls/{id}/stats/since

APIMetrics Deployments API (oadf)

  • List all Deployment - (GET) - /deployments/
  • Create a new Deployment - (POST) - /deployments/
  • Get all Deployments for an API Call - (GET) - /deployments/call/{call_id}/
  • Get all Deployments for a Workflow - (GET) - /deployments/workflow/{workflow_id}
  • Delete a Deployment - (DELETE) - /deployments/{id}/
  • Get an existing Deployment - (GET) - /deployments/{id}/
  • Update an existing Deployment - (PUT) - /deployments/{id}/

APIMetrics Reports API (oadf)

  • List all Reports - (GET) - /reports/
  • Create a new Report - (POST) - /reports/
  • Delete a Report - (DELETE) - /reports/{id}/
  • Get an existing Report - (GET) - /reports/{id}/
  • Update an existing Report - (PUT) - /reports/{id}/

APIMetrics Tokens API (oadf)

  • List Auth Tokens - (GET) - /tokens/
  • Create a new Auth Token - (POST) - /tokens/
  • Get all tokens for an Authentication Setting - (GET) - /tokens/auth/{auth_id}/
  • Delete an Auth Token - (DELETE) - /tokens/{id}/
  • Get an existing Auth Token - (GET) - /tokens/{id}/
  • Update an Auth Token - (PUT) - /tokens/{id}/

APIMetrics Workflows API (oadf)

  • List all Workflows - (GET) - /workflows/
  • Create new Authentication Settings - (POST) - /workflows/
  • Delete a Workflow - (DELETE) - /workflows/{id}/
  • Get an existing Workflow - (GET) - /workflows/{id}/
  • Trigger a Workflow to run now - (POST) - /workflows/{id}/
  • Create a new Workflow - (PUT) - /workflows/{id}/

When you compare the definitions for these API service providers, you are comparing apples to oranges, even though they exist in the same layer of the API space. To me, having them defined will allow me to slowly weave them into my master list of common building blocks for API monitoring.

What really excites me is that for each potential stop along the API monitoring line, I might be able to actually link to specific API endpoints, and even down to the verb level. For example, I could link to the endpoint for creating a new test for API Science, APIMetrics, and Runscope, with a single button or widget.
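
A minimal sketch of that kind of cross-provider mapping might look like the following, where a common action name points at each provider's method and path. The mapping values come from the listings above, and the structure itself is just one possible approach.

```python
# Map a common "create a test/monitor" action to each provider's method and path,
# using the endpoints listed above
create_test = {
    "API Science": ("POST", "/monitors"),
    "Runscope": ("POST", "/buckets/{bucketKey}/tests"),
    "APIMetrics": ("POST", "/calls/"),
}

def link_for(provider, action_map):
    """Return the method and path to link to for a given provider."""
    method, path = action_map[provider]
    return f"{method} {path}"

for provider in create_test:
    print(provider, "-", link_for(provider, create_test))
```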

I find the API definitions for API service providers to be more interesting than some of the features they showcase via their sites. I will be continuing to identify the API service providers that I track on who have APIs, and defining them using OADF. You can find the APIs for my API monitoring research available in the project's APIs.json file, as well as each individual APIs.json and OADF file listed on the API monitoring service providers page.


Evolving My API Stack To Be A Public Repo For Sharing API Discovery, Monitoring, And Rating Information

My API Stack began as a news site, and evolved into a directory of the APIs that I monitor in the space. I have published APIs.json indexes for the almost 1000 companies I am tracking on, with almost 400 OADF files for some of the APIs I've profiled in more detail. My mission around the project so far has been to create an open source, machine readable repo for the API space.

I have had two recent occurrences that are pushing me to expand on my API Stack work. First, I have other entities who want to contribute monitoring data and other elements I would like to see collected, but haven't had time to gather myself. The other is that I have started spidering the URLs of the API portals I track on, and need a central place to store the indexes, so that others can access them.

Ultimately I'd like to see the API Stack act as a public repo, where anyone can grab the data they need to discover, evaluate, integrate, and stay in tune with what APIs are doing, or not doing. In addition to finding OADF, API Blueprint, and RAML files by crawling and indexing API portals, and publishing them in a public repo, I want to build out the other building blocks that I index with APIs.json, like pricing and TOS changes, and potentially make monitoring, testing, and performance data available.

Next I will publish some pricing, monitoring, and portal site crawl indexes to the repo, for some of the top APIs out there, and start playing with the best way to store the JSON and other files, and provide an easy way to explore and play with the data. If you have any data that you are collecting and would like to contribute, or have a specific need you'd like to see tracked on, let me know, and I'll add it to the road map.

My goal is to go for quality and completeness of data there, before I look to scale, and expand the quantity of information and tooling available. Let me know if you have any thoughts or feedback.


API Monitoring Should Be Baked Into Your API Strategy By Default

As I've written in several posts on the recent Amazon API Gateway release, one of the side things I noted about the API solution from AWS was that API monitoring is baked in by default. As stated on the AWS API Gateway page:

After your API is deployed, Amazon API Gateway provides you with a dashboard to visually monitor calls to your services using Amazon CloudWatch, so you see performance metrics and information on API calls, data latency, and error rates.
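
As a rough sketch of what that baked-in monitoring data looks like outside the console, the boto3 call below pulls latency statistics for an API Gateway API from CloudWatch. The API name and time window are placeholders, and the metric dimensions may vary with how the API is deployed.

```python
from datetime import datetime, timedelta
import boto3  # pip install boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Pull average and maximum latency for a (hypothetical) API Gateway API
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApiGateway",
    MetricName="Latency",
    Dimensions=[{"Name": "ApiName", "Value": "my-api"}],  # placeholder API name
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,  # 5 minute buckets
    Statistics=["Average", "Maximum"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "ms avg")
```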

This may seem like common sense to many people who have been in the API space for a while now, but for many API designers, architects, developers, and business folk who are just getting going, API monitoring may not be the default for all implementations.

To me, AWS baking in API monitoring by default, demonstrates that the world of API monitoring has matured, marking an important milestone that all API providers should pay attention to. I've been watching this space grow over the last couple years, and similar to the API management space, the AWS release reflects the overall health of the API sector.

If you are operating any API in 2015, monitoring should be standard operating procedure, alongside your API documentation.


Growth Of Bug Bounties, Importance of 3rd Party Monitoring, and Operating On The Open Internet


Going Beyond Just API Status And Providing An Official API Monitoring Service(s) With Your API

I've long advocated that an API Status page should be a required building block for any API operation. As I work on monitoring for my own master API stack, and as I read stories like Pingometer Keeps Your Uptime In The 9's With Twilio SMS, I'm thinking we need more than just a simple status report for our APIs.

From an API provider standpoint, we should have a more nuanced view of our API availability, beyond just up or down--something popular API integration service providers like Runscope and API Science have been saying for a while. If our APIs are publicly available, I'm going to even suggest providers start actively sharing their API monitoring strategy, and resulting data with the ecosystem--something I'm going to also be advocating for inclusion in the APIs.json index.

From an API consumer standpoint, you are going to want a real-time awareness about the availability of all the APIs you depend on, and much like the provider standpoint, this view needs to be more nuanced than just whether a service is up or down. Developers need to be making tactical, run-time decisions based upon API monitoring, as well as be able to make longer term strategic decisions about which APIs they depend on, based upon API monitoring exhaust. 

I like Twilio's style for showcasing Pingometer as a solution. I'm thinking every API provider should cozy up with an API monitoring service provider for their own needs, while also establishing an approach they can also share with their API consumers, to meet their needs as well. I'm currently adding API Science and Runscope as default building blocks for my API Stack, something you will find indexed as part of each of my services APIs.json file.
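
Here is a rough sketch of what indexing these monitoring building blocks in an APIs.json file might look like, expressed as a Python dictionary. The property type names are hypothetical, not part of the current APIs.json specification, which is exactly the kind of addition I am advocating for.

```python
import json

# A hypothetical APIs.json API entry with monitoring-related properties added
api_entry = {
    "name": "Example API",
    "baseURL": "https://api.example.com",
    "properties": [
        {"type": "Swagger", "url": "https://example.com/swagger.json"},
        # Hypothetical property types for sharing the monitoring strategy and data
        {"type": "X-api-monitoring-strategy", "url": "https://example.com/monitoring.md"},
        {"type": "X-api-status", "url": "https://status.example.com/api.json"},
    ],
}

print(json.dumps(api_entry, indent=2))
```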

I'm working to coherently separate out the benefits API service providers like API Science and Runscope bring to the table for both API providers and API consumers, in a way that aligns with a common set of building blocks that everyone involved can benefit from throughout the API life-cycle.


Updating My WalkSensor Monitoring Network Profile For The Month

I am updating my WalkSensor profile for the month. I like to take a look at my profile each month, stay up to date on the providers that I contribute data to, and consider any possible new additions to my profile. Right now I gather data for 28 separate companies, organizations, and the city of Portland, OR where I live. When I walk to work each day, which is about 1.4 miles, I scan about 93 separate sensors, for all of my 28 companies, organizations and government agencies.

I started walking to work each day for my health, and I kept walking because of WalkSensor. I get to gather data from water, electrical, weather, and other sensors placed around my neighborhood and larger city. Rather than companies connecting each sensor directly to the Internet for information gathering, it was more cost effective, and more secure, to give each sensor the ability to transmit a data signal for a small radius, say around 30 feet. These signals can be picked up by any regular mobile smart phone, but it requires the WalkSensor app to actually accept, decrypt, and store each signal's message.

When I get to work, or get back home, whichever is the first place I connect to a secure wifi connection, the gathered data is transmitted to each company's WalkSensor server. The process is a win for everyone involved--each company, organization, or government agency is able to deploy low cost sensors, and I am able to generate some revenue, which pushes me to walk to work rather than driving. I get to choose who I gather data for, and I only work for them if I have an affinity with their mission and data collection goals.
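
Since WalkSensor is a concept, none of this code exists anywhere, but here is a purely hypothetical sketch of the store-and-forward flow I just described: cache each decrypted sensor message locally while walking, then flush everything to each sponsor's server once the phone is on a trusted wifi connection. Every name and endpoint here is invented.

```python
# Hypothetical WalkSensor client flow: cache readings locally, flush over trusted wifi.
import json
import sqlite3
import urllib.request

DB = sqlite3.connect("walksensor.db")
DB.execute("CREATE TABLE IF NOT EXISTS readings (sponsor TEXT, payload TEXT)")


def cache_reading(sponsor: str, payload: dict) -> None:
    """Store a decrypted sensor message until we are back on trusted wifi."""
    DB.execute("INSERT INTO readings VALUES (?, ?)", (sponsor, json.dumps(payload)))
    DB.commit()


def flush(sponsor_endpoints: dict) -> None:
    """Upload cached readings to each sponsor's (invented) WalkSensor server."""
    for sponsor, payload in DB.execute("SELECT sponsor, payload FROM readings"):
        request = urllib.request.Request(
            sponsor_endpoints[sponsor],
            data=payload.encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)
    DB.execute("DELETE FROM readings")
    DB.commit()
```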

I do not make much money, about $50.00 a month, but it is worth it. I can earn extra money by taking walks in the evening or on weekends. Companies have specific data gathering needs, and my WalkSensor app will aggregate potential opportunities for me, and plan a trip for me, encouraging me to go for a walk. I like that I am supporting a different vision of the Internet of Things, one that doesn’t involve everything being connected to the Internet, and allows for a much more sensible and controlled way to connect devices to a central online platform.


An API Evangelism Strategy To Map The Global Family Tree

In my work everyday as the API Evangelist, I get to have some very interesting conversations, with a wide variety of folks, about how they are using APIs, as well as brainstorming other ways they can approach their API strategy allowing them to be more effective. One of the things that keeps me going in this space is this diversity. One day I’m looking at Developer.Trade.Gov for the Department of Commerce, the next talking to WordPress about APIs for 60 million websites, and then I’m talking with The Church of Jesus Christ of Latter-day Saints about the Family Search API, which is actively gathering, preserving, and sharing genealogical records from around the world.

I’m so lucky I get to speak with all of these folks about the benefits and perils of APIs, helping them think through their approach to opening up their valuable resources using APIs. The process is nourishing for me because I get to speak about such a diverse range of implementations, push my understanding of what is possible with APIs, while also sharpening my critical eye and understanding of where APIs can help, or where they can possibly go wrong. Personally, I find a few things very intriguing about the Family Search API story:

  1. Mapping the world's genealogical history using a publicly available API — Going Big!!
  2. Potential from participation by not just big partners, but the long tail of genealogical geeks
  3. Transparency, openness, and collaboration shining through as the solution beyond just the technology
  4. The mission driven focus of the API overlapping with my obsession for API evangelism intrigues and scares me
  5. They have an existing developer area, APIs, and seemingly all the necessary building blocks, but have failed to reach platform level

I’m open to talking with anyone about their startup, SMB, enterprise, organizational, institutional, or government API, always leaving open a 15 minute slot to hear a good story, which turned into more than an hour of discussion with the Family Search team. See, Family Search already has an API, they have the technology in order, and they even have many of the essential business building blocks as well, but where they are falling short is when it comes to dialing in both the business and politics of their developer ecosystem to discover the right balance that will help them truly become a platform—which is my specialty. ;-)

This brings us to the million dollar question: How does one become a platform?

All of this makes Family Search an interesting API story. The scope of the API is huge, and to take something this big to the next level, Family Search has to become a platform--not a superficial “platform” where they are just catering to three partners, but one that nourishes a vibrant long tail ecosystem of website, web application, single page application, mobile application, and widget developers. Family Search is at an important inflection point: they have all the parts and pieces of a platform, they just have to figure out exactly what changes need to be made to open up and take things to the next level.

First, let’s quantify the company: what is FamilySearch? “For over 100 years, FamilySearch has been actively gathering, preserving, and sharing genealogical records worldwide”, believing that “learning about our ancestors helps us better understand who we are—creating a family bond, linking the present to the past, and building a bridge to the future”.

FamilySearch holds 1.2 billion total records, with 108 million completed in 2014 so far and 24 million awaiting completion, as well as 386 active genealogical projects going on. Family Search provides the ability to manage photos, stories, documents, people, and albums—allowing people to be organized into a tree, so you know which branch everyone belongs to in the global family tree.

FamilySearch started out as the Genealogical Society of Utah, which was founded in 1894 and dedicated to preserving the records of the family of mankind, looking to "help people connect with their ancestors through easy access to historical records”. FamilySearch is a mission-driven, non-profit organization run by The Church of Jesus Christ of Latter-day Saints. All of this comes together to define an entity that possesses an image that will appeal to some, while leaving concern for others—making for a pretty unique formula for an API driven platform, one that doesn’t quite have a model anywhere else.

FamilySearch considers what they deliver as a set of record custodian services:

  • Image Capture - Obtaining a preservation quality image is often the most costly and time-consuming step for records custodians. Microfilm has been the standard, but digital is emerging. Whether you opt to do it yourself or use one of our worldwide camera teams, we can help.
  • Online Indexing - Once an image is digitized, key data needs to be transcribed in order to produce a searchable index that patrons around the world can access. Our online indexing application harnesses volunteers from around the world to quickly and accurately create indexes.
  • Digital Conversion - For those records custodians who already have a substantial collection of microfilm, we can help digitize those images and even provide digital image storage.
  • Online Access - Whether your goal is to make your records freely available to the public or to help supplement your budget needs, we can help you get your records online. To minimize your costs and increase access for your users, we can host your indexes and records on FamilySearch.org, or we can provide tools and expertise that enable you to create your own hosted access.
  • Preservation - Preservation copies of microfilm, microfiche, and digital records from over 100 countries and spanning hundreds of years are safely stored in the Granite Mountain Records Vault—a long-term storage facility designed for preservation.

FamilySearch provides a proven set of services that users can take advantage of via web applications, as well as iPhone and Android mobile apps, resulting in the online community they have built today. FamilySearch also goes beyond their basic web and mobile application services, and is elevated to a software as a service (SaaS) level by having a pretty robust developer center and API stack.

Developer Center
FamilySearch provides the required first impression when you land in the FamilySearch developer center, quickly explaining what you can do with the API, "FamilySearch offers developers a way to integrate web, desktop, and mobile apps with its collaborative Family Tree and vast digital archive of records”, and immediately provides you with a getting started guide, and other supporting tutorials.

FamilySearch provides access to over 100 API resources across twenty separate groups--Authorities, Change History, Discovery, Discussions, Memories, Notes, Ordinances, Parents and Children, Pedigree, Person, Places, Records, Search and Match, Source Box, Sources, Spouses, User, Utilities, and Vocabularies--connecting you to the core FamilySearch genealogical engine.
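
To give a feel for what working with the Person group looks like, here is a hedged sketch of reading a single person record from the Family Tree side of the API. The endpoint path, media type, sample person ID, and response shape are assumptions based on my reading of the public developer docs, so verify them against the developer center before using.

```python
# Hedged sketch: fetch a person record from the FamilySearch Family Tree API.
# Endpoint path, media type, person ID, and response shape are assumptions.
import requests

ACCESS_TOKEN = "YOUR-OAUTH2-ACCESS-TOKEN"  # obtained via the FamilySearch OAuth 2.0 flow
PERSON_ID = "KWQS-BBQ"  # placeholder person ID

response = requests.get(
    f"https://api.familysearch.org/platform/tree/persons/{PERSON_ID}",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/x-fs-v1+json",  # assumed FamilySearch JSON media type
    },
)
response.raise_for_status()
person = response.json()["persons"][0]  # response shape assumed from public docs
print(person["display"]["name"], person["display"].get("lifespan"))
```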

The FamilySearch developer area provides all the common, and even some forward leaning technical building blocks:

To support developers, FamilySearch provides a fairly standard support setup:

To augment support efforts there are also some other interesting building blocks:

Setting the stage for FamilySearch evolving to being a platform, they even posses some necessary partner level building blocks:

There is even an application gallery showcasing what web, mac & windows desktop, and mobile applications developers have built. FamilySearch even encourages developers to “donate your software skills by participating in community projects and collaborating through the FamilySearch Developer Network”.

Many of the ingredients of a platform exist within the current FamilySearch developer hub, at least the technical elements, and some of the common business and political building blocks of a platform, but what is missing? This is what makes FamilySearch a compelling story, because it emphasizes one of the core elements of API Evangelist—that all of this API stuff only works when the right blend of technical, business, and politics exists.

Establishing A Rich Partnership Environment

FamilySearch has some strong partnerships that have helped establish FamilySearch as the genealogy service it is today. FamilySearch knows they wouldn’t exist without the partnerships they’ve established, but how do you take it to the next level and grow a much larger, organic API driven ecosystem, where a long tail of genealogy businesses, professionals, and enthusiasts can build on, and contribute to, the FamilySearch platform?

FamilySearch knows the time has come to make a shift to being an open platform, but is not entirely sure what needs to happen to actually stimulate not just the core FamilySearch partners, but also establish a vibrant long tail of developers. A developer portal is not just a place where geeky coders come to find what they need, it is a hub where business development occurs at all levels, in both synchronous and asynchronous ways, in a 24/7 global environment.

FamilySearch acknowledges they have some issues when it comes to investing in API driven partnerships:

  • “Platform” means their core set of large partners
  • Not treating all partners like first class citizens
  • Competing with some of their partners
  • Not using their own API, creating a gap in perspective

FamilySearch knows if they can work out the right configuration, they can evolve FamilySearch from a digital genealogical web and mobile service to a genealogical platform. If they do this they can scale beyond what they’ve been able to do with a core set of partners, and crowdsource the mapping of the global family tree, allowing individuals to map their own family trees, while also contributing to the larger global tree. With a proper API driven platform this process doesn’t have to occur via the FamilySearch website and mobile app, it can happen in any web, desktop, or mobile application anywhere.

FamilySearch already has a pretty solid development team taking care of the tech of the FamilySearch API, and they have 20 people working internally to support partners. They have a handle on the tech of their API, they just need to get a handle on the business and politics of their API, and invest in the resources needed to help scale the FamilySearch API from being just a developer area, to a growing genealogical developer community, to a full blown ecosystem that spans not just the FamilySearch developer portal, but thousands of other sites and applications around the globe.

A Good Dose Of API Evangelism To Shift Culture A Bit

A healthy API evangelism strategy brings together a mix of business, marketing, sales, and technology disciplines into a new approach to doing business for FamilySearch. Done right, it can open up FamilySearch to outside ideas, and with the right framework allow the platform to move beyond just certification and partnering, to investment in, and acquisition of, data, content, talent, applications, and partners via the FamilySearch developer platform.

Think of evangelism as the grease in the gears of the platform, allowing it to grow, expand, and handle a larger volume of outreach and support. API evangelism works to lubricate all aspects of platform operation.

First, let's kick off by setting some objectives for why we are doing this, and what we are trying to accomplish:

  • Increase Number of Records - Increase the number of overall records in the FamilySearch database, contributing to the larger goal of mapping the global family tree.
  • Growth in New Users - Grow the number of new users who are building on the FamilySearch API, increasing the overall headcount for the platform.
  • Growth In Active Apps - Increase not just new users but the number of actual apps being built and used, not just counting people kicking the tires.
  • Growth in Existing User API Usage - Increase how existing users are putting the FamilySearch APIs to work. Educate about new features, and increase adoption.
  • Brand Awareness - One of the top reasons for designing, deploying and managing an active API is to increase awareness of the FamilySearch brand.
  • What else?

What does developer engagement look like for the FamilySearch platform?

  • Active User Engagement - How do we reach out to existing, active users and find out what they need? How do we profile them and continue to understand who they are and what they need? Is there a direct line to the CRM?
  • Fresh Engagement - How is FamilySearch contacting newly registered developers each week to see what their immediate needs are, while their registration is fresh in their minds?
  • Historical Engagement - How are historically active and / or inactive developers being engaged to better understand what their needs are, and what would make them active or increase their activity?
  • Social Engagement - Is FamilySearch profiling developers' URL, Twitter, Facebook, LinkedIn, and Github profiles, and then actively engaging via these channels? (A quick profiling sketch follows this list.)
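
Here is a quick sketch of the social profiling piece, using the public Github API to enrich a developer record captured at registration. The username is a placeholder, and any real volume of lookups would need an authenticated token to avoid rate limits.

```python
# Enrich a registered developer with their public Github profile.
import requests


def github_profile(username: str) -> dict:
    """Pull a handful of public profile fields for an engagement team to use."""
    response = requests.get(f"https://api.github.com/users/{username}")
    response.raise_for_status()
    data = response.json()
    return {
        "name": data.get("name"),
        "company": data.get("company"),
        "blog": data.get("blog"),
        "public_repos": data.get("public_repos"),
        "followers": data.get("followers"),
    }


print(github_profile("octocat"))  # placeholder username
```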

Establish a Developer Focused Blog For Storytelling

  • Projects - There are over 390 active projects on the FamilySearch platform, plus any number of active web, desktop, and mobile applications. All of this activity should be regularly profiled as part of platform evangelism. An editorial assembly line of technical projects that can feed blog stories, how-tos, samples and Github code libraries should be taking place, establishing a large volume of exhaust via the FamilySearch platform.
  • Stories - FamilySearch is great at writing public and partner facing content, but there is a need to write, edit, and post stories derived from the technically focused projects, with SEO and API support by design.
  • Syndication - Syndication to Tumblr, Blogger, Medium and other relevant blogging sites on regular basis with the best of the content.

Mapping Out The Genealogy Landscape

  • Competition Monitoring - Evaluation of regular activity of competitors via their blog, Twitter, Github and beyond.
  • Alpha Players - Who are the vocal people in the genealogy space with active Twitter accounts, blogs, and Github profiles?
  • Top Apps - What are the top applications in the space, whether built on the FamilySearch platform or not, and what do they do?
  • Social - Mapping the social landscape for genealogy, who is who, and who the platform should be working with.
  • Keywords - Establish a list of keywords to use when searching for topics on search engines, Q&A sites, forums, social bookmarking and social networks. (should already be done by marketing folks)
  • Cities & Regions - Target specific markets in cities that make sense for the evangelism strategy--what are the local tech meetups, organizations, schools, and other gatherings? Who are the tech ambassadors for FamilySearch in these spaces?

Adding To Feedback Loop From Forum Operations

  • Stories - Derive stories for the blog from forum activity, and the actual needs of developers.
  • FAQ Feed - Is this being updated regularly with fresh material?
  • Streams - Are there other streams giving the platform a heartbeat?

Being Social About Platform Code and Operations With Github

  • Setup Github Account - Set up a FamilySearch platform developer account, and bring the internal development team in under a single team umbrella.
  • Github Relationships - Manage followers, forks, downloads, and other potential relationships via Github, which has grown beyond just code and is now social (see the sketch after this list).
  • Github Repositories - Manage code sample Gists, official code libraries, and any samples, starter kits, or other code generated through projects.
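
As a sketch of what Github relationship monitoring could look like, here is a short script that pulls star and fork counts for every repository under an organization. The org name is used purely for illustration, and pagination beyond the first 100 repos is left out.

```python
# Pull star and fork counts for an organization's public repositories.
import requests


def repo_stats(org: str):
    """Yield (name, stars, forks) for up to 100 public repos under an org."""
    response = requests.get(
        f"https://api.github.com/orgs/{org}/repos", params={"per_page": 100}
    )
    response.raise_for_status()
    for repo in response.json():
        yield repo["name"], repo["stargazers_count"], repo["forks_count"]


for name, stars, forks in repo_stats("familysearch"):  # org name for illustration only
    print(f"{name}: {stars} stars, {forks} forks")
```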

Adding To The Feedback Loop From The Bigger FAQ Picture

  • Quora - Regular trolling of Quora and responding to relevant FamilySearch or industry related questions.
  • Stack Exchange - Regular trolling of Stack Exchange / Stack Overflow and responding to relevant FamilySearch or industry related questions.
  • FAQ - Add questions from the bigger FAQ picture to the local FamilySearch FAQ for referencing locally.

Leverage Social Engagement And Bring In Developers Too

  • Facebook - Consider setting up a new API specific Facebook page. Post all API evangelism activities and manage friends and followers.
  • Google Plus - Consider setting up a new API specific Google+ page. Post all API evangelism activities and manage followers.
  • LinkedIn - Consider setting up a new API specific LinkedIn page that will follow developers and other relevant users for engagement. Post all API evangelism activities.
  • Twitter - Consider setting up a new API specific Twitter account. Tweet all API evangelism activity and relevant industry landscape activity, discover new followers, and engage with followers.

Sharing Bookmarks With the Social Space

  • Hacker News - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Hacker News, to keep a fair and balanced profile, as well as network and user engagement.
  • Product Hunt - Product Hunt is a place to share the latest tech creations, providing an excellent format for API providers to share details about their new API offerings.
  • Reddit - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Reddit, to keep a fair and balanced profile, as well as network and user engagement.

Communicate Where The Roadmap Is Going

  • Roadmap - Provide regular roadmap feedback based upon developer outreach and feedback.
  • Changelog - Make sure the change log always reflects the roadmap communication or there could be backlash.

Establish A Presence At Events

  • Conferences - What are the top conferences occurring that we can participate in or attend--pay attention to calls for papers at relevant industry events.
  • Hackathons - What hackathons are coming up in 30, 90, 120 days? Which should be sponsored, attended, etc.?
  • Meetups - What are the best meetups in target cities? Are there different formats that would best meet our goals? Are there any sponsorship or speaking opportunities?
  • Family History Centers - Are there local opportunities for the platform to hold training, workshops and other events at Family History Centers?
  • Learning Centers - Are there local opportunities for the platform to hold training, workshops and other events at Learning Centers?

Measuring All Platform Efforts

  • Activity By Group - Summary and highlights from weekly activity within each area of the API evangelism strategy.
  • New Registrations - Historical and weekly accounting of new developer registrations across APIs.
  • Volume of Calls - Historical and weekly accounting of API calls per API.
  • Number of Apps - How many applications are there? (A minimal reporting sketch follows this list.)
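
As promised above, here is a minimal reporting sketch, assuming the API management layer can export a CSV of usage with week, api, developer_id, app_id, and calls columns. The file name and column names are hypothetical; the point is just showing how raw platform exhaust rolls up into the weekly numbers listed here.

```python
# Roll a hypothetical usage export up into weekly calls, apps, and developer counts.
import csv
from collections import defaultdict

weekly_calls = defaultdict(int)
weekly_apps = defaultdict(set)
weekly_devs = defaultdict(set)

with open("api-usage-export.csv") as handle:  # hypothetical export file
    for row in csv.DictReader(handle):
        weekly_calls[(row["week"], row["api"])] += int(row["calls"])
        weekly_apps[row["week"]].add(row["app_id"])
        weekly_devs[row["week"]].add(row["developer_id"])

for (week, api), calls in sorted(weekly_calls.items()):
    print(f"{week} {api}: {calls} calls")
for week in sorted(weekly_apps):
    print(f"{week}: {len(weekly_apps[week])} active apps, {len(weekly_devs[week])} developers")
```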

Essential Internal Evangelism Activities

  • Storytelling - Telling stories about an API isn’t just something you do externally; what stories need to be told internally to make sure an API initiative is successful?
  • Conversations - Incite internal conversations about the FamilySearch platform. Hold brown bag lunches if you need to, or internal hackathons to get them involved.
  • Participation - It is very healthy to include other people from across the company in API operations. How can we include people from other teams in API evangelism efforts? Bring them to events, conferences and potentially expose them to local, platform focused events.
  • Reporting - Sometimes providing regular numbers and reports to key players internally can help keep operations running smoothly. What reports can we produce? Make them meaningful.

All of this evangelism starts with a very external focus, which is a hallmark of API and developer evangelism efforts, but if you notice, by the end we are bringing it home to the most important aspect of platform evangelism, the internal outreach. A lack of internal evangelism is the number one reason APIs fail: not educating top and mid-level management, as well as lower level staff, not getting buy-in and direct hands-on involvement with the platform, and failing to justify budget costs for the resources needed to make a platform successful.

Top-Down Change At FamilySearch

The change FamilySearch is looking for already has top level management buy-in; the problem is that the vision is not in lock step with actual platform operations. When projects developed via the FamilySearch platform are regularly showcased to top level executives, and stories consistent with platform operations are told, management will echo what is actually happening on the platform. This will provide a much more ongoing, deeper message for the rest of the company and partners around what the priorities of the platform are, making it not just a meaningless top down mandate.

An example of this in action is the recent mandate from President Obama that all federal agencies should go “machine readable by default”, which includes using APIs and open data outputs like JSON, instead of document formats like PDF. This top down mandate makes for a good PR soundbite, but has had little effect on the ground at federal agencies. In reality it has taken two years of hard work on the ground, at each agency, between agencies, and with the public, to even begin to make this mandate a reality at over 20 federal agencies.

Top down change is a piece of the overall platform evolution at FamilySearch, but is only a piece. Without proper bottom-up, and outside-in change, FamilySearch will never evolve beyond just being a genealogical software as a service with an interesting API. It takes much more than leadership to make a platform.

Bottom-Up Change At FamilySearch

One of the most influential aspects of APIs I have seen at companies, institutions, and agencies is the change of culture brought when APIs move beyond just a technical IT effort, and become about making resources available across an organization, and enabling people to do their job better. Without an awareness, buy-in, and in some cases evangelist conversion, a large organization will not be able to move from a service orientation to a platform way of thinking.

If a company as a whole is unaware of APIs, either within the company or organization, or out in the larger world of popular platforms like Twitter, Instagram, and others—it is extremely unlikely they will endorse, let alone participate in, moving from being a digital service to being a platform. Employees need to see the benefits of a platform to their everyday job, and their involvement cannot require what they would perceive as extra work to accomplish platform related duties. FamilySearch employees need to see the benefits the platform brings to the overall mission, and play a role in making this happen—even if it originates from a top-down mandate.

Top bookseller Amazon was already on the path to being a platform with their set of commerce APIs, when after a top down mandate from CEO Jeff Bezos, Amazon internalized APIs in such a way that the entire company interacted and exchanged resources using web APIs, resulting in one of the most successful API platforms—Amazon Web Services (AWS). Bezos mandated that if an Amazon department needed to procure a resource from another department, like server or storage space from IT, it needed to happen via APIs. This wasn’t a meaningless top-down mandate; it made employees' lives easier, and ultimately made the entire company more nimble and agile, while also saving time and money. Without buy-in and execution from Amazon employees, what we know as the cloud would never have occurred.

Change at large enterprises, organizations, institutions, and agencies can be expedited with the right top-down leadership, but it takes the right platform evangelism strategy, one that includes internal stakeholders not just as targets of outreach efforts but as participants in operations, to produce sweeping, transformational change. This type of change at a single organization can affect how an entire industry operates, similar to what we’ve seen from the ultimate API platform pioneer, Amazon.

Outside-In Change At FamilySearch

The final layer of change that needs to occur to bring FamilySearch from being just a service to a true platform is opening up the channels to outside influence when it comes not just to platform operations, but organizational operations as well. The bar is high at FamilySearch. The quality of services, expectations for the process, and adherence to the mission are strong, but if you are truly dedicated to providing a database of all mankind, you are going to have to let mankind in a little bit.

FamilySearch is still the keeper of knowledge, but to become a platform you have to let in the possibility that outside ideas, processes, and applications can bring value to the organization, as well as to the wider genealogical community. You have to evolve beyond the notion that the best ideas come from inside the organization, or just from the leading partners in the space. There are opportunities for innovation and transformation in the long tail, but you have to have a platform set up to encourage, participate in, and be able to identify value in the long-tail stream of an API platform.

Twitter is one of the best examples of how any platform will have to let in outside ideas, applications, companies, and individuals. Much of what we consider Twitter today was built in the platform ecosystem--from the iPhone and Android apps, to the desktop app TweetDeck, to terminology like the #hashtag. Over the last 5 years, Twitter has worked hard to find the optimal platform balance, regarding how they educate, communicate, invest, acquire, and incentivize their platform ecosystem. Listening to outside ideas goes well beyond the fact that Twitter is a publicly available social platform; with such a large platform of API developers it is impossible to let in every idea, but through a sophisticated evangelism strategy of in-person and online channels, in 2014 Twitter has managed to find a balance that is working well.

Having a public facing platform doesn’t mean the flood gates are open for ideas and thoughts to just flow in; this is where service composition, and the certification and partner framework for FamilySearch, will come in. Through clear, transparent partner tiers, and open and transparent operations and communications, an optimal flow of outside ideas, applications, companies, and individuals can be established—enabling a healthy, sustainable amount of change from the outside world.

Knowing All Of Your Platform Partners

The hallmark of any mature online platform is a well established partner ecosystem. If you’ve made the transition from service to platform, you’ve established a pretty robust approach to not just certifying and onboarding your partners; you have also stepped it up in knowing and understanding who they are, what their needs are, and investing in them throughout the lifecycle.

First off, profile everyone who comes through the front door of the platform. If they sign up for a public API key, who are they, and where do they potentially fit into your overall strategy? Don’t be pushy, but understand who they are and what they might be looking for, and make sure you have a well defined track for this type of user.

Next, qualify and certify as you have been doing. Make sure the process is well documented, but also transparent, allowing companies and individuals to quickly understand what it will take to get certified, what the benefits are, and see examples of other partners who have achieved this status. As a developer building a genealogical mobile app, I need to know what I can expect, and have some incentive for investing in the certification process.

Keep your friends close, and your competition closer. Open the door wide for your competition to become a platform user, and potentially a partner. 100+ year old technology company Johnson Controls (JCI) was concerned about what the competition might do if they opened up their building efficiency data resources to the public via the Panoptix API platform, but after it launched, they realized their competition were now their customers, and partners in this new approach to doing business online for JCI.

When the Department of Energy decides what data and other resources it makes available via Data.gov or the agency's developer program, it has to deeply consider how this could affect U.S. industries. The resources the federal agency possesses can be pretty high value, offering huge benefits for the private sector, but in some cases opening up APIs, or limiting access to them, might help or hurt the larger economy, as well as the Department of Energy developer ecosystem—there are lots of considerations when opening up API resources, and they vary from industry to industry.

There are no silver bullets when it comes to API design, deployment, management, and evangelism. It takes a lot of hard work, communication, and iterating before you strike the right balance of operations, and every business sector will be different. Without knowing who your platform users are, and being able to establish a clear and transparent road for them to follow to achieve partner status, FamilySearch will never elevate to a true platform. How can you scale the trusted layers of your platform, if your partner framework isn’t well documented, open, transparent, and well executed? It just can’t be done.

Meaningful Monetization For Platform

All of this will take money to make happen. Designing and executing on the technical and evangelism aspects I’m laying out will cost a lot of money, and on the consumer side, it will take money to design, develop, and manage desktop, web, and mobile applications built around the FamilySearch platform. How will both the FamilySearch platform and its participants make ends meet?

This conversation is a hard one for startups and established businesses, let alone a non-profit, mission driven organization. Internal developers cost money; servers and bandwidth are getting cheaper but are still a significant platform cost; and sustaining sales, bizdev, and evangelism also will not be cheap. It takes money to properly deliver resources via APIs, and even if the lowest tiers of access are free, at some point consumers are going to have to pay for access, resources, and advanced features.

The conversation around how you monetize API driven resources is going on across government, from cities up to the federal government, where the thought of charging for access to public data is unheard of. These are public assets, and they should be freely available. While this is true, think of the same situation when it comes to physical public assets that are owned by the government, like parks. You can freely enjoy many city, county, and federal parks; there are sometimes small fees for usage; but if you want to actually sell something in a public park, you will need to buy permits, and often share revenue with the managing agency. We have to think critically about how we fund the publishing and refinement of publicly owned digital assets, and as with physical assets, there will be much debate in coming years around what is acceptable, and what is not.

Woven into the tiers of partner access, there should always be provisions for applying costs, overhead, and even generating a little revenue to be applied in other ways. With great power comes great responsibility, and along with great access for FamilySearch partners, many will also be required to cover the costs of compute capacity, storage, and the other hard facts of delivering a scalable platform around any valuable digital asset, whether it is privately or publicly held.

Platform monetization doesn’t end with covering the costs of platform operation. Consumers of FamilySearch APIs will need assistance in identifying the best ways to cover their own costs as well. Running a successful desktop, web or mobile application will take discipline, structure, and the ability to manage overhead costs, while also being able to generate some revenue through a clear business model. As a platform, FamilySearch will have to bring to the table some monetization opportunities for consumers, providing guidance as part of the certification process regarding best practices for monetization, and even some direct opportunities for advertising, in-app purchases and other common approaches to application monetization and sustainment.

Without revenue greasing the gears, no service can achieve platform status. As with all other aspects of platform operations, the conversation around monetization cannot be one-sided, and just about the needs of the platform provider. Proactive steps need to be taken to ensure both the platform provider and its consumers are able to monetize in the healthiest way possible, bringing as much benefit to the overall platform community as possible.

Open & Transparent Operations & Communications

How does all of this talk of platform and evangelism actually happen? It takes a whole lot of open, transparent communication across the board. Right now the only active part of the platform is the FamilySearch Developer Google Group; beyond that you don’t see any activity that is platform specific. There are active Twitter, Facebook, Google+, and mainstream and affiliate focused blogs, but nothing that serves the platform or contributes to the feedback loop that will be necessary to take the service to the next level.

On a public platform, communications cannot all be private emails, phone calls, or face to face meetings. One of the things that allows an online service to expand to become a platform, then scale and grow into a robust, vibrant, and active community, is a stream of public communications, which include blogs, forums, social streams, images, and video content. These communication channels cannot all be one way, meaning they need to include forum and social conversations, as well as showcase platform activity by API consumers.

Platform communication isn’t just about getting direct messages answered; it is about public conversation so everyone shares in the answer, and public storytelling to help guide and lead the platform. Together with support via multiple channels, this establishes a feedback loop that, when done right, will keep growing, expanding, and driving healthy growth. The transparent nature of platform feedback loops is essential to providing everything consumers will need, while also bringing a fresh flow of ideas and insight inside the FamilySearch firewall.

Truly Shifting The FamilySearch Culture

Top-down, bottom-up, and outside-in, with a constant flow of oxygen via a vibrant, flowing feedback loop, and the nourishing, sanitizing sunlight of platform transparency--week by week, month by month, some change can occur. It won’t all be good; there are plenty of problems that arise in ecosystem operations, but all of this has the potential to slowly shift culture when done right.

One thing that shows me the team over at FamilySearch has what it takes is that when I asked if I could write this up as a story, rather than just a proposal I email them, they said yes. This is a true test of whether or not an organization might have what it takes. If you are unwilling to be transparent about the problems you currently have, and the work that goes into your strategy, it is unlikely you will have what it takes to establish the amount of transparency required for a platform to be successful.

When internal staff, large external partners, and long tail genealogical app developers and enthusiasts are in sync via a FamilySearch platform driven ecosystem, I think we can consider a shift to platform has occurred for FamilySearch. The real question is how do we get there?

Executing On Evangelism

This is not a definitive proposal for executing on an API evangelism strategy, merely a blueprint for the seed that can be used to start a slow, seismic shift in how FamilySearch engages its API area, in a way that will slowly evolve it into a community, one that includes internal, partner, and public developers. Some day, with the right set of circumstances, FamilySearch could grow into a robust, social, genealogical ecosystem where everyone comes to access, and participate in, the mapping of mankind.

  • Defining Current Platform - Where are we now? In detail.
  • Mapping the Landscape - What does the world of genealogy look like?
  • Identifying Projects - What are the existing projects being developed via the platform?
  • Define an API Evangelist Strategy - Actually fleshing out a detailed strategy.
    • Projects
    • Storytelling
    • Syndication
    • Social
    • Channels
      • External Public
      • External Partner
      • Internal Stakeholder
      • Internal Company-Wide
  • Identify Resources - What resources currently exist? What are needed?
    • Evangelist
    • Content / Storytelling
    • Development
  • Execute - What does execution of an API evangelist strategy look like?
  • Iterate - What does iteration look like for an API evangelism strategy?
    • Weekly
    • Review
    • Repeat

As with many providers, you don’t want this to take 5 years, so how do you take a 3-5 year cycle and execute in 12-18 months?

  • Invest In Evangelist Resources - It takes a team of evangelists to build a platform
    • External Facing
    • Partner Facing
    • Internal Facing
  • Development Resources - We need to step up the number of resources available for platform integration.
    • Code Samples & SDKs
    • Embeddable Tools
  • Content Resources - A steady stream of content should be flowing out of the platform, and syndicated everywhere.
    • Short Form (Blog)
    • Long Form (White Paper & Case Study)
  • Event Budget - FamilySearch needs to be everywhere, so people know that it exists. It can’t just be online.
    • Meetups
    • Hackathons
    • Conferences

There is nothing easy about this. It takes time and resources, and there are only so many elements you can automate when it comes to API evangelism. For something that is very programmatic, it takes more of the human variable to make the API driven platform algorithm work. With that said, it is possible to scale some aspects, and increase the awareness, presence, and effectiveness of FamilySearch platform efforts, which is really what is currently missing.

While as the API Evangelist, I cannot personally execute on every aspect of an API evangelism strategy for FamilySearch, I can provide essential planning expertise for the overall FamilySearch API strategy, as well as provide regular check-ins with the team on how things are going, and help plan the roadmap. The two things I can bring to the table that are reflected in this proposal are an understanding of where the FamilySearch API effort currently is, and what is missing to help get FamilySearch to the next stage of its platform evolution.

When operating within the corporate or organizational silo, it can be very easy to lose sight of how other organizations and companies are approaching their API strategies, and miss important pieces of how you need to shift your own. This is one of the biggest inhibitors of API efforts at large organizations, and one of the biggest imperatives for companies to invest in their API strategy and begin the process of breaking operations out of their silo.

What FamilySearch is facing demonstrates that APIs are much more than the technical endpoints most believe them to be; it takes many other business and political building blocks to truly go from API to platform.


New Heroku Dashboard and Metrics now in Beta


At Heroku, we’re focused on delivering thoughtfully designed systems to improve developer productivity and experience. We firmly believe that improving the development and operations experience helps developers to build and run better apps. This improvement allows developers to focus more on functionality, and businesses to focus more on the value of their applications. Today we are pleased to announce two new features, both in public beta, that support this mission: a new Heroku Dashboard and Heroku Metrics. These new systems bring developers powerful new clarity and simplicity around application management, execution, and optimization.

  • New Heroku Dashboard - Managing applications, organizations, and accounts via the web is now easier than ever.
  • Heroku Metrics - Monitoring production applications and understanding the relationships between runtime characteristics is now built into Heroku.

Both are live for you to begin using now.

New Heroku Dashboard

The management of applications, organizations, and accounts is a part of the development and scaling lifecycle. To make these actions more graceful and intuitive, we’ve developed an entirely new Dashboard for your use of Heroku. Keeping things clear, fast, intuitive, and accessible is important to us, and we wanted the new Dashboard to embody those attributes. It had to be as responsive and reliable as a native application. To achieve this, we redesigned it from scratch and rebuilt it using Ember.js, creating a modern interface with ambitious layouts and interaction patterns.

Heroku should work with you, wherever you go. Whether you’re using Dashboard on your largest desktop display or the smallest of laptop screens, you’ll find the new layout brings the controls and data at your fingertips, and makes app- and org-switching speedy.

The new Dashboard also gives you the ability to explore changes to your application in a safer manner. We converged on a view-first pattern in order to shield you from accidental changes with potential operational impact, like scaling or resource deletion. With this new interaction mode, you explicitly toggle into edit-mode before you can make modifications to your apps.

We took the opportunity presented by this functional redesign to develop a new visual design, as well. We crafted a more friendly and approachable style, to lighten the load of web application development. The result is a lighter interface that favors content over chrome.

To get started with the new Dashboard, head to dashboard-next.heroku.com. During the beta, you may need to visit the old Dashboard to complete some administrative tasks; you can switch between the two at any time. When the new Dashboard is GA later this year all of the old functionality will be ported over.

Heroku Metrics

Managing applications through development and scaling often requires a deeper understanding of performance and resource utilization. A seamless part of the new Dashboard, our new Heroku Metrics system is designed to give you just that understanding, making it simple for you to analyze and optimize performance of your applications on Heroku. In order to provide clarity and general visibility into your applications’ performance and behavior, the new Metrics system provides a unified view of the data most relevant to tuning and scaling your application. Now you have direct visibility of these performance characteristics:

  • Throughput - One of the most important characteristics to measure on a web process is its throughput. Heroku Metrics provides requests per minute segmented by HTTP status codes (OK vs. Failed) per time period.
  • Response Time - The response time of your application is also a key measure of system health and quality of service. The Metrics dashboard provides both the median and 95th percentile response times per time period.
  • Errors - Platform error codes can provide excellent insight into issues with your application, so the Metrics system interpolates them with the rest of your time series data for better understanding of causality.
  • Memory - Visibility into the memory utilization of your application is useful when assessing capacity, finding memory leaks, or identifying performance degradation due to swap.
  • CPU Load - CPU load is another critical component of performance monitoring, especially for applications with processing intensive workloads, or high levels of in-process parallelism.

By unifying the display of this data along a consistent time axis, and by representing the data in the manner most meaningful for each metric, the framework provides more visibility into the interaction of these parameters and the relationships between them. This should offer a more intuitive way to understand and tune the overall performance of your app.

As we gather more data and feedback, we will be introducing guidance around performance optimization, including recommendations on horizontal or vertical scaling of applications. This guidance will be based on heuristics we are currently modeling from the range of applications on the platform. We will be rolling these out soon, and improving them as we go.

In order to get started with Heroku Metrics, visit dashboard-next.heroku.com and select the Metrics tab for one of your applications. The Metrics feature is enabled only for applications with more than one running dyno, where performance tuning is likely to be more of a priority. Existing applications should have some period of metrics already available in the system, while newly created applications or applications with recently added process types should see data available in the Metrics view within about an hour of the update. For details on the metrics displayed, see the documentation in Dev Center.

Feedback and Availability

We hope you enjoy the features we are making available today, and encourage any feedback you have on them as we work towards General Availability later this fall.
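
This is not Heroku's implementation, just a quick illustration of the two response time statistics the Metrics dashboard surfaces, the median and the 95th percentile, computed over a made-up sample of request durations.

```python
# Illustrative only: median and 95th percentile over a made-up sample of
# request durations in milliseconds, using a simple nearest-rank style p95.
import statistics

response_times_ms = [42, 51, 38, 47, 350, 44, 40, 61, 55, 39, 48, 900, 43, 46, 52]

median = statistics.median(response_times_ms)
p95 = sorted(response_times_ms)[int(0.95 * (len(response_times_ms) - 1))]

print(f"median: {median} ms, p95: {p95} ms")
```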

URL: http://feedproxy.google.com/~r/heroku/~3/-CQT_1ZkU5U/new-dashboard-and-metrics-beta

If you think there is a link I should have listed here, feel free to tweet it at me, or submit it as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.