{"API Monitoring"}

API Monitoring Blog Posts

These are posts from the API Evangelist blog that are focused on API monitoring, allowing for a filtered look at my analysis on the topic. I rely on these posts, along with the curated organizations, APIs, and tools, to help paint me a picture of what is going on.

Look Across My API Monitoring API Methods By Grouping Them Using Tags

Last week I was playing with defining API monitoring APIs so I can map them to each stop along the API life cycle. I took three of the API monitoring services I use (APIMetrics, API Science, and Runscope), and like I do for other areas along the API life cycle, and for common API stacks, I profiled their APIs using the OpenAPI Spec. This is standard operating procedure for any of my research areas: as part of profiling each company's operations, I profile their API surface area in detail.
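To give a sense of what one of these profiles looks like, here is a minimal sketch of a Swagger 2.0 style definition covering two monitoring endpoints, expressed as a Python dictionary. The host, info, and response details are hypothetical stand-ins, not values from any provider's actual definition.

```python
# A minimal, hypothetical OpenAPI (Swagger 2.0) profile for two
# monitoring endpoints; the host, info, and responses are illustrative
# stand-ins, not a provider's published values.
monitor_profile = {
    "swagger": "2.0",
    "info": {"title": "Example Monitors API", "version": "v1"},
    "host": "api.monitoring-example.com",
    "paths": {
        "/monitors": {
            "get": {
                "summary": "Get All Monitors",
                "tags": ["Monitors"],
                "responses": {"200": {"description": "A list of monitors"}},
            },
            "post": {
                "summary": "Create a Monitor",
                "tags": ["Monitors"],
                "responses": {"201": {"description": "The created monitor"}},
            },
        }
    },
}
```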

For each of my research projects, I include a listing of each available API endpoint as part of the work. As I was adding one for my API monitoring research, I had a thought--I wanted to reorganize the endpoints across the three API monitoring service providers, and group them by tag. So I started playing with a new way to look at the APIs available in any given APIs.json driven collection.

This is a listing of the API resources available in this project's APIs.json, organized by tag.

Account

Auth

  • Delete an Authentication Setting - (DELETE) - /auth/{id}/
  • Get an existing Authentication Setting - (GET) - /auth/{id}/
  • List Authentication Settings - (GET) - /auth/
  • Update an existing Authentication Setting - (PUT) - /auth/{id}/

Buckets

Calls

Checks

Contacts

Deployments

Messages

Monitors

Reports

Shared Environments

Tags

  • List All Tags - (GET) - /tags

Templates

Test Environments

Test Steps

Tests

Tokens

Workflows

If you mouse over each endpoint, it will tell you the host of the API it belongs to. I am just playing around, and have no idea what value this will provide for anyone, other than offering a new dimension for viewing the APIs involved. For me, this particular view helps me understand API resources across many providers, while also encouraging me to think more critically about how I tag the APIs I define using the OpenAPI Spec.
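The grouping itself is trivial once each provider's definition is parsed. Here is a rough sketch, assuming the profiles have already been loaded as Swagger 2.0 style dictionaries like the hypothetical one sketched earlier--the spec shape is the only assumption being made.

```python
from collections import defaultdict

def group_endpoints_by_tag(specs):
    """Group endpoints from several Swagger 2.0 profiles by their tags."""
    by_tag = defaultdict(list)
    for spec in specs:
        host = spec.get("host", "unknown")
        for path, operations in spec.get("paths", {}).items():
            for verb, operation in operations.items():
                # An untagged operation still needs a home in the listing.
                for tag in operation.get("tags", ["Untagged"]):
                    by_tag[tag].append(
                        (operation.get("summary", path), verb.upper(), path, host)
                    )
    return by_tag

# Render a listing like the one above: the tag, then one line per endpoint,
# using the hypothetical monitor_profile sketched earlier in this post.
for tag, endpoints in sorted(group_endpoints_by_tag([monitor_profile]).items()):
    print(tag)
    for summary, verb, path, host in sorted(endpoints):
        print(f"  {summary} - ({verb}) - {path} [{host}]")
```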

You can view the listing by provider, as well as the listing by tag, for my API monitoring research. I will be adding these two views to all of my core research areas, and to the API stacks I define, as I have time. I also thought it would be interesting to add them to my own API stack, which is probably the most defined of all of my stacks--here is the listing by provider, and the listing by tag, for my API Evangelist stack.

We'll see how this plays out as I roll it out across more of my research. I am sure I will learn a lot along the way by adding new APIs.json driven dimensions like these. I'd like to eventually have a whole toolbox of these types of views, and even some APIs.json and OpenAPI Spec driven visualizations.

See The Full Blog Post


Defining API Monitoring APIs So I Can Map To Each Stop Along The API Life Cycle

I am going through each of the 35+ areas of the API space that I monitor, working to bring alive the over 900 stops along the API life cycle that I have identified through my research. I'm still working through prototypes for my life cycle explorer, but the current version has organizations, tools, links, and questions, along with the title and description of each stop of the life cycle journey I am trying to bring into focus.

Part of my approach in identifying the different lines, areas, and stops along this life cycle involves taking a look at the approach of leading API providers, as well as the services being offered by companies selling their solutions to these API providers--giving me two sides of the API life cycle coin. In the last couple of months I have also found another way to identify potential building blocks, and round off the ones I have: through the API definitions of leading API providers.

All I do is craft an OADF file for each of the API service providers I track on, within each area of my research. I'm spending time tonight working on my API monitoring research, so I am looking at three of the service providers I track on who have APIs. The OADF specs are not complete, but they provide me a baseline definition for the surface area of each API, something I'll round out with more use. Here are the endpoints I have from each provider so far.

API Science Monitors API (oadf)

  • Get All Contacts - (GET) - /contacts.json
  • Create a Contact - (POST) - /contacts.json
  • Delete a Contact - (DELETE) - /contacts/{id}.json
  • Get a Specific Contact - (GET) - /contacts/{id}.json
  • Update a Contact - (PATCH) - /contacts/{id}.json
  • Get All Monitors - (GET) - /monitors
  • Create a Monitor - (POST) - /monitors
  • Apply Actions to Multiple Monitors - (PUT) - /monitors
  • Get a Specific Monitor - (GET) - /monitors/{id}
  • Get Checks For A Monitor - (GET) - /monitors/{id}/checks.json
  • Performance Report - (GET) - /monitors/{id}/performance
  • Show a Monitor's Templates - (GET) - /monitors/{id}/templates
  • Get a Template - (GET) - /monitors/{id}/templates/{templates}
  • Create a Template - (POST) - /monitors/{id}/templates/{templates}
  • Testing your Monitor - (GET) - /monitors/{id}/test
  • Uptime Report - (GET) - /monitors/{id}/uptime.json
  • List All Tags - (GET) - /tags

Runscope API (oadf)

  • Account Resource - (GET) - /account
  • Teams Resource - (GET) - /teams/{teamId}/people
  • Team integrations list - (GET) - /teams/{teamId}/integrations
  • Returns a list of buckets. - (GET) - /buckets
  • Create a new bucket - (POST) - /buckets
  • Returns a single bucket resource. - (GET) - /buckets/{bucketKey}
  • Delete a single bucket resource. - (DELETE) - /buckets/{bucketKey}
  • Retrieve a list of messages in a bucket - (GET) - /buckets/{bucketKey}/messages
  • Clear a bucket (remove all messages). - (DELETE) - /buckets/{bucketKey}/messages
  • Create a message - (POST) - /buckets/{bucketKey}/messages
  • Retrieve a list of error messages in a bucket - (GET) - /buckets/{bucketKey}/errors
  • Retrieve the details for a single message. - (GET) - /buckets/{bucketKey}/messages/{messageId}
  • Returns a list of tests. - (GET) - /buckets/{bucketKey}/tests
  • Create a test. - (POST) - /buckets/{bucketKey}/tests
  • Delete a single test. - (DELETE) - /buckets/{bucketKey}/tests/{testId}
  • List test steps for a test. - (GET) - /buckets/{bucketKey}/tests/{testId}/steps
  • Add new test step. - (POST) - /buckets/{bucketKey}/tests/{testId}/steps
  • Update the details of a single test step. - (PUT) - /buckets/{bucketKey}/tests/{testId}/steps/{stepId}
  • Delete a step from a test. - (DELETE) - /buckets/{bucketKey}/tests/{testId}/steps/{stepId}
  • Return details of the test's environments (only those that belong to the specified test) - (GET) - /buckets/{bucketKey}/tests/{testId}/environments
  • Create new test environment. - (POST) - /buckets/{bucketKey}/tests/{testId}/environments
  • Update the details of a test environment. - (PUT) - /buckets/{bucketKey}/tests/{testId}/environments/{environmentId}
  • Returns list of shared environments for a specified bucket. - (GET) - /buckets/{bucketKey}/environments
  • Create new shared environment. - (POST) - /buckets/{bucketKey}/environments
  • Update the details of a test environment. - (PUT) - /buckets/{bucketKey}/environments/{environmentId}

APIMetrics Auth API (oadf)

  • List Authentication Settings - (GET) - /auth/
  • Delete an Authentication Setting - (DELETE) - /auth/{id}/
  • Get an existing Authentication Setting - (GET) - /auth/{id}/
  • Update an existing Authentication Setting - (PUT) - /auth/{id}/

APIMetrics Calls API (oadf)

  • List API Calls - (GET) - /calls/
  • Create new API Call - (POST) - /calls/
  • List API Calls by Authentication - (GET) - /calls/auth/{auth_id}/
  • Delete an API Call - (DELETE) - /calls/{id}/
  • Get an existing API Call - (GET) - /calls/{id}/
  • Update an existing API Call - (PUT) - /calls/{id}/
  • Trigger an API Call to run - (POST) - /calls/{id}/run
  • List Stats from before a date for an API Call - (GET) - /calls/{id}/stats/before
  • List Stats since a date for an API Call - (GET) - /calls/{id}/stats/since

APIMetrics Deployments API (oadf)

  • List all Deployments - (GET) - /deployments/
  • Create a new Deployment - (POST) - /deployments/
  • Get all Deployments for an API Call - (GET) - /deployments/call/{call_id}/
  • Get all Deployments for a Workflow - (GET) - /deployments/workflow/{workflow_id}
  • Delete a Deployment - (DELETE) - /deployments/{id}/
  • Get an existing Deployment - (GET) - /deployments/{id}/
  • Update an existing Deployment - (PUT) - /deployments/{id}/

APIMetrics Reports API (oadf)

  • List all Reports - (GET) - /reports/
  • Create a new Report - (POST) - /reports/
  • Delete a Report - (DELETE) - /reports/{id}/
  • Get an existing Report - (GET) - /reports/{id}/
  • Update an existing Report - (PUT) - /reports/{id}/

APIMetrics Tokens API (oadf)

  • List Auth Tokens - (GET) - /tokens/
  • Create a new Auth Token - (POST) - /tokens/
  • Get all tokens for an Authentication Setting - (GET) - /tokens/auth/{auth_id}/
  • Delete an Auth Token - (DELETE) - /tokens/{id}/
  • Get an existing Auth Token - (GET) - /tokens/{id}/
  • Update an Auth Token - (PUT) - /tokens/{id}/

APIMetrics Workflows API (oadf)

  • List all Workflows - (GET) - /workflows/
  • Create a new Workflow - (POST) - /workflows/
  • Delete a Workflow - (DELETE) - /workflows/{id}/
  • Get an existing Workflow - (GET) - /workflows/{id}/
  • Trigger a Workflow to run now - (POST) - /workflows/{id}/
  • Update an existing Workflow - (PUT) - /workflows/{id}/

When you compare the definitions for these API service providers, you are comparing apples to oranges, even though they exist in the same layer of the API space. To me, having them defined allows me to slowly weave them into my master list of common building blocks for API monitoring.

What really excites me is that for each potential stop along the API monitoring line, I might be able to actually link to specific API endpoints, even down to the verb level. For example, I could link to the endpoint for creating a new test for API Science, APIMetrics, and Runscope, with a single button or widget.
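As a sketch of what that could look like under the hood, here is one life cycle stop mapped to the matching endpoint at each provider. The verb and path pairs come straight from the listings above; the structure and names around them are my own illustration.

```python
# One stop along the API monitoring life cycle ("create a test/monitor"),
# mapped to the matching endpoint at each provider. The verb/path pairs
# come from the listings above; everything else is illustrative.
CREATE_A_TEST = {
    "API Science": ("POST", "/monitors"),
    "Runscope": ("POST", "/buckets/{bucketKey}/tests"),
    "APIMetrics": ("POST", "/calls/"),
}

# A button or widget could be rendered per provider from this map.
for provider, (verb, path) in CREATE_A_TEST.items():
    print(f"{provider}: {verb} {path}")
```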

I find the API definitions for API service providers to be more interesting than some of the features they showcase via their sites. I will continue to identify the API service providers I track on who have APIs, and define them using OADF. You can find the APIs for my API monitoring research in the project's APIs.json file, as well as each individual APIs.json and OADF file listed on the API monitoring service providers page.

See The Full Blog Post


Evolving My API Stack To Be A Public Repo For Sharing API Discovery, Monitoring, And Rating Information

My API Stack began as a news site, and evolved into a directory of the APIs that I monitor in the space. I have published APIs.json indexes for the almost 1,000 companies I am tracking on, with almost 400 OADF files for some of the APIs I've profiled in more detail. My mission for the project so far has been to create an open source, machine readable repo for the API space.

I have had two recent occurrences that are pushing me to expand on my API Stack work. First, I have other entities who want to contribute monitoring data and other elements I would like to see collected, but haven't had time for. The other is that I have started spidering the URLs of the API portals I track on, and need a central place to store the indexes, so that others can access them.

Ultimately I'd like to see the API Stack act as a public repo, where anyone can grab the data they need to discover, evaluate, integrate, and stay in tune with what APIs are doing, or not doing. In addition to finding OADF, API Blueprint, and RAML files by crawling and indexing API portals, and publishing them in a public repo, I want to build out the other building blocks that I index with APIs.json, like pricing and TOS changes, and potentially make monitoring, testing, and performance data available.
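As a sketch of where this could go, here is what a single entry in the repo might look like once those building blocks are indexed alongside the API definition. APIs.json already allows arbitrary property types on each API; the X- prefixed type names and all of the URLs below are hypothetical, not part of any published spec.

```python
import json

# A hypothetical APIs.json style entry for one API in the stack, indexing
# its definition alongside pricing, TOS change, and monitoring data.
# The X- property type names and all URLs are illustrative assumptions.
api_stack_entry = {
    "name": "Example API",
    "baseURL": "https://api.example.com",
    "properties": [
        {"type": "Swagger", "url": "https://example.com/swagger.json"},
        {"type": "X-Pricing", "url": "https://example.com/pricing.json"},
        {"type": "X-TOS-Changes", "url": "https://example.com/tos-changes.json"},
        {"type": "X-Monitoring", "url": "https://example.com/monitoring.json"},
    ],
}

print(json.dumps(api_stack_entry, indent=2))
```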

Next I will publish some pricing, monitoring, and portal site crawl indexes to the repo for some of the top APIs out there, and start playing with the best way to store the JSON and other files, while providing an easy way to explore and play with the data. If you have any data that you are collecting and would like to contribute, or have a specific need you'd like to see tracked on, let me know, and I'll add it to the roadmap.

My goal is to go for quality and completeness of data there, before I look to scale, and expand the quantity of information and tooling available. Let me know if you have any thoughts or feedback.

See The Full Blog Post


API Monitoring Should Be Baked Into Your API Strategy By Default

I've written several posts on the recent Amazon API Gateway release, and one of the side things I noted about the API solution from AWS was that API monitoring is baked in by default. As stated on the AWS API Gateway page:

After your API is deployed, Amazon API Gateway provides you with a dashboard to visually monitor calls to your services using Amazon CloudWatch, so you see performance metrics and information on API calls, data latency, and error rates.

This may seem like common sense to many people who have been in the API space for a while now, but for many API designers, architects, developers, and business folks who are just getting going, API monitoring may not be the default for all implementations.

To me, AWS baking in API monitoring by default demonstrates that the world of API monitoring has matured, marking an important milestone that all API providers should pay attention to. I've been watching this space grow over the last couple of years, and similar to the API management space, the AWS release reflects the overall health of the API sector.

If you are operating any API in 2015, monitoring should be standard operating procedure, alongside your API documentation.
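To make that concrete, here is a rough sketch of pulling the kind of metrics that dashboard surfaces, using boto3 against CloudWatch. The API name and the exact metric names are my assumptions about the API Gateway namespace, not details taken from the AWS announcement.

```python
from datetime import datetime, timedelta

import boto3  # assumes AWS credentials are configured in the environment

# Pull the last hour of API Gateway metrics from CloudWatch. The metric
# names ("Count", "Latency", "5XXError") and the ApiName dimension value
# are assumptions for illustration, not values from the post above.
cloudwatch = boto3.client("cloudwatch")

for metric in ("Count", "Latency", "5XXError"):
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/ApiGateway",
        MetricName=metric,
        Dimensions=[{"Name": "ApiName", "Value": "my-api"}],  # hypothetical name
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,  # five minute buckets
        Statistics=["Average"],
    )
    print(metric, [point["Average"] for point in stats["Datapoints"]])
```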

See The Full Blog Post


Growth Of Bug Bounties, Importance of 3rd Party Monitoring, and Operating On The Open Internet

See The Full Blog Post


Going Beyond Just API Status And Providing An Official API Monitoring Service(s) With Your API

I've long advocated that an API status page should be a required building block for any API operation. As I work on monitoring for my own master API stack, and as I read stories like Pingometer Keeps Your Uptime In The 9’s With Twilio SMS, I'm thinking we need more than just a simple status report for our APIs.

From an API provider standpoint, we should have a more nuanced view of our API availability, beyond just up or down--something popular API integration service providers like Runscope and API Science have been saying for a while. If our APIs are publicly available, I'm going to even suggest providers start actively sharing their API monitoring strategy, and the resulting data, with their ecosystem--something I'm also going to be advocating for inclusion in the APIs.json index.

From an API consumer standpoint, you are going to want real-time awareness of the availability of all the APIs you depend on, and much like the provider standpoint, this view needs to be more nuanced than just whether a service is up or down. Developers need to be making tactical, run-time decisions based upon API monitoring, as well as be able to make longer term strategic decisions about which APIs they depend on, based upon API monitoring exhaust.
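Here is a minimal sketch of that kind of tactical, run-time decision: check each provider's monitoring feed before routing a call, and fall back when the feed reports degradation. The status URLs and the feed's response shape are entirely hypothetical.

```python
import requests  # pip install requests

# Hypothetical providers, each publishing a monitoring feed with more
# nuance than up/down--for example, a rolling error rate.
PROVIDERS = [
    {"name": "primary", "status_url": "https://status.primary.example/feed.json"},
    {"name": "secondary", "status_url": "https://status.secondary.example/feed.json"},
]

def pick_available_provider(providers, max_error_rate=0.05):
    """Return the first provider whose monitoring feed reports it healthy."""
    for provider in providers:
        try:
            feed = requests.get(provider["status_url"], timeout=2).json()
        except requests.RequestException:
            continue  # treat an unreachable status feed as unavailable
        if feed.get("status") == "up" and feed.get("error_rate", 0) < max_error_rate:
            return provider
    raise RuntimeError("No healthy provider available")

provider = pick_available_provider(PROVIDERS)
print("Routing traffic to", provider["name"])
```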

I like Twilio's style in showcasing Pingometer as a solution. I'm thinking every API provider should cozy up with an API monitoring service provider for their own needs, while also establishing an approach they can share with their API consumers, to meet their needs as well. I'm currently adding API Science and Runscope as default building blocks for my API Stack, something you will find indexed as part of each of my services' APIs.json files.

I'm working to coherently separate out the benefits API service providers like API Science and Runscope bring to the table for both API providers and API consumers, in a way that is also in alignment with a common set of building blocks that everyone involved can benefit from, throughout the API life-cycle.

See The Full Blog Post


Updating My WalkSensor Monitoring Network Profile For The Month

I am updating my WalkSensor profile for the month. I like to take a look at my profile each month, stay up to date on the providers that I contribute data to, and consider any possible new additions to my profile. Right now I gather data for 28 separate companies, organizations, and the city of Portland, OR, where I live. When I walk to work each day, which is about 1.4 miles, I scan about 93 separate sensors for all 28 of my companies, organizations, and government agencies.

I started walking to work each day for my health, and I kept walking because of WalkSensor. I get to gather data from water, electrical, weather, and other sensors placed around my neighborhood and the larger city. Rather than companies connecting each sensor directly to the Internet for information gathering, it was more cost effective, and more secure, to give each sensor the ability to transmit a data signal within a small radius, say 30 feet. These signals can be picked up by any regular smart phone, but it requires the WalkSensor app to actually accept, decrypt, and store each signal's message.

When I get to work, or get back home, whichever is the first location where I connect to a secure wifi connection, the gathered data is transmitted to each company's WalkSensor server. The process is a win for everyone involved--each company, organization, or government agency is able to deploy low cost sensors, and I am able to generate some revenue, pushing me to walk to work rather than driving. I get to choose who I gather data for, and I only work for them if I have an affinity with their mission and data collection goals.

I do not make much money, about $50.00 a month, but it makes the walk worth it. I can earn extra money by taking walks in the evening or on weekends. Companies have specific data gathering needs, and my WalkSensor app will aggregate potential opportunities for me, and plan a trip for me, encouraging me to go for a walk. I like that I am supporting a different vision of the Internet of Things, one that doesn’t involve everything being connected to the Internet, and allows for a much more sensible and controlled way to connect devices to a central online platform.

See The Full Blog Post


An API Evangelism Strategy To Map The Global Family Tree

In my work every day as the API Evangelist, I get to have some very interesting conversations with a wide variety of folks about how they are using APIs, as well as brainstorming other ways they can approach their API strategy, allowing them to be more effective. One of the things that keeps me going in this space is this diversity. One day I’m looking at Developer.Trade.Gov for the Department of Commerce, the next talking to WordPress about APIs for 60 million websites, and then I’m talking with The Church of Jesus Christ of Latter-day Saints about the Family Search API, which is actively gathering, preserving, and sharing genealogical records from around the world.

I’m so lucky I get to speak with all of these folks about the benefits and perils of APIs, helping them think through their approach to opening up their valuable resources using APIs. The process is nourishing for me because I get to speak to such a diverse set of implementations, push my understanding of what is possible with APIs, and sharpen my critical eye for where APIs can help, or where they can possibly go wrong. Personally, I find a couple of things very intriguing about the Family Search API story:

  1. Mapping the world’s genealogical history using a publicly available API — Going Big!!
  2. Potential from participation by not just big partners, but the long tail of genealogical geeks
  3. Transparency, openness, and collaboration shining through as the solution beyond just the technology
  4. The mission driven focus of the API overlapping with my obsession for API evangelism intrigues and scares me
  5. They have an existing developer area, APIs, and seemingly the necessary building blocks, but have failed to achieve a platform level

I’m open to talking with anyone about their startup, SMB, enterprise, organizational, institutional, or government API, always leaving open a 15 minute slot to hear a good story, which turned into more than an hour of discussion with the Family Search team. See, Family Search already has an API, they have the technology in order, and they even have many of the essential business building blocks as well, but where they are falling short is when it comes to dialing in both the business and politics of their developer ecosystem to discover the right balance that will help them truly become a platform—which is my specialty. ;-)

This brings us to the million dollar question: How does one become a platform?

All of this makes Family Search an interesting API story. Given the scope of the API, to take something this big to the next level Family Search has to become a platform--not a superficial “platform” where they are just catering to three partners, but one nourishing a vibrant long tail ecosystem of website, web application, single page application, mobile application, and widget developers. Family Search is at an important inflection point: they have all the parts and pieces of a platform, they just have to figure out exactly what changes need to be made to open up, and take things to the next level.

First, let’s quantify the company: what is FamilySearch? “For over 100 years, FamilySearch has been actively gathering, preserving, and sharing genealogical records worldwide”, believing that “learning about our ancestors helps us better understand who we are—creating a family bond, linking the present to the past, and building a bridge to the future”.

FamilySearch holds 1.2 billion total records, with 108 million completed so far in 2014, 24 million awaiting completion, and 386 active genealogical projects going on. Family Search provides the ability to manage photos, stories, documents, people, and albums—allowing people to be organized into a tree, knowing the branch everyone belongs to in the global family tree.

FamilySearch started out as the Genealogical Society of Utah, which was founded in 1894 and dedicated to preserving the records of the family of mankind, looking to “help people connect with their ancestors through easy access to historical records”. FamilySearch is a mission-driven, non-profit organization, run by The Church of Jesus Christ of Latter-day Saints. All of this comes together to define an entity that possesses an image that will appeal to some, while leaving concern for others—making for a pretty unique formula for an API driven platform, one that doesn’t quite have a model anywhere else.

FamilySearch considers what they deliver to be a set of record custodian services:

  • Image Capture - Obtaining a preservation quality image is often the most costly and time-consuming step for records custodians. Microfilm has been the standard, but digital is emerging. Whether you opt to do it yourself or use one of our worldwide camera teams, we can help.
  • Online Indexing - Once an image is digitized, key data needs to be transcribed in order to produce a searchable index that patrons around the world can access. Our online indexing application harnesses volunteers from around the world to quickly and accurately create indexes.
  • Digital Conversion - For those records custodians who already have a substantial collection of microfilm, we can help digitize those images and even provide digital image storage.
  • Online Access - Whether your goal is to make your records freely available to the public or to help supplement your budget needs, we can help you get your records online. To minimize your costs and increase access for your users, we can host your indexes and records on FamilySearch.org, or we can provide tools and expertise that enable you to create your own hosted access.
  • Preservation - Preservation copies of microfilm, microfiche, and digital records from over 100 countries and spanning hundreds of years are safely stored in the Granite Mountain Records Vault—a long-term storage facility designed for preservation.

FamilySearch provides a proven set of services that users can take advantage of via web applications, as well as iPhone and Android mobile apps, resulting in the online community they have built today. FamilySearch also goes beyond their basic web and mobile application services, and is elevated to the software as a service (SaaS) level by having a pretty robust developer center and API stack.

Developer Center
FamilySearch provides the required first impression when you land in the FamilySearch developer center, quickly explaining what you can do with the API, "FamilySearch offers developers a way to integrate web, desktop, and mobile apps with its collaborative Family Tree and vast digital archive of records”, and immediately provides you with a getting started guide, and other supporting tutorials.

FamilySearch provides access to over 100 API resources in twenty separate groups: Authorities, Change History, Discovery, Discussions, Memories, Notes, Ordinances, Parents and Children, Pedigree, Person, Places, Records, Search and Match, Source Box, Sources, Spouses, User, Utilities, and Vocabularies, connecting you to the core FamilySearch genealogical engine.
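To ground this, here is a hedged sketch of what calling one of the Person resources might look like from a client. The host, path, person id, and header details are assumptions based on common OAuth 2.0 and REST conventions, not lifted from the FamilySearch documentation.

```python
import requests  # pip install requests

# A hypothetical request against a Person resource; the host, path,
# person id, and headers are assumptions, not documented values.
BASE = "https://api.familysearch.example"  # stand-in for the real host
ACCESS_TOKEN = "your-oauth2-access-token"  # obtained via the OAuth 2.0 flow

response = requests.get(
    f"{BASE}/platform/tree/persons/XXXX-XXX",  # hypothetical person id
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/json",
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())
```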

The FamilySearch developer area provides all the common, and even some forward leaning technical building blocks:

To support developers, FamilySearch provides a fairly standard support setup:

To augment support efforts there are also some other interesting building blocks:

Setting the stage for FamilySearch evolving into a platform, they even possess some necessary partner level building blocks:

There is even an application gallery showcasing what web, Mac & Windows desktop, and mobile applications developers have built. FamilySearch even encourages developers to “donate your software skills by participating in community projects and collaborating through the FamilySearch Developer Network”.

Many of the ingredients of a platform exist within the current FamilySearch developer hub--at least the technical elements, and some of the common business and political building blocks of a platform--but what is missing? This is what makes FamilySearch a compelling story, because it emphasizes one of the core elements of API Evangelist: that all of this API stuff only works when the right blend of technical, business, and politics exists.

Establishing A Rich Partnership Environment

FamilySearch has some strong partnerships that have helped establish FamilySearch as the genealogy service it is today. FamilySearch knows they wouldn’t exist without the partnerships they’ve established, but how do you take it to the next level, and grow into a much larger, organic, API driven ecosystem where a long tail of genealogy businesses, professionals, and enthusiasts can build on, and contribute to, the FamilySearch platform?

FamilySearch knows the time has come to make the shift to being an open platform, but is not entirely sure what needs to happen to actually stimulate not just the core FamilySearch partners, but also establish a vibrant long tail of developers. A developer portal is not just a place where geeky coders come to find what they need; it is a hub where business development occurs at all levels, in both synchronous and asynchronous ways, in a 24/7 global environment.

FamilySearch acknowledges they have some issues when it comes to investing in API driven partnerships:

  • “Platform” means their core set of large partners
  • Not treating all partners like first class citizens
  • Competing with some of their partners
  • Not using their own API, creating a gap in perspective

FamilySearch knows if they can work out the right configuration, they can evolve FamilySearch from a digital genealogical web and mobile service to a genealogical platform. If they do this they can scale beyond what they’ve been able to do with a core set of partners, and crowdsource the mapping of the global family tree, allowing individuals to map their own family trees, while also contributing to the larger global tree. With a proper API driven platform this process doesn’t have to occur via the FamilySearch website and mobile app; it can happen in any web, desktop, or mobile application anywhere.

FamilySearch already has a pretty solid development team taking care of the tech of the FamilySearch API, and they have 20 people working internally to support partners. They have a handle on the tech of their API; they just need to get a handle on the business and politics of their API, and invest in the resources needed to help scale the FamilySearch API from being just a developer area, to a growing genealogical developer community, to a full blown ecosystem that spans not just the FamilySearch developer portal, but thousands of other sites and applications around the globe.

A Good Dose Of API Evangelism To Shift Culture A Bit

A healthy API evangelism strategy brings together a mix of business, marketing, sales, and technology disciplines into a new approach to doing business for FamilySearch--something that, if done right, can open up FamilySearch to outside ideas, and with the right framework, allow the platform to move beyond just certification and partnering, to also investment in, and acquisition of, data, content, talent, applications, and partners via the FamilySearch developer platform.

Think of evangelism as the grease in the gears of the platform, allowing it to grow, expand, and handle a larger volume of outreach and support. API evangelism works to lubricate all aspects of platform operation.

First, let’s kick off by setting some objectives for why we are doing this--what are we trying to accomplish?

  • Increase Number of Records - Increase the number of overall records in the FamilySearch database, contributing to the larger goal of mapping the global family tree.
  • Growth in New Users - Grow the number of new users who are building on the FamilySearch API, increasing the overall headcount for the platform.
  • Growth In Active Apps - Increase not just new users, but the number of actual apps being built and used, not just counting people kicking the tires.
  • Growth in Existing User API Usage - Increase how existing users are putting the FamilySearch APIs to use. Educate them about new features, and increase adoption.
  • Brand Awareness - One of the top reasons for designing, deploying, and managing an active API is to increase awareness of the FamilySearch brand.
  • What else?

What does developer engagement look like for the FamilySearch platform?

  • Active User Engagement - How do we reach out to existing, active users to find out what they need, and how do we profile them and continue to understand who they are and what they need? Is there a direct line to the CRM?
  • Fresh Engagement - How is FamilySearch contacting new developers who have registered each week to see what their immediate needs are, while their registration is fresh in their minds?
  • Historical Engagement - How are historically active and/or inactive developers being engaged to better understand their needs, and what would make them active or increase their activity?
  • Social Engagement - Is FamilySearch profiling the URL, Twitter, Facebook, LinkedIn, and Github profiles of developers, and then actively engaging via these active channels?

Establish a Developer Focused Blog For Storytelling

  • Projects - There are over 390 active projects on the FamilySearch platform, plus any number of active web, desktop, and mobile applications. All of this activity should be regularly profiled as part of platform evangelism. An editorial assembly line of technical projects that can feed blog stories, how-tos, samples, and Github code libraries should be taking place, establishing a large volume of exhaust via the FamilySearch platform.
  • Stories - FamilySearch is great at writing public and partner facing content, but there is a need for writing, editing, and posting stories derived from the technically focused projects, with SEO and API support by design.
  • Syndication - Syndicate the best of the content to Tumblr, Blogger, Medium, and other relevant blogging sites on a regular basis.

Mapping Out The Genealogy Landscape

  • Competition Monitoring - Evaluate the regular activity of competitors via their blogs, Twitter, Github, and beyond.
  • Alpha Players - Who are the vocal people in the genealogy space with active Twitter, blog, and Github accounts?
  • Top Apps - What are the top applications in the space, whether built on the FamilySearch platform or not, and what do they do?
  • Social - Map the social landscape for genealogy: who is who, and who should the platform be working with?
  • Keywords - Establish a list of keywords to use when searching for topics at search engines, Q&A sites, forums, social bookmarking, and social networks. (This should already be done by marketing folks.)
  • Cities & Regions - Target specific markets in cities that make sense for the evangelism strategy: what are the local tech meetups, organizations, schools, and other gatherings? Who are the tech ambassadors for FamilySearch in these spaces?

Adding To Feedback Loop From Forum Operations

  • Stories - Derive stories for the blog from forum activity, and the actual needs of developers.
  • FAQ Feed - Is this being updated regularly with fresh material?
  • Streams - Are there other streams giving the platform a heartbeat?

Being Social About Platform Code and Operations With Github

  • Setup Github Account - Set up a FamilySearch platform developer account, and bring the internal development team in under a team umbrella as part of it.
  • Github Relationships - Manage followers, forks, downloads, and other potential relationships via Github, which has grown beyond just code, and is social.
  • Github Repositories - Manage code sample Gists, official code libraries, and any samples, starter kits, or other code generated through projects.

Adding To The Feedback Loop From The Bigger FAQ Picture

  • Quora - Regular trolling of Quora and responding to relevant FamilySearch or industry related questions.
  • Stack Exchange - Regular trolling of Stack Exchange / Stack Overflow and responding to relevant FamilySearch or industry related questions.
  • FAQ - Add questions from the bigger FAQ picture to the local FamilySearch FAQ for referencing locally.

Leverage Social Engagement And Bring In Developers Too

  • Facebook - Consider setting up a new, API specific Facebook company page. Post all API evangelism activities, and manage friends.
  • Google Plus - Consider setting up a new, API specific Google+ company page. Post all API evangelism activities, and manage friends.
  • LinkedIn - Consider setting up a new, API specific LinkedIn profile page that will follow developers and other relevant users for engagement. Post all API evangelism activities.
  • Twitter - Consider setting up a new, API specific Twitter account. Tweet all API evangelism activity and relevant industry landscape activity, discover new followers, and engage with followers.

Sharing Bookmarks With the Social Space

  • Hacker News - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Hacker News, to keep a fair and balanced profile, as well as network and user engagement.
  • Product Hunt - Product Hunt is a place to share the latest tech creations, providing an excellent format for API providers to share details about their new API offerings.
  • Reddit - Social bookmarking of all relevant API evangelism activities as well as relevant industry landscape topics to Reddit, to keep a fair and balanced profile, as well as network and user engagement.

Communicate Where The Roadmap Is Going

  • Roadmap - Provide regular roadmap feedback based upon developer outreach and feedback.
  • Changelog - Make sure the change log always reflects the roadmap communication or there could be backlash.

Establish A Presence At Events

  • Conferences - What are the top conferences occurring that we can participate in or attend? Pay attention to calls for papers at relevant industry events.
  • Hackathons - What hackathons are coming up in 30, 90, 120 days? Which should be sponsored, attended, etc.?
  • Meetups - What are the best meetups in target cities? Are there different formats that would best meet our goals? Are there any sponsorship or speaking opportunities?
  • Family History Centers - Are there local opportunities for the platform to hold training, workshops and other events at Family History Centers?
  • Learning Centers - Are there local opportunities for the platform to hold training, workshops and other events at Learning Centers?

Measuring All Platform Efforts

  • Activity By Group - Summary and highlights from weekly activity within each area of the API evangelism strategy.
  • New Registrations - Historical and weekly accounting of new developer registrations across APIs.
  • Volume of Calls - Historical and weekly accounting of API calls per API.
  • Number of Apps - How many applications are there?

Essential Internal Evangelism Activities

  • Storytelling - Telling the story of an API isn’t just something you do externally; what stories need to be told internally to make sure an API initiative is successful?
  • Conversations - Incite internal conversations about the FamilySearch platform. Hold brown bag lunches if you need to, or internal hackathons, to get people involved.
  • Participation - It is very healthy to include other people from across the company in API operations. How can we include people from other teams in API evangelism efforts? Bring them to events and conferences, and potentially expose them to local, platform focused events.
  • Reporting - Sometimes providing regular numbers and reports to key players internally can help keep operations running smoothly. What reports can we produce? Make them meaningful.

All of this evangelism starts with a very external focus, which is a hallmark of API and developer evangelism efforts, but if you notice, by the end we bring it home to the most important aspect of platform evangelism: the internal outreach. The number one reason APIs fail is a lack of internal evangelism--educating top and mid-level management, as well as lower level staff, getting buy-in and direct hands-on involvement with the platform, and justifying the budget costs for the resources needed to make a platform successful.

Top-Down Change At FamilySearch

The change FamilySearch is looking for already has top level management buy-in; the problem is that the vision is not in lock step with actual platform operations. When regular projects developed via the FamilySearch platform are showcased to top level executives, and stories consistent with platform operations are told, management will echo what is actually happening via the FamilySearch platform. This will provide a much more ongoing, deeper message for the rest of the company, and for partners, around what the priorities of the platform are, making it not just a meaningless top down mandate.

An example of this in action is the recent mandate from President Obama that all federal agencies should go “machine readable by default”, which includes using APIs and open data outputs like JSON, instead of document formats like PDF. This top down mandate makes for a good PR soundbite, but in reality has little effect on the ground at federal agencies. It has taken two years of hard work on the ground, at each agency, between agencies, and with the public, to even begin to make this mandate a reality at over 20 of the federal government agencies.

Top down change is a piece of the overall platform evolution at FamilySearch, but only a piece. Without proper bottom-up and outside-in change, FamilySearch will never evolve beyond just being genealogical software as a service with an interesting API. It takes much more than leadership to make a platform.

Bottom-Up Change At FamilySearch

One of the most influential aspects of APIs I have seen at companies, institutions, and agencies is the change in culture brought about when APIs move beyond just a technical IT effort, and become about making resources available across an organization, enabling people to do their jobs better. Without awareness, buy-in, and in some cases evangelist conversion, a large organization will not be able to move from a service orientation to a platform way of thinking.

If a company as a whole is unaware of APIs, either within the company or organization, or out in the larger world of popular platforms like Twitter, Instagram, and others, it is extremely unlikely it will endorse, let alone participate in, moving from being a digital service to a platform. Employees need to see the benefits of a platform to their everyday job, and their involvement cannot require what they would perceive as extra work to accomplish platform related duties. FamilySearch employees need to see the benefits the platform brings to the overall mission, and play a role in making it happen—even if it originates from a top-down mandate.

Top bookseller Amazon was already on the path to being a platform with their set of commerce APIs when, after a top down mandate from CEO Jeff Bezos, Amazon internalized APIs in such a way that the entire company interacted and exchanged resources using web APIs, resulting in one of the most successful API platforms—Amazon Web Services (AWS). Bezos mandated that if an Amazon department needed to procure a resource from another department, like server or storage space from IT, it needed to happen via APIs. This wasn’t a meaningless top-down mandate; it made employees’ lives easier, and ultimately made the entire company more nimble and agile, while also saving time and money. Without buy-in and execution from Amazon employees, what we know as the cloud would never have occurred.

Change at large enterprises, organizations, institutions, and agencies can be expedited with the right top-down leadership, and with the right platform evangelism strategy--one that treats internal stakeholders not just as targets of outreach efforts, but includes them in operations--it can result in sweeping, transformational changes. This type of change at a single organization can affect how an entire industry operates, similar to what we’ve seen from the ultimate API platform pioneer, Amazon.

Outside-In Change At FamilySearch

The final layer of change that needs to occur to take FamilySearch from being just a service to a true platform is opening up the channels to outside influence, when it comes not just to platform operations, but organizational operations as well. The bar is high at FamilySearch. The quality of services, the expectations of the process, and adherence to the mission are strong, but if you are truly dedicated to providing a database of all mankind, you are going to have to let mankind in a little bit.

FamilySearch is still the keeper of knowledge, but to become a platform you have to let in the possibility that outside ideas, processes, and applications can bring value to the organization, as well as to the wider genealogical community. You have to evolve beyond the notion that the best ideas come from inside the organization, and just from the leading partners in the space. There are opportunities for innovation and transformation in the long-tail stream, but you have to have a platform set up to encourage, participate in, and identify value in the long-tail stream of an API platform.

Twitter is one of the best examples of how any platform will have to let in outside ideas, applications, companies, and individuals. Much of what we consider Twitter today was built in the platform ecosystem, from the iPhone and Android apps, to the desktop app TweetDeck, to terminology like the #hashtag. Over the last 5 years, Twitter has worked hard to find the optimal platform balance in how they educate, communicate, invest, acquire, and incentivize their platform ecosystem. Listening to outside ideas goes well beyond the fact that Twitter is a publicly available social platform; with such a large platform of API developers it is impossible to let in all ideas, but through a sophisticated evangelism strategy of in-person and online channels, in 2014 Twitter has managed to find a balance that is working well.

Having a public facing platform doesn’t mean the flood gates are open for ideas and thoughts to just flow in; this is where service composition, and the certification and partner framework for FamilySearch, will come in. Through clear, transparent partner tiers, and open and transparent operations and communications, an optimal flow of outside ideas, applications, companies, and individuals can be established—enabling a healthy, sustainable amount of change from the outside world.

Knowing All Of Your Platform Partners

The hallmark of any mature online platform is a well established partner ecosystem. If you’ve made the transition from service to platform, you’ve established a pretty robust approach to not just certifying and onboarding your partners; you’ve also stepped it up in knowing and understanding who they are and what their needs are, and in investing in them throughout the lifecycle.

First off, profile everyone who comes through the front door of the platform. If they sign up for a public API key, who are they, and where do they potentially fit into your overall strategy? Don’t be pushy, but understand who they are and what they might be looking for, and make sure you have a track for this type of user well defined.

Next, qualify and certify as you have been doing. Make sure the process is well documented, but also transparent, allowing companies and individuals to quickly understand what it will take to get certified, what the benefits are, and see examples of other partners who have achieved this status. As a developer building a genealogical mobile app, I need to know what I can expect, and have some incentive for investing in the certification process.

Keep your friends close, and your competition closer. Open the door wide for your competition to become platform users, and potentially partners. The 100+ year old technology company Johnson Controls (JCI) was concerned about what the competition might do if they opened up their building efficiency data resources to the public via the Panoptix API platform; after it launched, they realized their competitors were now their customers, and partners in this new approach to doing business online for JCI.

When the Department of Energy decides what data and other resources it makes available via Data.gov or the agency’s developer program, it has to deeply consider how this could affect U.S. industries. The resources the federal agency possesses can be pretty high value, and bring huge benefits to the private sector, but in some cases opening up APIs, or limiting access to APIs, might help or hurt the larger economy, as well as the Department of Energy developer ecosystem—there are lots of considerations when opening up API resources, and they vary from industry to industry.

There are no silver bullets when it comes to API design, deployment, management, and evangelism. It takes a lot of hard work, communication, and iterating before you strike the right balance of operations, and every business sector will be different. Without knowing who its platform users are, and being able to establish a clear and transparent road for them to follow to achieve partner status, FamilySearch will never elevate to a true platform. How can you scale the trusted layers of your platform if your partner framework isn’t well documented, open, transparent, and well executed? It just can’t be done.

Meaningful Monetization For Platform

All of this will take money to make happen. Designing and executing on the technical and evangelism aspects I’m laying out will cost a lot of money, and on the consumer side, it will take money to design, develop, and manage the desktop, web, and mobile applications built around the FamilySearch platform. How will both the FamilySearch platform, and its participants, make ends meet?

This conversation is a hard one for startups and established businesses, let alone for a non-profit, mission driven organization. Internal developers cost money; servers and bandwidth are getting cheaper but are still a significant platform cost; and sustaining sales, bizdev, and evangelism efforts will not be cheap either. It takes money to properly deliver resources via APIs, and even if the lowest tiers of access are free, at some point consumers are going to have to pay for access, resources, and advanced features.

The conversation around how you monetize API driven resources is going on across government, from cities up to the federal government, where the thought of charging for access to public data is unheard of: these are public assets, and they should be freely available. While this is true, think of the same situation when it comes to physical public assets that are owned by the government, like parks. You can freely enjoy many city, county, and federal parks, and there are sometimes small fees for usage, but if you want to actually sell something in a public park, you will need to buy permits, and often share revenue with the managing agency. We have to think critically about how we fund the publishing and refinement of publicly owned digital assets; as with physical assets, there will be much debate in the coming years around what is acceptable, and what is not.

Woven into the tiers of partner access, there should always be provisions for applying costs and overhead, and even generating a little revenue to be applied in other ways. With great power comes great responsibility, and along with great access for FamilySearch partners, many will also be required to cover the costs of compute capacity, storage, and the other hard facts of delivering a scalable platform around any valuable digital assets, whether they are privately or publicly held.

Platform monetization doesn’t end with covering the costs of platform operation. Consumers of FamilySearch APIs will need assistance in identifying the best ways to cover their own costs as well. Running a successful desktop, web, or mobile application takes discipline, structure, and the ability to manage overhead costs, while also being able to generate some revenue through a clear business model. As a platform, FamilySearch will have to bring to the table some monetization opportunities for consumers, providing guidance as part of the certification process regarding best practices for monetization, and even some direct opportunities for advertising, in-app purchases, and other common approaches to application monetization and sustainment.

Without revenue greasing the gears, no service can achieve platform status. As with all other aspects of platform operations, the conversation around monetization cannot be one-sided, and just about the needs of the platform provider. Proactive steps need to be taken to ensure both the platform provider and its consumers are monetized in the healthiest way possible, bringing as much benefit to the overall platform community as possible.

Open & Transparent Operations & Communications

How does all of this talk of platform and evangelism actually happen? It takes a whole lot of open, transparent communication across the board. Right now the only active part of the platform is the FamilySearch Developer Google Group; beyond that you don’t see any activity that is platform specific. There are active Twitter, Facebook, Google+, and mainstream and affiliate focused blogs, but nothing that serves the platform, or contributes to the feedback loop that will be necessary to take the service to the next level.

On a public platform, communications cannot all be private emails, phone calls, or face to face meetings. One of the things that allows an online service to expand into a platform, then scale and grow into a robust, vibrant, and active community, is a stream of public communications, including blogs, forums, social streams, images, and video content. These communication channels cannot all be one way; they need to include forum and social conversations, as well as showcase platform activity by API consumers.

Platform communication isn’t just about getting direct messages answered; it is about public conversation, so everyone shares in the answer, and public storytelling to help guide and lead the platform. Together with support via multiple channels, this establishes a feedback loop that, when done right, will keep growing, expanding, and driving healthy growth. The transparent nature of platform feedback loops is essential to providing everything consumers will need, while also bringing a fresh flow of ideas and insight within the FamilySearch firewall.

Truly Shifting The FamilySearch Culture

Top-down, bottom-up, outside-in, with a constant flow of oxygen via a vibrant, flowing feedback loop, and the nourishing, sanitizing sunlight of platform transparency--week by week, month by month, change can occur. It won’t all be good; there are plenty of problems that arise in ecosystem operations, but all of this has the potential to slowly shift culture when done right.

One thing that shows me the team over at FamilySearch has what it takes is that when I asked if I could write this up as a story, rather than just a proposal I emailed them, they said yes. This is a true test of whether or not an organization might have what it takes. If you are unwilling to be transparent about the problems you currently have, and the work that goes into your strategy, it is unlikely you will have what it takes to establish the amount of transparency required for a platform to be successful.

When internal staff, large external partners, and the long tail of genealogical app developers and enthusiasts are in sync via a FamilySearch platform driven ecosystem, I think we can consider that a shift to platform has occurred for FamilySearch. The real question is: how do we get there?

Executing On Evangelism

This is not a definitive proposal for executing on an API evangelism strategy, merely a blueprint for the seed that can be used to start a slow, seismic shift in how FamilySearch engages its API area, in a way that will slowly evolve it into a community--one that includes internal, partner, and public developers--and some day, with the right set of circumstances, FamilySearch could grow into a robust, social, genealogical ecosystem where everyone comes to access, and participate in, the mapping of mankind.

  • Defining Current Platform - Where are we now? In detail.
  • Mapping the Landscape - What does the world of genealogy look like?
  • Identifying Projects - What are the existing projects being developed via the platform?
  • Define an API Evangelist Strategy - Actually fleshing out a detailed strategy.
    • Projects
    • Storytelling
    • Syndication
    • Social
    • Channels
      • External Public
      • External Partner
      • Internal Stakeholder
      • Internal Company-Wide
  • Identify Resources - What resources currently exist? What are needed?
    • Evangelist
    • Content / Storytelling
    • Development
  • Execute - What does execution of an API evangelist strategy look like?
  • Iterate - What does iteration look like for an API evangelism strategy?
    • Weekly
    • Review
    • Repeat

As with many providers, you don’t want this to take 5 years, so how do you take a 3-5 year cycle, and execute in 12-18 months?

  • Invest In Evangelist Resources - It takes a team of evangelists to build a platform
    • External Facing
    • Partner Facing
    • Internal Facing
  • Development Resources - We need to step up the number of resources available for platform integration.
    • Code Samples & SDKs
    • Embeddable Tools
  • Content Resources - A steady stream of content should be flowing out of the platform, and syndicated everywhere.
    • Short Form (Blog)
    • Long Form (White Paper & Case Study)
  • Event Budget - FamilySearch needs to be everywhere, so people know that it exists. It can’t just be online.
    • Meetups
    • Hackathons
    • Conferences

There is nothing easy about this. It takes time, and resources, and there are only so many elements you can automate when it comes to API evangelism. For something that is very programmatic, it takes more of the human variable to make the API driven platform algorithm work. With that said, it is possible to scale some aspects, and increase the awareness, presence, and effectiveness of FamilySearch platform efforts, which is really what is currently missing.

While, as the API Evangelist, I cannot personally execute on every aspect of an API evangelism strategy for FamilySearch, I can provide essential planning expertise for the overall FamilySearch API strategy, provide regular check-ins with the team on how things are going, and help plan the roadmap. The two things I can bring to the table, reflected in this proposal, are an understanding of where the FamilySearch API effort currently is, and of what is missing to help get FamilySearch to the next stage of its platform evolution.

When operating within the corporate or organizational silo, it can be very easy to lose sight of how other organizations and companies are approaching their API strategy, and miss important pieces of how you need to shift your own. This is one of the biggest inhibitors of API efforts at large organizations, and one of the biggest imperatives for companies to invest in their API strategy, and begin the process of breaking operations out of their silo.

What FamilySearch is facing demonstrates that APIs are much more than the technical endpoint that most believe them to be, and that it takes many other business and political building blocks to truly go from API to platform.

See The Full Blog Post


New Heroku Dashboard and Metrics now in Beta


At Heroku, we’re focused on delivering thoughtfully designed systems to improve developer productivity and experience. We firmly believe that improving the development and operations experience helps developers to build and run better apps. This improvement allows developers to focus more on functionality, and businesses to focus more on the value of their applications. Today we are pleased to announce two new features, both in public beta, that support this mission: a new Heroku Dashboard and Heroku Metrics. These new systems bring developers powerful new clarity and simplicity around application management, execution, and optimization.

  • New Heroku Dashboard: Managing applications, organizations, and accounts via the web is now easier than ever.
  • Heroku Metrics: Monitoring production applications and understanding the relationships between runtime characteristics is now built into Heroku.

Both are live for you to begin using now.

New Heroku Dashboard

The management of applications, organizations, and accounts is a part of the development and scaling lifecycle. To make these actions more graceful and intuitive, we’ve developed an entirely new Dashboard for your use of Heroku. Keeping things clear, fast, intuitive, and accessible is important to us, and we wanted the new Dashboard to embody those attributes. It had to be as responsive and reliable as a native application. To achieve this, we redesigned it from scratch and rebuilt it using Ember.js, creating a modern interface with ambitious layouts and interaction patterns.

Heroku should work with you, wherever you go. Whether you’re using Dashboard on your largest desktop display or the smallest of laptop screens, you’ll find the new layout puts the controls and data at your fingertips, and makes app- and org-switching speedy.

The new Dashboard also gives you the ability to explore changes to your application in a safer manner. We converged on a view-first pattern in order to shield you from accidental changes with potential operational impact, like scaling or resource deletion. With this new interaction mode, you explicitly toggle into edit-mode before you can make modifications to your apps.

We took the opportunity presented by this functional redesign to develop a new visual design, as well. We crafted a more friendly and approachable style, to lighten the load of web application development. The result is a lighter interface that favors content over chrome.

To get started with the new Dashboard, head to dashboard-next.heroku.com. During the beta, you may need to visit the old Dashboard to complete some administrative tasks; you can switch between the two at any time. When the new Dashboard is GA later this year all of the old functionality will be ported over.

Heroku Metrics

Managing applications through development and scaling often requires a deeper understanding of performance and resource utilization. A seamless part of the new Dashboard, our new Heroku Metrics system is designed to give you just that understanding, making it simple for you to analyze and optimize performance of your applications on Heroku. In order to provide clarity and general visibility into your applications’ performance and behavior, the new Metrics system provides a unified view of the data most relevant to tuning and scaling your application. Now you have direct visibility of these performance characteristics:

  • Throughput: One of the most important characteristics to measure on a web process is its throughput. Heroku Metrics provides requests per minute segmented by HTTP status codes (OK vs. Failed) per time period.
  • Response Time: The response time of your application is also a key measure of system health and quality of service. The Metrics dashboard provides both the median and 95th percentile response times per time period.
  • Errors: Platform error codes can provide excellent insight into issues with your application, so the Metrics system interpolates them with the rest of your time series data for better understanding of causality.
  • Memory: Visibility into the memory utilization of your application is useful when assessing capacity, finding memory leaks, or identifying performance degradation due to swap.
  • CPU Load: CPU load is another critical component of performance monitoring, especially for applications with processing intensive workloads, or high levels of in-process parallelism.

By unifying the display of this data along a consistent time axis, and by representing the data in the manner most meaningful for each metric, the framework provides more visibility into the interaction of these parameters and the relationships between them. This should offer a more intuitive way to understand and tune the overall performance of your app.

As we gather more data and feedback, we will be introducing guidance around performance optimization, including recommendations on horizontal or vertical scaling of applications. This guidance will be based on heuristics we are currently modeling from the range of applications on the platform. We will be rolling these out soon, and improving them as we go.

In order to get started with Heroku Metrics, visit dashboard-next.heroku.com and select the Metrics tab for one of your applications. The Metrics feature is enabled only for applications with more than one running dyno, where performance tuning is likely to be more of a priority. Existing applications should have some period of metrics already available in the system, while newly created applications or applications with recently added process types should see data available in the Metrics view within about an hour of the update. For details on the metrics displayed, see the documentation in Dev Center.

Feedback and Availability

We hope you enjoy the features we are making available today, and encourage any feedback you have on them as we work towards General Availability later this fall.
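The median and 95th percentile response times Heroku describes are worth understanding on their own. Here is a minimal sketch, not from the Heroku post, of how those two numbers are typically computed over a window of response time samples (the values below are hypothetical):

```python
# A rough sketch (not from the Heroku post) of how median and 95th
# percentile response times can be computed for a window of samples.
from statistics import median

def p95(samples):
    """Return the 95th percentile (nearest-rank) of response times in ms."""
    ordered = sorted(samples)
    rank = max(1, round(0.95 * len(ordered)))  # nearest-rank position, 1-based
    return ordered[rank - 1]

# hypothetical response times, in milliseconds, for one time period
window = [112, 98, 145, 2210, 130, 101, 99, 187, 140, 122]

print("median:", median(window), "ms")  # the typical request
print("p95:", p95(window), "ms")        # what your slowest 5% experience
```

The nearest-rank method used here is just the simplest of several ways to calculate a percentile; monitoring systems differ in the details.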

URL: http://feedproxy.google.com/~r/heroku/~3/-CQT_1ZkU5U/new-dashboard-and-metrics-beta

See The Full Blog Post


The New StrongLoop API Server Provides A Look At Future Of API Deployment

I’m looking through the most recent API server release from StrongLoop, and I can’t help but see echoes of what I’ve been researching, and covering across the API Evangelist network. API management has been front and center for years, but API deployment is something that is just now being productized, with a wealth of new service providers emerging to deliver API deployment solutions that go beyond DIY frameworks, and enterprise API gateways.

Let’s start with walking through their announcement of their StrongLoop API Server:

  • LoopBack 2.0 - An open source framework for quickly creating APIs with Node, including the client SDKs.
  • Mobile Backend-as-a-Service - An mBaaS to provide mobile services like push, offline-sync, geopoint and social login, either on-premise or in the cloud.
  • Connectors - Connectivity for Node apps leveraging over ten supported data sources including Oracle, SQL Server, MongoDB and SOAP.
  • Controller - Automated DevOps for Node apps including profiling, clustering, process management and log management capabilities.
  • Monitoring - A hosted or on-premise graphical console for monitoring resource utilization, response times and function tracing with the ability to send metrics to existing monitoring tools.

Just as StrongLoop did in their release post, let’s dive deeper into LoopBack 2.0, the open source core of StrongLoop, which they say "acts as a glue between apps or devices and data via APIs written in Node”:

  • Studio - A graphical interface to complement the command-line tooling and assist developers in building Loopback models.
  • Yeoman and Grunt - The ability to script tasks, scaffold, and template applications and externalize their configurations for multiple environments.
  • ExpressJS 4.0 - The latest update to the well known Node.js package, bringing improvements by removing bundled middleware and refactoring them into maintainable modules, revamping the router to remove confusion on HTTP verb usage, and decoupling Connect, the HTTP framework of Node, from the Express web framework. It is also the E in the MEAN stack (MongoDB, ExpressJS, AngularJS, Node.js).
  • Project Structure - The directory structure has been expanded to make it easier to organize apps and add functionality via pre-built LoopBack components and Node modules.
  • Workspace API - An internal API making it easier to define, configure, and bootstrap your application at design time and runtime, by simply defining metadata in the form of JSON.

This is one of the few sophisticated, next generation, API deployment frameworks I have seen. We have had gateways for a while, and we have a new breed of database and spreadsheet to API providers like APISpark. We also have a new wave of scraping to API solutions from Kimono Labs and Import.io, but I’d say Orchestrate.io gets us closest to the vision I have for StrongLoop, when it comes to API deployment.

I’ve referenced this ability in my stories on virtual API stacks:

This new approach to API deployment allows us to rapidly define, deploy, and orchestrate stacks of API resources for use in our web, single page, and mobile applications. I really feel like BaaS, as an entire industry, was just a short growth phase, leading us to this point, where anyone can quickly deploy their own BaaS, for any broad, or niche purpose. I also see my research into the world of APIs and Single Page Apps (SPAs) reflected here, in StrongLoop’s API platform vision.

I feel that StrongLoop has an important take on API deployment, one that reflects where leading API, web, single page, and mobile app developers have been for a while now. The difference is that StrongLoop is providing it as a standardized platform, allowing developers to much more elegantly orchestrate their entire lifecycle. You have everything you need to connect to existing resources, generate new API resources, and organize work into reusable parts, to deliver the web, single page, and mobile apps you need.

I am closely watching this new generation of API deployment providers, companies like StrongLoop, Orchestrate, Flynn, and Cosmic. I see these players being the next generation API gateway, that goes way beyond just providing an enterprise gateway to internal assets. This newer vision is much more directly aligned with the needs of developers, enabling them to rapidly design, deploy and manage the API services they need to drive the web, single page, and mobile apps that are the vehicles in the API economy.

See The Full Blog Post


Looking At 77 Federal Government API Developer Portals And 190 APIs

I spent most of the day yesterday, looking through 77 of the developer portals listed on the 18F Github portal. While I wanted to evaluate the quality and approach of each of the agencies, my goal for this review cycle was to look for any APIs that already had machine readable API definitions, or would be low hanging fruit for the creation of Swagger definitions, as part of my wider API discovery work.

I had just finished updating all my API Evangelist Network APIs to use version 0.14 of APIs.json, and while I wait for the search engine APIs.io to update to support the new version, I wanted to see if I could start the hard work of applying API discovery to federal government APIs.

Ideally all federal agencies would publish APIs.json on their own, placing it within the root of their domain, like they do with data.json, and provide an index of all of their APIs. Being all too familiar with how this stuff works, I know that if I want this to happen, I will have to generate APIs.json for many federal agencies first. However, for the APIs.json to have its intended impact, I need many of the APIs to have machine readable API definitions that I can point to--which equals more work for me! yay? ;-(
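To get a sense of who already publishes these files, a quick probe of agency domains is enough. Here is a rough sketch of the kind of check I mean, assuming the requests package, with a hypothetical handful of domains standing in for the full list:

```python
# A hypothetical sketch of checking which agencies publish apis.json at
# the root of their domain, the way they already publish data.json.
# Assumes the requests package; swap in the full list of agency domains.
import requests

agencies = ["www.energy.gov", "www.dol.gov", "www.epa.gov"]  # hypothetical sample

for domain in agencies:
    for filename in ("data.json", "apis.json"):
        url = f"https://{domain}/{filename}"
        try:
            response = requests.head(url, timeout=5, allow_redirects=True)
            print(url, "->", response.status_code)
        except requests.RequestException as error:
            print(url, "-> failed:", error)
```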

My thinking is that I will look through all of the 77 developer areas, and the resulting APIs, looking for the low hanging fruit. Basically, I would grade each API on its viability to be included in my federal government API discovery work. I spent a minimal amount of time looking at each API, and in some cases looking for the API, before giving up. I would inspect the supporting developer area, and the actual interface, for complexity, helping me understand how hard it would be to hand craft a Swagger spec, and APIs.json, for each agency and their APIs.

(warning: contains my raw, un-edited notes from doing this research, not suitable for children)

As I went through, I wrote a couple of notes:

  • National Climate Data Center is nice looking, and is a high profile--they should have a kick ass API!
  • Inversely NOAA Climate Data Online very simple, clean and well done.
  • Department of Education is acceptable as dev area, only because of Socrata
  • National Renewable Energy Laboratory just gets it. They just get it.
  • Some of these are billed as developer areas, but really just a single API. It is a start I guess. 
  • I love me some Department of Labor. Their developer area and API is freak'n cool!
  • MyUSA citizen API has oAuth!!! WTF. How did I not notice this before? Another story, and work here.
  • MyUSA has really good, simple, high value, API resources. 
  • NASA ExoAPI not just API cool, but space cool!!
  • FOIA needs a fucking API. No excuses. FOIA needs an API!
  • Some APIs it might be better to go back to data source and recreate API from scratch, they are so bad.

I wanted to share some of my notes, before the long list of developer areas, and their APIs. There are some specific notes for each API, but much of it is very general, helping grade each API, so I can go back through the list of B grade or higher APIs, and figure out which are candidates for creating a Swagger API definition and APIs.json, and ultimately adding to APIs.io.

For this work I went down the 77 federal agency links, which were billed as developer areas, but many were single APIs. So when a developer area resulted in multiple APIs, I grouped them together, and I will also group together many of the agencies who have just a single API, including my commentary as necessary. I'm leaving the URLs visible to help as a reference, show the craziness of some of them, and because it would have been sooooo much work to link all of them.

Let's start with the White House(http://www.whitehouse.gov/developers):

Next up is the USDA (http://www.usda.gov/wps/portal/usda/usdahome?navid=USDA_DEVELOPER), which is a hodgepodge of services, with no consistency whatsoever between services in interface, supporting content, or anything.

Overall I give the USDA a D on all their APIs. A couple might be high value sources, worth going after, but definitely not low hanging fruit for me. It would be easier to tackle them as an independent project for generating brand new APIs.

Next up is the Department of Commerce (http://www.commerce.gov/developer), which definitely has some higher value resources, as well as some healthy API initiatives.

Next is the Department of Defense (http://www.defense.gov/developer/). There are 8 things billed as APIs, with a variety of datasets, API like things, and web services available. Not really sure what's up? (D)

We have one by itself here:

Then the Department of Education, who is just riding their data.gov work as an API area:

  • Department of Education - http://www.ed.gov/developers - Lots of high value datasets. Says API, but is JSON file. Wouldn’t be hard to generate APIs for it all and make machine readable definitions. (B)

Next some energy related efforts:

  • Department of Energy - http://www.energy.gov/developers - Lots of datasets. Pretty presentation. Could use some simple APIs. Wouldn’t be much to pick high value datasets and create a cache of them. (C)
  • Energy Information Administration - http://www.eia.gov/developer/ - Nice web API, simple clean presentation. Needs interactive docs. (B)
  • National Renewable Energy Laboratory - http://developer.nrel.gov/ - Close to a modern Developer area with web APIs. Uses standardized access (umbrella). Some of them have Swagger specs, the rest would be easy to create. (A)
  • Office of Scientific and Technical Information - http://www.osti.gov/XMLServices - Interfaces are pretty well designed, and Swagger specs would be straightforward. But docs are all PDF currently. (B)

Moving on to the Department of Health and Human Services (http://www.hhs.gov/developer), where all of their APIs are somewhat consistent, and provide simple resources:

The Food and Drug Administration (http://open.fda.gov) is one of the agencies that is definitely getting on board with APIs. They have some pretty nice implementations, but there are some not so nice ones that need a lot of work:

Next up is the Department of Homeland Security (http://www.dhs.gov/developer), where they have three APIs (it's a start):

Then we have two agencies that have pretty simple API operations, so I'll group together:

Then we have several API developer efforts under the Department of Interior (http://www.doi.gov/developer):

Now we have some APIs coming out of the law enforcement side of government, starting with the Department of Justice (http://www.justice.gov/developer):

Now we get to one of my favorite efforts in the federal government:

  • Department of Labor - http://developer.dol.gov/ - I love their developer area. They have a great API, easy to generate API definitions. (A)
  • Bureau of Labor Statistics - http://www.bls.gov/developers/ - Web APIs in there. Complex, and lots of work, but can be done. API Definitions Needed. (B)

Next we have the API efforts from Department of State (http://www.state.gov/developer):

Moving on to the Department of Transportation (http://www.dot.gov/developer):

Now let's head over to the Department of the Treasury (http://www.treasury.gov/developer):

The Department of Veterans Affairs (http://www.va.gov/developer) has some hope, because of the work I did in the fall.

Moving on to another one of my favorite agencies, well, quasi government agencies:

One agency that appears to be on the radar, but I really can't tell what is going on API wise:

  • Environmental Protection Agency - http://www.epa.gov/developer/ - There is a really nice layout to the area, with seemingly a lot of APIs, and they look like web APIs, but it looks like one API being represented as a bunch of methods? Would be too much work, but still hard to figure out WTF. (C)

The Federal Communications Commission (http://www.fcc.gov/developers) has a lot of APIs going on, in various states of operation:

All by itself on the list, we have one lonely bank:

  • Federal Reserve Bank of St. Louis - http://api.stlouisfed.org/ - Good API and area, would be easy to generate API definitions. (B)

The General Services Administration (http://www.gsa.gov/developers/) definitely is ahead of the game when it comes to API design and deployment:

Once I reached the National Aeronautics and Space Administration http://open.nasa.gov/developer I found some really, really cool APIs:

Grouping a couple loose agencies together:

The Recovery Accountability and Transparency Board (http://www.recovery.gov/arra/FAQ/Developer/Pages/default.aspx) has some APIs to look at: 

The Small Business Administration (http://www.sba.gov/about-sba/sba_performance/sba_data_store/web_service_api) has some nice APIs that are consistent and well presented:

Lastly, we have a couple of loose agencies to look at (or not):

Ok that was it. I know there are more APIs to look at, but this is derived from the master list of federal government developer portals, and it gives me what I need. Although I'm a little disappointed I have fewer than 5 Swagger definitions, after looking at about 190 APIs.

After looking at all of this, I'm overwhelmed. How do you solve API discovery for a mess like this? Holy shit!

I really need my fucking head examined for taking this shit on. I really am my own worst enemy, but I love it. I am obsessed with contributing to a solution for API discovery that will work in private sector, as well as a public sector mess like this. So where the hell do I start?  More to come on that soon...

See The Full Blog Post


Contributing To The Testing & Monitoring Lifecycle

When it comes to testing and monitoring an API, you begin to really see how machine readable API definitions can be the truth in the contract between API provider and consumer. API definitions are being used by API testing and monitoring services like SmartBear, providing a central set of rules that can ensure your APIs deliver as promised.

Just like generating up to date documentation, a definition lets you make sure all your APIs operate as expected, ensuring the entire surface area of your API is tested, and operating as intended. Test driven development (TDD) is becoming common practice for API development, and API definitions will play an increasing role in this side of API operations.

An API definition provides a central truth, that can be used by API providers to monitor API operations, but also give the same set of rules to external API monitoring services, as well as individual API consumers. Monitoring, and understanding an API up time, from multiple external sources is becoming a part of how the API economy is stabilizing itself, and API definitions provide a portable template, that can be used across all API monitoring services.

Testing and monitoring of the vital resources that applications depend on is becoming the norm, with new service providers emerging to assist in this area, and large technology companies like Google making testing and monitoring default in all platform operations. Without a set of instructions that describe the API surface area, it will be cumbersome, and costly, to generate the automated testing and monitoring jobs necessary to produce a stable API economy.
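To make this concrete, here is a minimal sketch, assuming a Swagger 2.0 style definition in a local swagger.json file, of how a machine readable surface area can drive basic uptime and response time checks--the same sort of job an external monitoring service would run on a schedule:

```python
# A minimal sketch, assuming a Swagger 2.0 style definition in a local
# swagger.json file, of letting the definition drive basic monitoring:
# hit every GET endpoint and record status and response time.
import json
import time

import requests

with open("swagger.json") as handle:
    definition = json.load(handle)

base = "https://" + definition["host"] + definition.get("basePath", "")

for path, operations in definition.get("paths", {}).items():
    if "get" not in operations:  # keep the sketch to simple GET checks
        continue
    url = base + path
    started = time.time()
    try:
        response = requests.get(url, timeout=10)
        elapsed_ms = int((time.time() - started) * 1000)
        print(f"{url} -> {response.status_code} in {elapsed_ms}ms")
    except requests.RequestException as error:
        print(f"{url} -> DOWN ({error})")
```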

See The Full Blog Post


Monitoring Your Resources Becomes Default With Google Developer Console

I’m not at Google I/O this week, enjoying some downtime in SoCal, but I am watching some of the news coming out of the event. One thing I noticed, was the addition of monitoring to the Google Developer Console, where Google is slowly working their StackDriver acquisition into their fast growing API driven, cloud development platform.

You have to request access to the new monitoring services, and Google will open up "Stackdriver's monitoring capabilities available to select Google Cloud trusted testers at no charge”. While Stackdriver is about monitoring your cloud infrastructure, it also provides granular, API endpoint level monitoring solutions that you can use to monitor the health of the API resources you depend on in your apps.

When Google makes a move to support a specific feature across its world of APIs, I take notice. Much like when Google adopted OAuth 2.0 across all APIs, I think their move to provide monitoring services is a sign that monitoring of the resources you depend on is going mainstream--something all web, mobile and IoT API resource providers need to have in place by default in 2014.

See The Full Blog Post


If I Could Design My Perfect API Design Editor

I’ve been thinking a lot about API design lately, the services and tooling coming from Apiary, RAML and Swagger, and wanted to explore some thoughts around what I would consider to be killer features for the killer API design editor. Some of these thoughts are derived from the features I’ve seen in Apiary and RAML editor, and most recently the Swagger Editor, but I’d like to *riff* on a little bit and play with what could be the next generation of features.

While exploring my dream API design editor, I’d like to walk through each group of features, organized around my intentions and objectives for my API designs.

Getting Started
When kicking off the API design process, I want to be able to jumpstart the API design lifecycle from multiple sources. There will be many times that I want to start from a clean slate, but many times I will be working from existing patterns.

  • Blank Canvas - I want to start with a blank canvas, no patterns to follow today, I’m painting my masterpiece. 
  • Import Existing File - I have a loose API design file laying around, and I want to be able to open, import and get to work with it, in any of the formats. 
  • Fork From Gallery - I want to fork one of my existing API designs, that I have stored in my API design gallery (which I will outline below). 
  • Import From API Commons - Select an existing API design pattern from API Commons and import into editor, and API design gallery.

My goals in getting started with API design will be centered around re-using the best patterns across the API space, as well as from my own individual or company API design gallery. We are already mimicking much of this behavior, we just don’t have a central API design editor for managing these flows.

Editing My API Design
Now we get to the meat of the post, the editor. I have several things in mind when I’m actually editing a single API definition, functions I want, actions I want to take around my API design. These are just a handful of the editor specific features I’d love to see in my perfect API design editor.

  • Multi-Lingual - I want my editor to work with API definitions in API Blueprint, RAML and Swagger. I prefer to edit my API designs in JSON, but I know many people I work with will prefer markdown or YAML, and my editor needs to support fluid editing between all popular formats (see the sketch after this list). 
  • Internationalization - How will I deal with making my API resources available to developers around the world? Beyond API definition languages, how do I actually make my interfaces accessible, and understood by consumers around the globe?
  • Dictionary - I will outline my thoughts around a central dictionary below, but I want my editor to pull from a common dictionary, providing a standardized language that I work from, as well as my company when designing interfaces, data models, etc. 
  • Annotation - I want to be able to annotate various aspects of my API designs and have associated notes, conversation around these elements of my design. 
  • Highlight - Built in highlighting would be good to support annotations, but also just to reference various layers of my API designs during conversations with others, or even to allow the reverse engineering of my designs, complete with the notes, and layers of the onion, for others to follow. 
  • Source View - A view of my API design that allows me to see the underlying markdown, YAML, or JSON and directly edit the underlying API definition language. 
  • GUI View - A visual view of my API design, allowing for adding, editing and removing elements in an easy GUI interface, no source view necessary for designing APIs. 
  • Interactive View - A rendered visual view of my API, allowing me to play with either my live API or generated mock API, through interactive documentation within my editors. 
  • Save To Gallery - When I’m done working with my API designs, all roads lead to saving it to my gallery, once saved to my working space I can decide to take other actions. 
  • Suggestions - I want my editor to suggest the best patterns available to me from private and public sources. I shouldn't ever design my APIs in the dark.

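On the multi-lingual point above, round-tripping a definition between formats is already mostly a solved problem. Here is a small sketch, assuming the PyYAML package and a hypothetical api-definition.json file, of converting one API definition between JSON and YAML:

```python
# A small sketch of fluid editing between formats, assuming the PyYAML
# package and a hypothetical api-definition.json file on disk.
import json

import yaml

with open("api-definition.json") as handle:
    definition = json.load(handle)

# write the same definition out as YAML, for those who prefer editing it
with open("api-definition.yaml", "w") as handle:
    yaml.safe_dump(definition, handle, default_flow_style=False, sort_keys=False)

# read it back, confirming both formats carry the identical structure
with open("api-definition.yaml") as handle:
    assert yaml.safe_load(handle) == definition
```
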
The API design editor should work like most IDEs we see today, but keep it simple, and reflect extensibility like GitHub’s Atom editor. My editor should give me full control over my API designs, and enable me to take action in many pre-defined or custom ways one could imagine.

Taking Action
My API designs represent the truth of my API, at any point within its lifecycle, from initial conception to deprecation. In my perfect editor I should be able to take meaningful actions around my API designs. For the purposes of this story I’m going to group actions into some meaningful buckets, that reflect the expanding areas of the API lifecycle. You will notice the four areas below, reflect the primary areas I track on via API Evangelist.

Design Actions
Early on in my API lifecycle, while I’m crafting new designs, I will need to take action around my designs. Designs actions will help me iterate on designs before I reach expensive deployment and management phases.

  • Mock Interface - With each of my API designs I will need to generate mock interfaces that I can use to play with what my API will deliver. I will also need to share this URL with other stakeholders, so that they can play with, and provide feedback on my API interface. 
  • Copy / Paste - API designs will evolve and branch out into other areas. I need to be able to copy / paste or fork my API designs, and my editor, and API gallery should keep track of these iterations so I don’t have to. The API space essentially copy and pastes common patterns, we just don’t have a formal way of doing it currently. 
  • Email Share - I want to easily share my API designs via email with other key stakeholders that will be part of the API lifecycle. Ideally I wouldn’t be emailing around the designs themselves, just pointers to the designs and tools for interacting within the lifecycle. 
  • Social Share - Sometimes the API design process all occur over common social networks, and in some cases be very public. I want to be able to easily share all my API designs via my most used social networks like Github, Twitter and LinkedIn. 
  • Collaboration - API design should not be done in isolation; it should be a collaborative process with all key stakeholders. I would like to even have Etherpad style real-time interactions around the design process with other users.

API design actions are the first stop in the expanding API design lifecycle, being able to easily generate mocks, share my interfaces, and collaborate with other stakeholders. Allowing me to quickly, seamlessly take action throughout the early design cycles will save me money, time and resources early on—something that only becomes more costly and restrictive later on in the lifecycle.
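As a sketch of the mock interface action above, here is a deliberately naive mock server, assuming a Swagger 2.0 style definition in a local swagger.json file. It serves a canned JSON response for every literal path in the definition--templated paths like /auth/{id}/ would need real routing, this is just enough to play with:

```python
# A deliberately naive mock server sketch, assuming a Swagger 2.0 style
# definition in a local swagger.json file. Every literal path in the
# definition returns a canned JSON body.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

with open("swagger.json") as handle:
    paths = set(json.load(handle).get("paths", {}))

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in paths:
            body = json.dumps({"mock": True, "path": self.path}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "no such path in the API definition")

HTTPServer(("localhost", 8080), MockHandler).serve_forever()
```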

Deployment Actions
Next station in the API design lifecycle, is being able to deploy APIs from my designs. Each of the existing API definition formats provide API deployment solutions, and with the evolution in cloud computing, we are seeing even more complete, modular ways to take action around your API designs.

  • Server - With each of my API designs, I should be able to generate server side code in the languages that I use most. I should be able to register specific frameworks, languages, and other defining aspects of my API server code, then generate the code and make available for download, or publish using Github and FTP. 
  • Container - Cloud computing has matured, producing a new way of deploying very modular architectural resources, giving rise to a new cloud movement, being called containers. Container virtualization will do for APIs what APIs have done for companies in the last 14 years. Containers provide a very defined, self-contained way of deploying APIs from API design blueprints, ushering in a new way of deploying API resources in coming years.

I need help to deploy my APIs, and with container solutions like Docker, I should have predefined packages I can configure with my API designs, and deploy using popular container solutions from Google, Amazon, or other cloud providers.

Management Actions
After I deploy an API I will need to use my API definitions as a guide for an increasing number of areas of my management process, not just the technical, but the business and politics of my API operations.

  • Documentation - Generating interactive API documentation is what kicked off the popularity of API design, and the importance of API definitions. Swagger provided the Swagger UI, an interactive, hands-on way of learning what an API offered, but this wasn’t the only motivation—providing up to date documentation added just the incentive API providers needed to generate machine readable documentation.
  • Code - Second to API documentation, providing code samples, libraries, and SDKs is one of the best ways you can eliminate friction when onboarding new API users. API definitions provide a machine readable set of instructions for generating the code that is necessary throughout the API management portion of the API lifecycle. 
  • Embeddable - JavaScript provides a very meaningful way to demonstrate the value of APIs, and embeddable JavaScript should always be part of the API lifecycle. Machine readable API definitions can easily generate visualizations that can be used in documentation, and other aspects of the API lifecycle.

I predict that, with the increased adoption of machine readable API formats like API Blueprint, RAML and Swagger, we will see more layers of the API management process expanded on, further automating how we manage APIs.

Discovery Actions
Having your APIs found, and being able to find the right API design for integration, are two sides of an essential coin in the API lifecycle. We are just now beginning to get a handle on what is needed when it comes to API discovery.

  • APIs.json - I should be able to organize API designs into groupings, and publish an APIs.json file for these groups. API designs should be able to be organized in multiple groups, organized by domain and sub-domain. 
  • API Commons - Thanks to Oracle, the copyright of API definitions will be part of the API lifecycle. I want the ability to manage and publish all of my designs to the API Commons, or any other commons for the sharing of API designs.

The discovery of APIs has long been a problem, and is just now reaching the critical point where we have to start developing solutions for not just finding APIs, but also understanding what they offer, and the details of the interface, so we can make sense of not just the technical, but the business and political decisions around API driven resources.
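Here is a rough sketch of what publishing one of those APIs.json groupings might look like. The field names follow my reading of the APIs.json format, and the entry itself is hypothetical, so treat this as illustrative rather than authoritative:

```python
# A rough, illustrative sketch of generating an APIs.json index for a
# group of API designs. Field names follow my reading of the APIs.json
# format, and the entry itself is hypothetical.
import json

collection = {
    "name": "API Monitoring",
    "description": "APIs grouped by my API monitoring research",
    "url": "http://example.com/apis.json",  # hypothetical home for the index
    "apis": [
        {
            "name": "Example Monitoring API",  # hypothetical entry
            "baseURL": "https://api.example.com",
            "tags": ["monitoring", "tests"],
            "properties": [
                {"type": "Swagger", "url": "https://api.example.com/swagger.json"}
            ],
        }
    ],
}

with open("apis.json", "w") as handle:
    json.dump(collection, handle, indent=2)
```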

Integration Actions
Flipping from providing APIs to consuming APIs, I envision a world where I can take actions around my API designs that focus on the availability, and integration, of valuable API driven resources. As an API provider, I need as much assistance as I can get to look at my APIs from an external perspective, and being able to take action in this area will grow increasingly important.

  • Testing - Using my machine readable API definitions, I should be able to publish testing definitions, that allow the execution of common API testing patterns. I’d love to see providers like SmartBear, Runscope, APITools, and API Metrics offer services around the import of API design generated definitions. 
  • Monitoring - Just like API testing, I want to be able to generate definition that allow for the monitoring of API endpoints. My API monitoring tooling should allow for me to generate standard monitoring definitions, and import and run them in my API monitoring solution.

I’d say that API integration is the fastest growing area of the API space, second only to API design itself. Understanding how an API operates from an integrator's perspective is valuable, not just to the integrator, but also the provider. I need to be thinking about integration issues early on in the API design lifecycle to minimize costly changes downstream.

Custom Actions
I’ve laid out some of the essential actions I’d like to be able to take around my API definitions, throughout the API lifecycle. I expect the most amount of extensibility from my API design editor, in the future, and should be able to extend in any way that I need.

  • Links - I need a dead simple way to take an API design, and publish to a single URL, from within my editor. This approach provides the minimum amount of extensibility I will need in the API design lifecycle. 
  • JavaScript - I will need to run JavaScript that I write against all of my API designs, generating specific results that I will need throughout the API design process. My editor should allow me to write, store and execute JavaScript against all my API designs. 
  • Marketplace - There should be a marketplace to find other custom actions I can take against my API designs. I want a way to publish my API actions to the marketplace, as well as browse other API actions, and add them to my own library.

We’ve reached a point where using API definitions like API Blueprint, RAML, and Swagger are common place, and being able to innovate around what actions we take throughout the API design lifecycle will be critical to the space moving forward, and how companies take action around their own APIs.

API Design Gallery
In my editor, I need a central location to store and manage all of my API designs. I’m calling this a gallery, because I do not want it only to be a closed off repository of designs, I want to encourage collaboration, and even public sharing of common API design patterns. I see several key API editor features I will need in my API design gallery.

  • Search - I need to be able to search for API designs, based upon their content, as well as other meta data I assign to my designs. I should be able to easily expose my search criteria, and assist potential API consumers in finding my API designs as well. 
  • Import - I should be able to import any API design from a local file, or provide a public URL and generate a local copy of any API design. Many of my API designs will be generated from an import of an existing definition. 
  • Versioning - I want the API editor of the future to track all versioning of my API designs. Much like managing the code around my API, I need the interface definitions to be versioned, and the standard feature set for managing this process. 
  • Groups - I will be working on many API designs, with various stakeholders in the success of any API design. I need a set of features in my API design editor to help me manage multiple groups, and their access to my API designs. 
  • Domains - Much like the Internet itself, I need to organize my APIs by domain. I have numerous domains which I manage different groups of API resources. Generally I publish all of my API portals to Github under a specific domain, or sub-domain—I would like this level of control in my API design editor. 
  • Github - Github plays a central role in my API design lifecycle. I need my API design editor to help me manage everything, via public and private Github repository. Using the Github API, my API design editor should be able to store all relevant data on Github—seamlessly. 
  • Diff - What are the differences between my API designs? I would like to understand the difference between each of my API resource types, and versions of each API design (see the sketch at the end of this section). It might be nice if I could see the difference between my API designs, and other public APIs I might consider as competitors. 
  • Public - The majority of my API designs will be public, but this won’t be the case with every designer. API designers should have the control over whether or not their API designs are public or private.

My API design gallery will be the central place I work from. Once I reach a critical mass of designs, I will have many of the patterns I need to design, deploy and manage my APIs. It will be important for me to have access to import the best patterns from public repositories like API Commons. To evolve as an API designer, I need to easily create, store, and evolve my own API designs, while also being influenced by the best patterns available in the public domain.
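For the diff feature mentioned above, even the standard library gets surprisingly far once definitions are normalized. A minimal sketch, with hypothetical file names for two iterations of the same design:

```python
# A minimal diff sketch: normalize two versions of a design to sorted,
# pretty-printed JSON, then let difflib do the comparison. The file
# names are hypothetical.
import difflib
import json

def normalized(filename):
    with open(filename) as handle:
        return json.dumps(json.load(handle), indent=2, sort_keys=True).splitlines()

for line in difflib.unified_diff(
        normalized("design-v1.json"),
        normalized("design-v2.json"),
        fromfile="design-v1.json",
        tofile="design-v2.json",
        lineterm=""):
    print(line)
```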

Embeddable Gallery
Simple visualizations can be an effective tool in helping demonstrate the value an API delivers. I want to be able to manage open, API driven visualizations, using platforms like D3.js. I need an arsenal of embeddable, API driven visualizations to help tell the story of the API resources I provide, and a gallery to manage them.

  • Search - I want to be able to search the meta data around the API embeddable tools I develop. I will have a wealth of graphs, charts, and more functional, JavaScript widgets I generate. 
  • Browse - Give me a way to group, and organize my embeddable tools. I want to be able to organize, group and share my embeddable tools, not just for my needs, but potentially to the public as well.

A picture is worth a thousand words, and being able to easily generate interactive visualizations, driven by API resources, that can be embedded anywhere is critical to my storytelling process. I will use embeddable tools to tell the story of my API, but my API consumers will also use these visualizations as part of their efforts, and hopefully develop their own as well.

API Dictionary
I need a common dictionary to work from when designing my APIs. I need to use consistent interface names, field names, parameters, headers, media types, and other definitions that will assist me in providing the best API experience possible.

  • Search - My dictionary should be available to me in any area of my API design editor, and in true IDE style, while I’m designing. Search of my dictionary, will be essential to my API design work, but also to the groups that I work with. 
  • Schema.org - There are plenty of existing patterns to follow when defining my APIs, and my editor should always assist me in adopting, and reusing any existing pattern I determine as relevant to my API design lifecycle, like Schema.org.
  • Dublin Core - How do I define the metadata surrounding my API designs? My editor should assist me to use common metadata patterns available like Dublin Core.
  • Media Types - The results of my API should conform to existing document representations, when possible. Being able to explore the existing media types available while designing my API would help me emulate existing patterns, rather than reinventing the wheel each time I design an API. 
  • Custom - My dictionary should be able to be driven by existing definitions, or allow me to import and define my own vocabulary based upon my operations. I want to extend my dictionary to meet the unique demands of my API design lifecycle.

I want my API design process to be driven by a common dictionary that fits my unique needs, but borrows from the best patterns already available in the public space. We already emulate many of the common patterns we come across, we just don’t have any common dictionary to work from, enforcing healthy design via my editor.

An Editor For Just My Own API Design Process
This story has evolved over the last two weeks, as I spent time in San Francisco discussing API design, then a great deal of time driving, and thinking about the API design lifecycle. This is all part of my research into the expanding world of API design, which will result in a white paper soon, and my intent is to just shed some light on what might be some of the future building blocks of the API design space. My thoughts are very much based in my own selfish API design needs, but also upon what I’m seeing in the growing API design space.

An Editor For A Collective API Design Process
With this story, I intend to help keep the API design process collaborative, and when relevant, a public affair. I want to ensure we work from existing patterns that are defined in the space, and as we iterate and evolve APIs, we collectively share our best patterns. You should not just be proud of your API designs, and willing to share them publicly, you should demonstrate the due diligence that went into your design, attribute the patterns that contributed to your designs, and share back your own interpretation—encouraging re-use and sharing further downstream.

What Features Would Be Part of Your Perfect API Design Editor
This is my vision around the future of API design, and what I’d like to have in my editor—what is yours? What do you need as part of your API design process? Are API definitions part of the “truth” in your API lifecycle? I’d love to hear what tools and services you think should be made available, to assist us in designing our APIs.

Disclosure: I'm still editing and linking up this post. Stay tuned for updates.

See The Full Blog Post


APITools Raises The Bar With Open, On-Premise API Testing and Monitoring Tools

APITools, the cloud-based API integration service, is raising the bar for the space by introducing an open source, on-premise version of their API monitoring service. APITools only launched this year, and because of consumer demand, they moved up the timeframe for open sourcing the platform, which was already on the roadmap.

I’d say that after API design, API integration services and tooling, for testing, monitoring, and transforming API calls, is one of the fastest growing segments of the API space. We are seeing solid solutions from SmartBear, Runscope, TheRightAPI, Nomos Software, and from API pioneer John Musser with API Science, but APITools is definitely raising the stakes by open sourcing their offering.

The world of API integration service and tooling is rapidly expanding, and only time will tell whether developers prefer running in the cloud, or on-premise, and what features they are looking for, something I've been documenting as I study what each of the companies offer.

I suspect, that along with other API design, deployment, and management tools we'll need a mix of freemium, open tiers, on-premise, and enterprise API integration services and tooling, to meet the demands of this fast growing segment that overlaps with both providing and consuming APIs.

Disclosure: APITools is a 3Scale service, and 3Scale is an API Evangelist partner.

See The Full Blog Post


What Are The Incentives For Creating Machine Readable API Definitions?

After #Gluecon in Colorado the other week, I have API design on the brain. A portion of the #APIStrat un-workshops were dedicated to API design related discussion, and API Design is also the most trafficked portion of API Evangelist this year, according to my Google Analytics.

At #Gluecon, 3Scale and API Evangelist announced our new API discovery project APIs.json, and the associated tooling, the API search engine APIs.io. For APIs.json, APIs.io, and API Commons to work, we are counting on API providers and API consumers creating machine readable API definitions.

With this in mind, I wanted to do some exploration--what would be possible incentives for creating machine readable API definitions?

  • JSON API Definition
  • Interactive Documentation
  • Server Side Code Deployment
  • Client Side Code Generation
  • Design, Mocking, and Collaboration
  • Markdown Based API Definition
  • YAML Based API Definition
  • Reusability, Interoperability and Copyright
  • Testing & Monitoring
  • Discovery
  • Search

The importance of having an API definition of available resources is increasing. It was hard to realize the value of defining APIs with the heavy, top down defined WSDL, and even its web counterpart WADL, but with these new approaches, other incentives are emerging—incentives that live throughout the API lifecycle.

The first tangible shift in this area was when Swagger released the Swagger UI, providing interactive documentation that was generated from a Swagger API definition. Apiary quickly moved the incentives to an earlier stage in the API design lifecycle with design, mocking and collaboration opportunities.

As the API design world continues to explode, I’m seeing a number of other incentives emerge for API providers to generate machine readable API definitions, and looking to find any incentives that I’m missing, as well as identify any opportunities in how I can encourage API designers to generate machine readable API definitions in whatever format they desire.
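To make "machine readable" concrete, here is a minimal, hypothetical Swagger 2.0 style definition, written out as JSON--the kind of artifact every one of these incentives hangs off of:

```python
# A minimal, hypothetical Swagger 2.0 style definition, written out as
# JSON -- the machine readable artifact these incentives all hang off of.
import json

definition = {
    "swagger": "2.0",
    "info": {"title": "Example Monitoring API", "version": "1.0.0"},
    "host": "api.example.com",
    "basePath": "/v1",
    "paths": {
        "/checks": {
            "get": {
                "summary": "List monitoring checks",
                "responses": {"200": {"description": "A list of checks"}},
            }
        }
    },
}

with open("swagger.json", "w") as handle:
    json.dump(definition, handle, indent=2)
```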

See The Full Blog Post


Politics of APIs

In preparation for a fireside chat with Tyler Singletary at API Strategy & Practice next week I’m reviewing many of what I consider the political API building blocks, and reading some of my past stories to refresh my thoughts on the most pressing topics in the space.

The technical building blocks of APIs are pretty clear at this point, and the business of APIs has been hammered out over the last five or so years by API infrastructure providers like 3Scale, but the politics of APIs are an aspect of the API economy we are going to see become more and more of a hot topic as the space continues its growth.

There are a handful of building blocks that are clearly driving how APIs are used (or not used):

Legal

These building blocks are acting as the control valve for API ecosystems, setting the tone for how much access, control and influence both developers and end-users will have when it comes to API operations.

There are other elements that will affect an API, and its community, like the amount of transparency a provider embraces, and externally from laws and regulation from the government, but these seven building blocks will have the biggest impact on what is possible when integrating with an API. 

The interesting thing about the politics of APIs is the overlap with the business and tech of APIs. There are political building blocks that are more technical:

  • oAuth
  • Rate Limits
  • Monitoring
  • Stability
  • Error Codes

Then there are building blocks that have more to do with business than tech, but go a long way in influencing the politics of an API:

  • Self-Service
  • Communication
  • Support
  • Roadmap
  • Changelog
  • Budget
  • Revenue-Share

These are all related to business operations, but will all play an important role in the internal and external politics of API operations. While every API building block contributes to overall politics in some way, there is a small group that I think truly overlaps the tech, business and politics of APIs:

  • Github
  • Consistency
  • Security

This is why Github plays such an important role in API operations. It isn't just about code anymore, it is about operations, reaching and supporting customers, and defining the licensing around API operations--all critical elements, vital to achieving the political balance that is necessary for a thriving API community.

As with the business of APIs, it will take us years to figure out all the finer points of the politics surrounding APIs, and with each new business sector that APIs evolve into, we will find new struggles. Some of the issues we've seen emerge between API providers and consumers in ecosystems like Twitter, security challenges like Snapchat, and legal battles like Oracle vs. Google, are just the tip of the iceberg.

We need to have more tools, services, and open resources to help API providers and consumers navigate terms of service, as well as more open discussions around content, data and even API interface licensing. This is why we do API Strategy & Practice--to have open conversations between API leaders, and discuss the biggest challenges we face in the space.

I'm stoked to have Tyler come to Amsterdam and continue conversations with me on the main stage, around the politics of APIs. Like me, Tyler is extremely passionate about the legal and political challenges we face, and even though we don't always see eye to eye, I thoroughly enjoy our exchanges because of his passion for, and understanding of, the space. Tyler has some pretty important ideas, like robots.json, that we need to explore, and some serious experience in the space from running Klout, that we can all learn from.

As always let me know if there are any issues you'd like to see me cover from the politics of APIs, it is why I started API Voice, and I am happy to even bring it up on stage as we discuss the politics of APIs in Amsterdam next week at #APIStrat.

See The Full Blog Post


Common Building Blocks of Cloud APIs

I’ve been profiling the API management space for almost four years now, and one of the things I keep track of is the common building blocks of API management. Recently I’ve pushed into other areas like API design, integration, and payment APIs, trying to understand the common elements providers are using to meet developer needs.

Usually I have to look through the sites of leading companies in the space, like the 38 payment API providers I’m tracking on, to find all the building blocks that make up the space, but when it came to cloud computing it was different. While there are several providers in the space, there is but a single undisputed leader—Amazon Web Services. I was browsing through AWS yesterday and I noticed their new products & solutions menu, which I think has a pretty telling breakdown of the building blocks of cloud APIs.

Compute & Networking

  • Compute - Virtual Servers in the Cloud (Amazon EC2)
  • Auto Scaling - Automatic vertical scaling service (AutoScaling)
  • Load Balancing - Automatic load balancing service (Elastic Load Balancing)
  • Virtual Desktops - Virtual Desktops in the Cloud (Amazon WorkSpaces)
  • On-Premise - Isolated Cloud Resources (Amazon VPC)
  • DNS - Scalable Domain Name System (Amazon Route 53)
  • Network - Dedicated Network Connection to AWS (AWS Direct Connect)

Storage & CDN

  • Storage - Scalable Storage in the Cloud (Amazon S3)
  • Bulk Storage - Low-Cost Archive Storage in the Cloud (Amazon Glacier)
  • Storage Volumes - EC2 Block Storage Volumes (Amazon EBS)
  • Data Portability - Large Volume Data Transfer (AWS Import/Export)
  • On-Premise Storage - Integrates on-premises IT environments with Cloud storage (AWS Storage Gateway)
  • Content Delivery Network (CDN) - Global Content Delivery Network (Amazon CloudFront)

Database

  • Relational Database - Managed Relational Database Service for MySQL, Oracle, SQL Server, and PostgreSQL (Amazon RDS)
  • NoSQL Database - Fast, Predictable, Highly-scalable NoSQL data store (Amazon DynamoDB)
  • Data Caching - In-Memory Caching Service (Amazon ElastiCache)
  • Data Warehouse - Fast, Powerful, Fully Managed, Petabyte-scale Data Warehouse Service (Amazon Redshift)

Analytics

  • Hadoop - Hosted Hadoop Framework (Amazon EMR)
  • Real-Time - Real-Time Data Stream Processing (Amazon Kinesis)

Application Services

  • Application Streaming - Low-Latency Application Streaming (Amazon AppStream)
  • Search - Managed Search Service (Amazon CloudSearch)
  • Workflow - Workflow service for coordinating application components (Amazon SWF)
  • Messaging - Message Queue Service (Amazon SQS)
  • Email - Email Sending Service (Amazon SES)
  • Push Notifications - Push Notification Service (Amazon SNS)
  • Payments - API based payment service (Amazon FPS)
  • Media Transcoding - Easy-to-use scalable media transcoding (Amazon Elastic Transcoder)

Deployment & Management

  • Console - Web-Based User Interface (AWS Management Console)
  • Identity and Access - Configurable AWS Access Controls (AWS Identity and Access Management (IAM))
  • Change Tracking - User Activity and Change Tracking (AWS CloudTrail)
  • Monitoring - Resource and Application Monitoring (Amazon CloudWatch)
  • Containers - AWS Application Container (AWS Elastic Beanstalk)
  • Templates - Templates for AWS Resource Creation (AWS CloudFormation)
  • DevOps - DevOps Application Management Services (AWS OpsWorks)
  • Security - Hardware-based Key Storage for Regulatory Compliance (AWS CloudHSM)

The reason I look at these spaces in this way is to better understand the common services API providers are offering that are really making developers' lives easier. Assembling a list of the common building blocks allows me to look at the raw ingredients that make things work, and not get hung up on just companies and their products.

There is a lot to be learned from API pioneers like Amazon, and I think this list of building blocks provides a lot of insight into which API driven resources are truly making the Internet operate in 2014.
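
All of these building blocks are ultimately just API resources you can program against. As a quick illustration, here is a minimal sketch that touches two of the blocks above, Compute (Amazon EC2) and Storage (Amazon S3), using the boto3 Python SDK--it assumes you already have AWS credentials configured in your environment.

```python
# A minimal sketch touching two of the building blocks above through
# their APIs, using the boto3 SDK. Assumes AWS credentials are already
# configured in your environment; the region is an arbitrary example.
import boto3

# Compute - Virtual Servers in the Cloud (Amazon EC2)
ec2 = boto3.client("ec2", region_name="us-east-1")
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])

# Storage - Scalable Storage in the Cloud (Amazon S3)
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```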

See The Full Blog Post


API Evangelism Strategy: Landscape Analysis

I’m working with the Cashtie API to pull together an API evangelism strategy for the payments API. As I pull it together, it seems like a great opportunity to share the process with you. Who knows, maybe you can use some of the elements in your own API strategy.

Next up is what I call "landscape analysis", which is about monitoring the blogs, and other social network activity of the sector(s) you are evangelizing your API within:

  • Competition Monitoring - Regular evaluation of the activity of competitors, visiting their sites, and getting to know their customers.
  • Industry Monitoring - Evaluation of relevant topics and trends across the overall industry, or industries, that you are evangelizing within.
  • Keywords - Establish a list of keywords to use when searching for topics across search engines, Q&A sites, forums, social bookmarking and social networks--this should dovetail nicely into any SEO work you're doing. (You are doing SEO work, right?)
  • City and Country Targeting - Target specific markets with your evangelism. Get to know where your existing developers and target developers reside, and get local--be on the ground.
  • Social Media Monitoring - Have a solid strategy and platform for pulling relevant data from Twitter accounts, #hashtag conversations, Github profiles, and other relevant platforms to use in your landscape analysis (a minimal sketch of this follows the list).
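
To make the social media monitoring piece a little more concrete, here is a minimal sketch of keyword and #hashtag monitoring using the tweepy library against the Twitter API v2 recent search endpoint--the bearer token and watch list are hypothetical placeholders you would swap for your own.

```python
# A minimal sketch of keyword / #hashtag monitoring on Twitter, using
# the tweepy library and the Twitter API v2 recent search endpoint.
# The bearer token and keyword list below are placeholders.
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")

KEYWORDS = ["#APIStrat", "payments API"]  # hypothetical watch list

for keyword in KEYWORDS:
    response = client.search_recent_tweets(query=keyword, max_results=10)
    for tweet in response.data or []:
        print(keyword, "->", tweet.text)
```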

The "landscape", is the space you are looking at each and every day out your virtual window, as well as what your existing and target developers will see each day. It is your job as an evangelist or advocate to know this space well, understand all the elements of the landscape, key players, relationships, and be a contributing member to this landscape.

Landscape analysis should be something you do every day, weaving it into your regular social media and other online activities. Remember, you aren’t targeting this landscape of companies and users, you are looking to participate in the conversation, while also bringing your API resources to the table.

See The Full Blog Post


What Are The Common Building Blocks of API Integration?

I started API Evangelist in 2010 to help business leaders better understand not just the technical, but specifically the business of APIs, helping them be successful in their own API efforts. As part of these efforts I track on what I consider the building blocks of API management. In 2014 I'm also researching what the building blocks are in other areas of the API world, including API design, deployment, discovery and integration.

After taking a quick glance at the fast growing world of API integration tools and services, I've found the following building blocks emerging:

Pain Point Monitoring
Documentation Monitoring - Keeping track of changes to an API's documentation, alerting you to potential changes in valuable developer API documentation for single or many APIs
Pricing Monitoring - Notifications when an API platform's pricing changes, which might trigger switching services or at least staying in tune with the landscape of what is being offered
Terms of Use Monitoring - Updates when a company changes the terms of service for a particular platform and providing historical versions for comparison
Authentication
oAuth Integration - Provides oAuth integration for developers, to one or many API providers, and potentially offering oAuth listing for API providers
Provider / Key Management - Management of multiple API platform providers, providing a secure interface for managing keys and tokens for common API services
Integration Touch Points
API Debugging - Identifying of API errors and assistance in debugging API integration touch points
API Explorer - Allowing the interactive exploring of API providers registered with the platform, making calls and interacting and capturing API responses
API Feature Testing - The configuring and testing of specific features and configurations, providing precise testing tools for any potential use
API Load Testing - Testing, with the added benefit of making sure an API will actually perform under a heavy load
API Monitoring - Actively monitoring registered API endpoints, allowing real-time oversight of the important API integration endpoints that applications depend on (see the sketch after this list)
API Request Actions
API Request Automation - Introducing other types of automation for individual, captured API requests like looping, conditional responses, etc.
API Request Capture - Providing the ability to capture an individual API request
API Request Commenting - Adding notes and comments to individual API requests, allowing the cataloging of history, behavior and communication around API request actions
API Request Editor - Allowing the editing of individual API requests
API Request Notifications - Providing a messaging and notification framework around individual API requests events
API Request Playback - Recording and playing back captured API requests so that you can inspect the results
API Request Retry - Enabling the ability to retry a captured API request and play back in current time frame
API Request Scheduling - Allowing the scheduling of any captured API request, by the minute, hour, day, etc.
API Request Sharing - Opening up the ability to share API requests and their results with other users via email, or other means
Other Areas
Analytics - Visual analytics providing insight into individual and bulk API requests and application usage
Code Libraries - Development and support of code libraries that work with single or multiple API providers
Command Line - Providing a command line interface (CLI) for developers to interact with APIs
Dashboard - Web based dashboard with analytics, reports and tools that give developers quick access to the most valuable integration information
Gateway - Providing a software gateway for testing, monitoring and production API integration scenarios
Geolocation - Combining of location when testing and proxying APIs from potentially multiple locations
Import and Export - Allowing for importing and exporting of configurations of captured and saved API requests, allowing for data portability in testing, monitoring and integration
Publish - Providing tools for publishing monitoring and alert results to a public site via widget or FTP
LocalHost - Opening up of a local web server to a public address, allowing for webhooks and other interactions
Rating - Establishment of a ranking system for APIs, based upon availability, speed, etc.
Real-Time - Adding real-time elements to analytics, messaging and other aspects of API integration
Reports - Common reports on how APIs are being used across multiple applications and user profiles
Teams - Providing a collaborative, team environment where multiple users can test, monitor and debug APIs and application dependencies
Workflow - Allowing for the daisy chaining and connecting of individual API request actions into a series of workflows and jobs
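
To ground a couple of these building blocks, here is a minimal sketch of what the API monitoring and API request retry elements can look like in practice, using Python's requests library--the endpoint URL, retry count, and schedule interval are all hypothetical.

```python
# A minimal sketch of the API Monitoring and API Request Retry building
# blocks: request an endpoint on a schedule, time the response, and
# retry on failure. The endpoint and thresholds are hypothetical.
import time
import requests

ENDPOINT = "https://api.example.com/status"  # hypothetical endpoint
MAX_RETRIES = 3
INTERVAL_SECONDS = 60

def check_endpoint(url):
    """Return (ok, elapsed_seconds) for a single monitoring check."""
    for attempt in range(1, MAX_RETRIES + 1):
        start = time.time()
        try:
            response = requests.get(url, timeout=10)
            if response.status_code == 200:
                return True, time.time() - start
            print(f"attempt {attempt}: HTTP {response.status_code}")
        except requests.RequestException as error:
            print(f"attempt {attempt}: {error}")
    return False, None

while True:
    ok, elapsed = check_endpoint(ENDPOINT)
    if ok:
        print(f"up, responded in {elapsed:.2f}s")
    else:
        print("down after retries--this is where a notification would fire")
    time.sleep(INTERVAL_SECONDS)
```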

What else are you seeing? Which tools and services do you depend on when you are integrating one or many APIs into your applications? What tools and services would you like to see?

I'm looking at the world of API design right now, but once I'm done with that research, I will be diving into API integration again, trying to better understand the key players, tools, services and the building blocks they use to get things done.

See The Full Blog Post


What If All Gov Programs Like Healthcare.gov Had A Private Sector Monitoring Group?

The Healthcare.gov launch has been a disaster. I cannot turn on CNN or NPR during the day without hearing a story about what a failure the technology and implementation has been for the Affordable Care Act (ACA).

I have written and talked about how transparency was the biggest problem for the Healthcare.gov rollout. Sure, there were numerous ills, from procurement to politics, but ultimately, if there had been more transparency from start to finish, things could have been different.

Throughout this debacle I have been making my exit from the federal government back to the private sector, and I can't help but think how things could have been different with Healthcare.gov if there had been some sort of external watchdog group tracking the process from start to finish. I mean, c'mon, this project is way too big and way too important to just leave to government and its contractors.

What if there had been a group of people assigned to the project at its inception? External, non-partisan, independent individuals who had the skills to track the procurement process, design, development and launch of Healthcare.gov? What if any federal, state or city government project had the potential to have a knowledgeable group of outside individuals tracking projects and making recommendations in real-time on how to improve the process? Things could be different.

Of course there are lots of questions to ask: How do we fund this? Who watches the watchers? On and on. Even with all the questions, we should be looking for new and innovative ways to bring the public and private sector together to solve the biggest problems.

It is just a thought. As I exit the Presidential Innovation Fellowship (PIF) program, and head back to the private sector, I can't help but think about ways that we can improve the oversight and involvement of the private sector in how government operates.

See The Full Blog Post


API Testing and Monitoring Finding A Home In Your Company's Existing QA Process

I've been doing API Evangelist for three years now, in a world where selling APIs to existing companies outside of Silicon Valley, and often to venture capital firms, is a serious challenge. While APIs have been around for a while in many different forms, this new, more open and collaborative approach to APIs seems very foreign, new and scary to some companies and investors--resulting in them often being very resistant to it.

As part of my storytelling process, I'm always looking for ways to dovetail API tools and services into existing business needs and operations, making them much more palatable to companies across many business sectors. One part of the API space I'm just getting a handle on is the area of API integration, which includes testing, monitoring, debugging, scheduling, authentication and other key challenges developers face when building applications that depend on APIs.

I was having a great conversation the other day with Roger Guess of TheRightAPI, which I try to do regularly. We are always brainstorming ideas on where the space is going and the best ways to tell stories around API integration that will resonate with existing companies. Roger was talking about the success they are finding dovetailing their testing, monitoring and other web API integration services with a company's existing QA process--something that I can see will resonate with many companies.

Hopefully your company already has a fully developed QA cycle for your development team(s), including, but not limited to, automated, unit and regression testing--something where API tests, monitoring, scheduling and other emerging API integration building blocks will fit in nicely. This new breed of API integration tools doesn't have to be an entirely new approach to development; chances are you are already using APIs in your development, and API testing and monitoring can just be added to your existing QA toolbox.
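
For example, an API test can sit right alongside your existing unit and regression tests. Here is a minimal sketch using pytest and the requests library--the endpoint and expected fields are hypothetical, so you would point it at an API your application actually depends on.

```python
# A minimal sketch of folding API tests into an existing QA suite,
# using pytest and requests. The API base URL and expected fields are
# hypothetical placeholders.
import requests

API_BASE = "https://api.example.com"  # hypothetical API under test

def test_status_endpoint_is_up():
    response = requests.get(f"{API_BASE}/status", timeout=10)
    assert response.status_code == 200

def test_user_payload_has_expected_fields():
    response = requests.get(f"{API_BASE}/users/1", timeout=10)
    payload = response.json()
    # Regression-style assertion: the fields our application depends
    # on are still present in the response.
    for field in ("id", "name", "email"):
        assert field in payload
```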

I will spend more time looking for stories that help relate some of these new approaches to your existing QA processes, hopefully finding new ways you can put tools and services like TheRightAPI to use, helping you better manage the API integration aspect of your web and mobile application development.

See The Full Blog Post


API Providers Guide - API Design

Prepared By Kin Lane

June 2014

Table of Contents

  • Overview of The API Design Space
  • A New Generation Of API Design
  • Developing The Language We Need To Communicate
  • Leading API Definition Formats
  • Building Blocks of API Design
  • Companies Who Provide API Design Services
  • API Design Tools
  • API Design Editors
  • API Definitions Providing A Central Truth For The API Lifecycle
  • Contributing To The Deployment Lifecycle
  • Contributing To The Management Lifecycle
  • Contributing To The Testing & Monitoring Lifecycle
  • Contributing To The Discovery Lifecycle
  • An Evolutionary Period For API Design

See The Full Blog Post


Landscape Analysis and Monitoring as Part of Your API Evangelism

One very important aspect of API Evangelism is landscape analysis. When you launch an API, you need to have an intimate understanding of the landscape where you will be evangelizing your API and engaging with potential consumers.

I always start by establishing potential 10K foot target areas that I will be focusing on in my research. While these may vary from industry to industry, my starting list is:

  • Individuals - Who are the key individual influencers in your target landscape
  • Competitors - Know your competitors; you can learn a lot from what they do right or wrong
  • Platforms - There are an endless number of platforms that may benefit your company, such as Drupal, Wordpress and Salesforce
  • Media - Who are the relevant media and blogs you should be following and engaging with
  • Partners - Existing partners who you should be in tune with on the landscape

After tackling these areas and identifying who the players are, you will identify other areas that are unique to your API evangelism efforts, so don't stress over whether you have the right areas in the first week. While researching these areas of my landscape I try to identify specific channels for getting plugged in with landscape targets:

  • Website URL - What is the target's root URL for their website
  • Blog URL - What is the blog for a target
  • Blog RSS - Secondary to the blog, what is the RSS feed, so I can programmatically monitor it (see the sketch after this list)
  • Twitter - What is a company's or individual's Twitter handle
  • LinkedIn - Identify and sometimes connect with a target’s LinkedIn account
  • Facebook - Identify and sometimes connect with a target’s Facebook account
  • Github - Identify, follow and engage with a company's Github profile and appropriate repositories
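
Since the blog RSS channel is the easiest one to monitor programmatically, here is a minimal sketch using the feedparser library--the feed URLs are hypothetical placeholders for your own target list.

```python
# A minimal sketch of programmatically monitoring the Blog RSS channel
# for each target on your landscape, using the feedparser library.
# The feed URLs are hypothetical placeholders.
import feedparser

TARGET_FEEDS = [
    "https://competitor.example.com/blog/rss",
    "https://platform.example.com/feed.xml",
]

for feed_url in TARGET_FEEDS:
    feed = feedparser.parse(feed_url)
    for entry in feed.entries[:5]:
        # Surface the most recent posts from each target.
        print(feed.feed.get("title", feed_url), "-", entry.title, "-", entry.link)
```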

After getting the primary channels for each target on my landscape identified, I will sometimes go deeper and actually identify key individuals associated with target companies, either the founders and/or key employees. I look for Twitter, LinkedIn, Facebook and Github accounts for individuals as well, and sometimes even personal blogs when available.

While landscape analysis is an initial project in the first couple weeks of designing your API evangelism strategy, it will be an ongoing initiative--monitoring all target channels in real-time, but also identifying new target groups, companies and individuals, or even specific channels not listed above, like Tumblr or Google+, each week.

Each industry will be different when identifying the landscape for your API evangelism efforts, but this is a good first round blueprint and has worked well for me on multiple projects. Let me know if there are any landscape analysis and monitoring tricks you use.

See The Full Blog Post