Prototype a New API Using a Postman Collection
Postman collections make it easy to define the structure of an API, using requests to describe the paths, parameters, bodies, and headers, while using examples to demonstrate what will be returned with each response. When you combine collections with the ability to generate mock servers, plus the built-in documentation that comes with each collection, you end up with a robust way to prototype APIs. Teams can collaboratively prototype what an API should do, demonstrating functionality and documenting resources and capabilities along the way, rapidly defining what an API will do in production without writing code or hand-crafting an OpenAPI definition. Some teams will find this approach to delivering APIs much more desirable than hand-crafting an OpenAPI and generating mock servers and documentation from it, providing yet another way to approach a modern API lifecycle.
The most important first step of any API lifecycle is to make sure the operations around an API are properly defined, laying the foundation for effectively designing and bringing an API to life, while also establishing a known place, or places, to go for all the information needed about each individual API or group of APIs. A little planning and organization at this early step of the API journey can go a long way towards ensuring the overall health and velocity of an API, and of the applications and integrations that will depend on each internal, partner, or public API being delivered.
Any API you are looking to develop should have a known location where any stakeholder can get up to speed on what is happening with its development and operation. Postman team workspaces provide a single location to publish APIs, documentation, mock servers, tests, and other artifacts, a known location that is discoverable across teams. Establishing a place for teams to engage around the design, development, and operation of an API early on makes the API lifecycle observable by default. Team workspaces are accessible only to team members who have an account under your designated organizational Postman team, surfacing each workspace via search and workspace browsing so your team can discover, collaborate, and work across the APIs being developed within it.
Once a workspace has been established for an API, or group of APIs, it is time to define and invite all relevant team members. Making sure your team is well defined and invited to your workspace and repositories forces you to think through who is involved and what roles they have in moving an API forward, and is something you can revisit on a regular basis. It also allows others to easily see who is involved with an API, and who they can reach out to with questions or feedback throughout the life of the API.
Having a formal process and approach to designing an API helps establish the consistency and precision of APIs in production, ensuring that APIs are developed using common patterns across an industry and within an organization, establishing known practices for shaping the surface area and behaviors of the APIs that applications depend upon. But instead of using OpenAPI as the catalyst for the API design process, a Postman collection is used to prototype the API, and the OpenAPI can be generated from the collection when you are ready to move to production, or at least have reached a more stable portion of the API design phase.
Postman collections provide an excellent format for prototyping an API, driving the API design process from a collection rather than an OpenAPI definition. Collections allow you to define all of the details for each API request and response, providing actual examples of what will be delivered. Since collections can be easily documented and mocked, they allow for very rapid iteration on the design of APIs in a very collaborative way, without having to be an expert on OpenAPI. Once a prototype has stabilized and changes are becoming less frequent, an OpenAPI can be generated and established as the source of truth, providing an alternative approach to API design-first that will be more familiar to a wider audience of developers.
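To make the collection format concrete, here is a minimal sketch in the spirit of Postman's v2.1 collection schema, built as a Python dict; the API name, endpoint, and payload are hypothetical, chosen only to illustrate how requests and saved examples sit together in one artifact.

```python
import json

# A minimal Postman-style collection (v2.1 format) sketched as a Python dict.
# The endpoint, payload, and names below are hypothetical.
collection = {
    "info": {
        "name": "Products API Prototype",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json",
    },
    "item": [
        {
            "name": "List Products",
            "request": {
                "method": "GET",
                "url": "{{baseUrl}}/products",
                "header": [{"key": "Accept", "value": "application/json"}],
            },
            # Saved examples double as mock server responses and documentation.
            "response": [
                {
                    "name": "200 OK",
                    "code": 200,
                    "body": json.dumps([{"id": 1, "name": "Widget"}]),
                }
            ],
        }
    ],
}

# Each request carries its own examples, so the prototype is self-describing.
first = collection["item"][0]
print(first["request"]["method"], first["request"]["url"])
```

Because the requests and their example responses live side by side, the same artifact can drive documentation, mocking, and eventually OpenAPI generation.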
The OpenAPI acts as the contract for any HTTP API, providing a machine-readable way to describe the details of each request and response that can be used across the entire API lifecycle. OpenAPI can be introduced at different points in the API lifecycle, but once introduced it should be managed as the source of truth for each API, kept up to date in a Postman workspace and synced to the relevant repositories. The OpenAPI for each API can be used to generate collections for documentation, mocking, testing, and other areas of the API lifecycle, providing executable representations of each API tailored for a specific purpose, while a single source of truth for the contract keeps docs, mocks, and tests in sync, and validates that each API contract is being met.
Mocking how an API works and behaves provides an effective way for teams to collaborate, communicate, and iterate as part of the design of an API, but it can also be used as part of testing, or simply to provide a sandbox environment for API consumers to learn in before they begin working with an API in production. Effectively mocking an API takes a little time to set up and configure properly, but once available it will help reduce friction across the entire API lifecycle, helping teams communicate more effectively around an API throughout its journey.
Mock servers provide a simulated reflection of what each API can do, mocking each request path as well as the response. It is difficult to mock every feature of an API, but mock servers will get you roughly 75% of the way towards what you will experience in production. An easy-to-use mock server will help your team get hands-on with what an API does, while also allowing them to quickly iterate upon the design and provide feedback without actually having to write code. Mock servers can be defined by generating Postman collections from OpenAPI definitions, and it can even make sense to have multiple mock servers tailored for specific business use cases or outcomes, providing usable representations of the value that an API will deliver in production. Mock servers can be made publicly available or restricted by API key, helping reduce friction for usage, while also potentially limiting who has access to mocked instances of an API.
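The matching behavior of a mock server can be sketched as a lookup from saved examples; real mock servers also consider query parameters, headers, and wildcard matching, and the endpoints and error shape below are hypothetical.

```python
# A toy illustration of how a mock server resolves requests: match the
# incoming method and path against saved examples, return the example body.
# Endpoints and the not-found payload are illustrative assumptions.
saved_examples = {
    ("GET", "/products"): (200, '[{"id": 1, "name": "Widget"}]'),
    ("GET", "/products/1"): (200, '{"id": 1, "name": "Widget"}'),
}

def mock_response(method: str, path: str):
    """Return (status, body) for a known example, or a 404 placeholder."""
    return saved_examples.get((method, path), (404, '{"error": "no matching example"}'))

status, body = mock_response("GET", "/products/1")
print(status, body)
```

The key idea is that the saved examples, not handwritten server code, are the source of mock behavior, which is why richer examples directly produce a richer mock.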
Examples are essential to API operations, helping demonstrate what an API does and how it can be used. Examples can accompany API documentation to make onboarding with an API much more intuitive, and can also be used to mock an API during the design or testing portions of the API lifecycle. Examples help API designers and developers think through what an API should do as part of the design or prototyping process, and can be used to provide a sandbox environment for API consumers to learn how APIs work without touching production data. Beyond these areas of the API lifecycle, the process of producing rich examples for each individual API request and response, and multiple examples for different use cases, helps better inform API designers and developers along the journey, resulting in a much richer API developer experience in the end.
Having complete, accurate, and easy-to-follow documentation is essential for all APIs, alleviating the number one pain point API consumers report when onboarding with any API, and expanding the number of API paths an application puts to work. Modern approaches to producing API documentation have moved beyond a single static version simply published to a portal, with potentially multiple forms of documentation for any single API, helping API producers onboard consumers more easily, reduce the cognitive load involved in understanding what an API does, and properly define specific business use cases of an API being put to work in an application or as part of an integration.
The most common form of API documentation is what can be described as reference documentation, providing a complete “menu” of all paths available for an API, along with all the parameters, headers, bodies, and examples to put to use. Ideally reference documentation is logically organized using folders and tags, and provides informative and useful summaries and descriptions for every individual path. Every API should have up-to-date and informative reference documentation that allows API consumers to understand the entire scope of an API, but also find exactly the path or paths they need as part of their application or integration. Reference documentation for all APIs is an essential part of a healthy API lifecycle, but teams should also consider what other forms of documentation would help new users or specific domains, as well as describe some of the most common workflows an API is used for when it comes to desired business outcomes.
Establishing a well-defined process to deploy an API helps teams bring new APIs to life, and assists them in more efficiently delivering each future iteration of an API in a consistent and repeatable way, making sure APIs are deployed using known development, staging, production, and other agreed-upon stages that actively apply elements like documentation and testing while natively contributing to observability. API deployment practices will likely have been well established as part of an organization’s traditional software development lifecycle, but they should remain open to being defined, standardized, and made more repeatable and observable as part of the API lifecycle. The deployment portion of the API lifecycle will be the most difficult for teams to properly define, articulate, and standardize across teams, but it remains one of the most critical areas to get right; otherwise it is guaranteed to be a repeated source of friction across API operations.
CI/CD pipelines are an essential part of automating the delivery of APIs, and there are multiple ways they can be used in conjunction with the Postman platform, from deployment to testing and governance. Postman collections are executable units that can be run individually or sequentially as part of a CI/CD pipeline, publishing documentation, running contract, integration, and performance tests, or leveraging the infrastructure APIs behind our APIs to automate any part of the API lifecycle. Jenkins, GitHub Actions, and other CI/CD solutions allow for the execution of manually created or dynamically generated Postman collections, providing an excellent mechanism for standardizing how API lifecycle automation occurs across many different APIs, while also utilizing a standardized approach to moving APIs forward as part of an agreed-upon API lifecycle.
Gateways are an essential part of a modern API lifecycle, providing a standardized way to provide access to APIs. Gateways are available as commercial or open source offerings, and are a default part of cloud infrastructure, having been commoditized around 2015 and since become critical to the enterprise API lifecycle. Gateways use OpenAPI and other artifacts to define routes, often using extensions to route requests to backend systems, while applying consistent policies across APIs to define identity and access control, transformations, logging, and other common needs involved in managing APIs. In addition to API deployment, API gateways are how API producers stay aware of how consumers are putting APIs to work, establishing awareness of how API resources are being applied. This offers another direction for API gateway usage within Postman: pulling identity, access, and usage data back into the platform and making it part of how teams stay informed across the API lifecycle.
APIs should always be managed using a common, well-defined set of policies that govern how APIs are accessed at all stages of the API lifecycle, ensuring that every API has appropriate authentication, rate limits, logging, and the other essential requirements of managing APIs at scale, helping strike a balance between making APIs accessible and addressing the privacy and security concerns that exist. As API gateways and management solutions have been commoditized, many of the essential elements like documentation and testing have expanded into their own areas of the API lifecycle, leaving a core set of elements that teams can apply to help manage how APIs are put to work in applications and as part of system-to-system integrations.
The API management layer governs how consumers onboard with an API, providing developers with the ability to sign up for an account, select a usage plan, define their applications, and obtain the API keys they will use when making calls to an API. The onboarding process for each API is heavily influenced by the API gateway and management solution selected by API producers, offering a well-defined set of approaches to reducing friction when it comes to getting access to an API, allowing consumers to go from learning about an API to making their first API call in as short a period as possible. API onboarding should be standardized across all of an organization’s APIs, and is not an area where teams should be reinventing the wheel; with API management solutions being a commodity, no organization should be hand-rolling its onboarding experience as part of modern API operations.
Every API should operate within one or more well-defined usage plans. No API consumer, whether internal or external to an organization, should have access to an API outside of the definition of a usage plan that is governed at the API management layer. Usage plans are a standard part of API gateways and management solutions, and provide mechanisms for defining rate limiting and other access policies in a standardized way. Usage plans govern how APIs can be used across internal and external consumers, ensuring that no API has unlimited access or usage, requiring all consumers to be identified by a key, and providing observability into API usage by individual consumers, or across an entire plan. Usage plans are central to the management of public and private APIs, and are how the value exchange between API producers and consumers is maximized, ensuring all API usage is in alignment with business objectives.
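The rate limiting a usage plan enforces can be sketched with a fixed-window counter; the plan names and hourly limits below are illustrative assumptions, not Postman or gateway defaults, and real gateways typically use more sophisticated algorithms such as token buckets.

```python
import time

# A simplified fixed-window rate limiter of the kind a usage plan enforces
# at the API management layer; plan names and limits are illustrative.
PLANS = {"free": 100, "pro": 10_000}  # requests per hour

class UsagePlanLimiter:
    def __init__(self):
        self.counts = {}  # (api_key, hour window) -> request count

    def allow(self, api_key, plan, now=None):
        """Count the request and report whether it falls within the plan limit."""
        window = int((time.time() if now is None else now) // 3600)
        key = (api_key, window)
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= PLANS[plan]

limiter = UsagePlanLimiter()
results = [limiter.allow("abc123", "free", now=0) for _ in range(101)]
print(sum(results), "of", len(results), "requests allowed")
```

Because every request is attributed to a key and a window, the same counters that enforce the limit also provide the per-consumer usage observability the paragraph above describes.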
It is common practice to require API consumers to obtain an API key before they can access any API. While there are exceptions to this rule, most APIs require developers to register an application and select a usage plan, after which they are issued a key that must accompany every API call. API keys are a ubiquitous aspect of API consumption, and an essential part of the API lifecycle for API producers. Keys are managed via the gateway and management layer for APIs, and should be considered as part of both the producer and consumer sides of the lifecycle when it comes to issuing, managing, and then applying keys across many different environments, standardizing how environment variables are used to apply keys across the many different collections used to document, mock, test, and provide clients for APIs.
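Resolving keys from environment variables can be sketched with Postman-style `{{variable}}` placeholders; the variable names, base URL, and key value below are hypothetical, and a real key should come from a secrets store rather than source code.

```python
import re

# Postman-style {{variable}} placeholders resolved from an environment;
# variable names, URL, and key value are illustrative assumptions.
environment = {
    "baseUrl": "https://api.example.com",
    "apiKey": "sk_test_123",  # hypothetical key; never hard-code real ones
}

def resolve(template, env):
    """Replace every {{name}} placeholder with its environment value."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: env[m.group(1)], template)

url = resolve("{{baseUrl}}/products?api_key={{apiKey}}", environment)
print(url)
```

Keeping keys in environments rather than in the collections themselves is what lets the same collection be run against development, staging, and production without modification.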
A test-driven API lifecycle ensures that each API delivers the outcomes it was developed for in the first place, providing manual as well as automated ways to ensure an API hasn't changed unexpectedly and is as performant as required, helping establish a consistently high quality of service across all APIs. API testing should not be an afterthought; it should be a default aspect of the API lifecycle for any API being put into production. API testing requires a solid investment in establishing proper testing practices across teams, but once you do the work to establish a baseline of testing and properly train teams on the process and tooling involved, the investment will pay off down the road.
Contract testing ensures that each individual API path complies with the contract that was put forth for each version of an API. The OpenAPI for each API can be used to generate collections providing 100% coverage of all the API paths present. JSON Schema provides the details of the contract, and when applied as part of an OpenAPI definition, you end up with a machine-readable contract that can be manually or automatically tested against. It is common for collections to be defined for the contract tests of each API, and then have these contract tests scheduled for regular runs via a monitor, as well as included as part of CI/CD pipelines, providing robust contract test coverage that developers can execute manually, or automate as required as part of the API lifecycle.
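The essence of a contract check can be sketched as a minimal validator; real contract tests validate responses against the full JSON Schema in the OpenAPI definition, whereas the fields and types below are illustrative assumptions.

```python
# A miniature contract check in the spirit of JSON Schema validation: verify
# that a response body carries the required fields with the expected types.
# The contract below is a hypothetical stand-in for a real schema.
contract = {"id": int, "name": str, "price": float}

def violations(body, schema):
    """Return a list of contract violations for a response body."""
    problems = []
    for field, expected in schema.items():
        if field not in body:
            problems.append(f"missing field: {field}")
        elif not isinstance(body[field], expected):
            problems.append(f"wrong type for {field}")
    return problems

print(violations({"id": 1, "name": "Widget", "price": 9.99}, contract))  # []
print(violations({"id": "1", "name": "Widget"}, contract))
```

In a real pipeline the schema is derived from the OpenAPI, so any drift between the deployed API and its contract surfaces as a failing assertion rather than a surprise for consumers.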
Performance testing establishes a benchmark for what can be expected when it comes to the response time of each API. The OpenAPI for each API can be used to generate collections that provide 100% coverage of all the API paths present, or performance tests can be applied to just a subset of paths. It makes sense to keep performance test collections separate from other types of testing so that they can be run independently from contract, integration, or other tests, helping establish that performance levels are met from the different regions that reflect actual application usage of APIs. This provides a comprehensive look at the performance of an API across multiple regions, for 100% or a sensible sampling of API paths, ensuring that APIs and the teams behind them are meeting their SLAs over time and across the entire API lifecycle.
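A performance baseline check can be sketched by computing a percentile over observed response times; the 250 ms p95 threshold and the samples below are illustrative assumptions, not a recommended SLA.

```python
import statistics

# A sketch of a performance check: compare observed response times against
# an SLA baseline. The threshold and samples are illustrative assumptions.
SLA_P95_MS = 250

def p95(samples_ms):
    """95th percentile response time (inclusive quantile method)."""
    return statistics.quantiles(samples_ms, n=100, method="inclusive")[94]

samples = [120, 135, 140, 150, 160, 175, 180, 190, 210, 230]
observed = p95(samples)
print(f"p95 = {observed:.0f} ms, within SLA: {observed <= SLA_P95_MS}")
```

Percentiles are preferred over averages for SLAs because a handful of slow outliers, invisible in a mean, are exactly what consumers experience as unreliability.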
Security must be its own area of the API lifecycle, while also spanning testing, authentication, and potentially other areas. Over the last five years the world of API security has expanded, while also moving further left in the API lifecycle as part of a DevOps shift in how APIs are delivered. There are a number of elements present when it comes to security, but depending on the overall maturity of API operations, the resources and prioritization available to adequately realize these elements will vary.
Authentication begins on the producer side by selecting the appropriate authentication method when deploying an API to the gateway, setting the tone for how identity, authentication, and access control will work across the API lifecycle. Next, authentication can be defined as part of API consumption at the collection, folder, or individual request level, providing a configurable authentication layer for automating across API operations. Authentication should be balanced across both the producer and consumer side of the API lifecycle, putting well-known practices to work, allowing both sides to securely access, refresh, and engage with necessary authentication.
Security testing ensures that 100% of the surface area of an API is secure against the common types of attacks defined by OWASP. The OpenAPI for each API can be used to generate collections that provide 100% test coverage of all API paths, ensuring that every API, and every detail of each API, is evaluated against common security practices. No API should move into production without a security collection defined to test for all known vulnerabilities, moving the security conversation further left in the API lifecycle; developers can test manually as they develop an API, and security testing can then be automated as part of the deployment pipeline, or scheduled via a monitor. Security testing is just one part of a larger testing and security strategy, helping standardize how security is applied, while also properly equipping teams with the latest information and practices they need to successfully secure API operations.
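One such check, flagging operations that declare no security requirement (in the spirit of OWASP's broken-authentication category), can be sketched against an OpenAPI-like structure; the paths and scheme name below are hypothetical.

```python
# A small sketch of one OWASP-style check: walk an OpenAPI-like definition
# and flag operations that declare no security requirement. The paths and
# the "apiKey" scheme name are illustrative assumptions.
openapi = {
    "paths": {
        "/products": {"get": {"security": [{"apiKey": []}]}},
        "/admin/users": {"get": {}},  # no security declared
    }
}

def unsecured_operations(spec):
    """Return every operation with no security requirement declared."""
    flagged = []
    for path, operations in spec["paths"].items():
        for method, operation in operations.items():
            if not operation.get("security"):
                flagged.append(f"{method.upper()} {path}")
    return flagged

print(unsecured_operations(openapi))  # ['GET /admin/users']
```

Running checks like this against the definition, before any traffic exists, is what "moving security left" means in practice: the gap is caught at design time rather than by an attacker in production.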
Monitors can be used to execute any Postman collection against any environment. Due to the versatility of what a Postman collection can define, collections turn monitors into a powerful API automation and orchestration tool, beginning with the ability to schedule contract, performance, and other types of tests, but also allowing for the automation of specific workflows across many different APIs. Since collections can be used to define anything that can be done via an API, monitors can schedule the running of each capability from multiple cloud regions, applying many different environment variables, making monitors an essential, versatile, and executable part of defining how the API lifecycle works.
Contract Testing Monitor
Monitors can be used to automate the scheduling of any contract test, taking any collection of API requests, complete with test scripts, combining it with a targeted environment, and running it in a variety of cloud regions on a recurring schedule. This provides the ability to independently execute the contract testing for each API, and then monitor the contract tests over time to make sure nothing has changed, delivering the automation needed to ensure contract testing is consistently applied across all APIs, and defining whether an API is not just meeting the uptime portion of its service level agreement, but actually meeting business needs in an ongoing way.
Performance Testing Monitor
Monitors can be used to automate the scheduling of any performance test, taking a single API path, or a variety of them, and making sure each meets the established baseline for response time. Monitors take your performance testing collections and allow for their execution on a recurring schedule, from any cloud region, with a specific set of environment variables applied at execution time. It is important that performance tests exist as their own collections so that performance testing monitors can be more precise about where and when they run, providing a robust look at API performance from the regions and times of day that matter most. Performance testing monitors are essential to realizing a certain quality of service at scale across all APIs in production, providing an executable unit for understanding the performance of all API resources and capabilities over time.
Security Testing Monitor
Monitors can be used to automate the scheduling of any security test, taking collections generated from an OpenAPI and scanning every path for potential vulnerabilities. Collections define the surface area of the API, and test scripts provide the ability to automate security testing against each individual path; combined with a specific environment and scheduled to run as a monitor, this becomes a very robust way to scale security testing across the surface area of all APIs. It allows any developer to add security testing to the stack of contract, performance, and other tests, while also contributing to the overall security strategy applied across API operations, meeting the needs of individual APIs and automating security in a way that prevents any API from falling behind and becoming the next breach we’ll read about in the news.
The ability to discover APIs at all stages of the API lifecycle is essential for reducing redundancy across operations, helping teams find existing APIs before they develop new ones, properly matching API consumers with the right APIs, while supporting documentation, relevant workflows, and the feedback loops that exist as part of the operation of APIs internally within the enterprise, or externally with third-party developers. API discovery does not live at the beginning or the end of the API lifecycle; it should be considered across all areas of the lifecycle, ensuring that APIs, as well as the operations around them, are as discoverable as possible while remaining well informed when it comes to privacy, security, and terms of service.
Search is the fundamental element of API discovery, allowing API producers to search for existing APIs before they begin the development of new APIs, and API consumers to search for APIs they can use as part of their applications and integrations. A healthy API lifecycle depends on the ability to search internally and externally for APIs, as well as the operations around them. The indexing of workspaces, APIs, and the elements of the API lifecycle like documentation, mock servers, and tests should occur by default across all teams before the desired efficiency and velocity can be realized across operations. Postman platform search provides visibility across these dimensions of API operations, looking across private, partner, and public dimensions, while also leveraging a role-based approach to which elements of API operations are surfaced, making API search a priority as part of the API lifecycle.
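The indexing described above can be sketched as a tiny inverted index over workspace elements; the element names and descriptions below are invented for illustration, and a real search layer would add ranking, permissions, and richer tokenization.

```python
# A toy inverted index of the kind that powers API search: tokenize the
# descriptions of workspace elements and map each term to the elements
# that mention it. The workspace contents are illustrative.
elements = {
    "orders-api": "Orders API with documentation and mock server",
    "payments-api": "Payments API contract tests and documentation",
}

index = {}
for name, description in elements.items():
    for term in description.lower().split():
        index.setdefault(term, set()).add(name)

def search(term):
    """Return the set of elements whose description mentions the term."""
    return index.get(term.lower(), set())

print(sorted(search("documentation")))
print(sorted(search("mock")))
```

Indexing documentation, mocks, and tests alongside the APIs themselves is what lets a search for a capability surface not just an API, but everything a team needs to start using it.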
A private network is available to any team as part of the Postman platform. The private team network is accessible via the home page and provides a place to publish APIs, and each version of an API, so they can be discovered by internal consumers. APIs can be added to the private network from the network itself, as well as while managing the API within any workspace. APIs in the private network show up in the wider search and allow for grouping and browsing by folder, enabling teams who produce APIs to easily increase the visibility of their APIs across teams and beyond the workspaces where they are being developed, increasing the chance that other teams will find an existing API before they begin developing a potential duplicate, or will be able to put it to use within the application or integration they have planned.
The Postman public network is where you will find APIs from leading providers like Stripe, Twilio, and Salesforce, while also being the place you can publish your own public APIs for discovery by millions of Postman users. The public network is where teams can discover other public APIs they can put to work, or make their own public APIs more discoverable by third-party consumers. APIs can be published to the public network by changing the visibility of the workspaces they are in to public, making the APIs, collections, environments, and other elements viewable by anyone browsing or searching the Postman public API network. The public network isn’t just about API discovery; it also makes observability and engagement with public API consumers more accessible, moving from the many siloed API portals of the past, where API producers worked to build their own ecosystems, to API producers participating in a larger API platform ecosystem where developers already exist.
End of Life
Having a plan for the eventual retirement and ultimate deprecation of an API, or of specific paths or versions of an API, should be part of every API lifecycle. Even when there is no plan for deprecation, there should be a process in place for setting consumer expectations about how long an API will be supported, as well as a formal process to follow once retirement comes into view on the horizon. Reaching the end of life of an API is commonplace, and only becomes a problem when there is no plan, or no communication with consumers.
As an API, or a specific version of an API, approaches its end of life, it is common to retire the API, setting expectations with consumers about its eventual deprecation within a specific time period. The retirement of an API can involve updating documentation to reflect the retired state of the API, adding a header that goes out with each API response, and sending out notifications and other communications letting consumers know the API has reached the retirement stage. Retirement signals to consumers that an API is still available for use, but will not evolve any further, and that at some date in the near future it will be deprecated.
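The response-header signal mentioned above can be sketched as follows; the Sunset field is standardized in RFC 8594, while the specific date, Deprecation value, and migration URL below are illustrative assumptions.

```python
from datetime import datetime, timezone
from email.utils import format_datetime

# Retirement signals attached to every response of a retired API. The
# Sunset header (RFC 8594) carries the planned shut-off date; the date
# and link target here are hypothetical.
sunset_date = datetime(2026, 6, 30, tzinfo=timezone.utc)

retirement_headers = {
    "Deprecation": "true",
    "Sunset": format_datetime(sunset_date, usegmt=True),
    "Link": '<https://example.com/docs/migration>; rel="sunset"',
}

print(retirement_headers["Sunset"])
```

Emitting the date in machine-readable headers, alongside documentation updates and notifications, lets consumers automate their own alerting on upcoming deprecations instead of relying on email alone.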
API deprecation should be considered as early in the lifecycle of an API as possible, establishing estimates for how long each version of an API will be maintained and what the overall lifespan of an API should look like. However, it is common for many teams not to think about deprecation early on, as they are focused only on delivering new features and increasing the adoption of an API. Even with this reality, there should still be a formal strategy for developers to consider when it comes to API deprecation, providing a common blueprint they can follow to properly shut down an API without causing friction with consumers.
This blueprint intends to provide a high-level walkthrough of one possible way of defining a standardized API lifecycle, centered around an API design-first approach to delivering an API with Postman collections at the center. This view of the API lifecycle will not work for all teams and all APIs, but it provides one possible overview that may work for many situations. Each element within this blueprint provides a simple overview of what is involved across the entire life of an API, with more detail present on the detail page for each element (if you are viewing this on the API lifecycle project site). If you are reading a PDF or printed version, you can visit the landing page for this blueprint to access more information and view specific actions you might consider taking as part of applying each element of this proposed lifecycle within your own operations. This blueprint is a living document and will continue to evolve and be added to over time based upon feedback from readers. If you have any questions or feedback, or feel there is more information you need, feel free to jump on the GitHub discussion for this blueprint, or any of the individual elements present; the value this blueprint provides is actively defined by the feedback of community members like you.