SmartBear defines API standardisation
This is a guest post for the Computer Weekly Developer Network written by Nathan Wright in his role as SwaggerHub product marketing manager and API evangelist at SmartBear.
SmartBear Software is known for its application performance monitoring (APM), software development, software testing and API management tools.
Wright defines API standardisation (or API standardization, depending on which side of the pond you sit on) for us in light of the prevalence of this technology approach, which essentially exists to ‘glue’ together different parts of applications’ execution streams.
SmartBear’s State of API 2019 report uncovers API standardisation as the top challenge facing API teams as they support the continued growth of microservices… so what is it? Wright writes as follows…
What is API standardisation?
As the number of services (both internal and external) that organisations support continues to grow, there is a common trend towards overarching requirements that the smaller, distributed teams building and supporting those services are expected to follow.
We see API standardisation as a way to enforce guidelines on how these services communicate and govern how the data itself is being exposed — something like a common definition of what information a ‘user’ object includes regardless of which service is returning it.
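As a minimal sketch of what such a shared definition might look like (the field names here are hypothetical, not drawn from the report or any particular standard), a team could publish a common ‘user’ shape that every service returning user data agrees to honour:

```typescript
// Hypothetical organisation-wide definition of a 'user' object.
// Any service that returns user data exposes at least these
// fields, with these names and types.
interface User {
  id: string;          // stable, globally unique identifier
  email: string;
  displayName: string;
  createdAt: string;   // ISO 8601 timestamp, e.g. "2019-06-01T12:00:00Z"
}
```

In practice, a team might express the same shape once as a shared OpenAPI schema and reference it from each service’s definition, rather than redefining it service by service.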
One of the major benefits of enforcing these standards is speed of delivery – when teams don’t have to redefine existing assets, they can focus on the core requirements of the new project. This not only simplifies work such as a new integration and shortens the time it takes, but also reinforces a positive developer experience regardless of which service is being used.
What defines it and makes it so?
What we find in many cases is that there is a high-level group, or team, that defines the overarching standards and rules development groups follow. It could be an architecture team or something like a centre of excellence within an organisation.
These rules are communicated out to the groups that are tasked with developing, testing and implementing services.
Often, the most successful companies have a very open feedback loop between these various groups. The standards that are defined should be free to evolve like any other service – requirements change, and the standards need to grow with them.
Is it a never-ending task?
So is API standardisation (and classification) something of a never-ending task, as new standards constantly need to be built?
While it might seem like a never-ending task, we find there is typically a core set of standards that are laid out through a larger architectural project.
An example would be an organisation-wide shift to microservices, tied to a standard for how paths are defined or the content types that services must support. These generally don’t change much after the initial sign-off, as changing them would in many cases mean re-architecting an entire system.
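As a concrete sketch, organisation-wide conventions like these might be written down roughly as follows – the specific rules shown here (a versioned path prefix, plural resource names, JSON as the only supported content type) are illustrative assumptions, not rules taken from the report:

```typescript
// Hypothetical conventions a central architecture group might fix
// at initial sign-off: a versioned path prefix, plural resource
// names and a single supported content type across all services.
const API_PREFIX = "/v1";
const SUPPORTED_CONTENT_TYPE = "application/json";

// Every service builds its paths the same way, so consumers can
// predict URLs without reading each service's documentation.
const userRoutes = {
  list: `${API_PREFIX}/users`,
  get: (id: string) => `${API_PREFIX}/users/${id}`,
  orders: (id: string) => `${API_PREFIX}/users/${id}/orders`,
};
```

Changing `API_PREFIX` or the supported content type after services ship is exactly the kind of move that ripples through an entire system, which is why such rules rarely change after sign-off.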
Where we see a shorter evolution of standards is in the format of the data being served – this is very much in line with traditional development, where a team takes on feedback, adjusts requirements, roadmaps the changes, gets sign-off and implements.
As the amount of data being consumed continues to grow, being flexible in matching that growth – and recognising where the line is between a breaking and a non-breaking change – becomes critical.
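To illustrate where that line typically falls (again a hedged sketch with hypothetical fields), consider two ways a shared ‘user’ shape could evolve:

```typescript
// Version 1 of a hypothetical shared 'user' shape.
interface UserV1 {
  id: string;
  email: string;
}

// Non-breaking evolution: a new optional field is added. Consumers
// that ignore fields they don't recognise keep working unchanged.
interface UserV1WithName {
  id: string;
  email: string;
  displayName?: string; // new and optional
}

// Breaking change: a field consumers rely on is renamed or removed.
// This typically forces a new major version and a coordinated
// migration across every consuming team.
interface UserV2 {
  id: string;
  emailAddress: string; // 'email' renamed – existing clients break
}
```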
Who agrees on the standards?
We find in many cases there is a small group of practitioners who will lay out new rules and standards, but there will be buy-in, or a ‘sign-off’, from a larger group of key stakeholders, such as the team leads who will be the primary consumers of a service.
While they may not be laying out the initial standards, in successful organisations there is a very short feedback loop between these groups and, most importantly, it is a relationship defined by collaboration and fast iteration, not pushback or top-down implementation.