Overview

This tutorial is about the elements of an ecosystem for development of Application Programming Interfaces (APIs).

By “API” I also include the GraphQL data query language (from Facebook) and tools such as the in-browser GraphiQL IDE and Apollo Codegen, which read GraphQL schemas to create client programs.
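
For example, here is a minimal sketch of a GraphQL query sent over HTTP from a TypeScript client. The endpoint URL and the country field are hypothetical stand-ins for whatever schema a real API exposes.

```typescript
// Minimal GraphQL query over HTTP. The endpoint URL and the `country`
// field are hypothetical -- substitute your API's actual schema.
async function fetchCountry(code: string): Promise<unknown> {
  const response = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: "query Country($code: ID!) { country(code: $code) { name } }",
      variables: { code },
    }),
  });
  const payload = (await response.json()) as { data?: unknown; errors?: unknown };
  if (payload.errors) throw new Error(JSON.stringify(payload.errors));
  return payload.data;
}

fetchCountry("NO").then(console.log).catch(console.error);
```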

In this 3-minute video, click “CC” to toggle the Closed Captioning text below.

API services obtain revenue based on access by client apps written by other developers.

But first those developers need to find (discover) that the API exists, perhaps from galleries of available APIs such as apis.guru.

It’s helpful to have a comparison of results from static quality scans evaluating the machine-readable interface specs of each API.

There are several competing API specification formats (such as RAML, WADL, and Swagger). But Swagger seems to be the most popular at this time.

Discussion and evaluation of APIs can be more focused when they revolve around community agreement on a specific set of rules driving quality scans.
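
As a sketch of what such a scan could look like, the TypeScript below walks a Swagger/OpenAPI JSON file and flags operations that break two illustrative rules (missing description, missing operationId). A real rule set would be the community-agreed one described above.

```typescript
// Rule-based scan over a Swagger/OpenAPI JSON spec (a sketch, not a full linter).
// Rules shown: every operation should carry a description and an operationId.
import { readFileSync } from "node:fs";

type Issue = { path: string; method: string; rule: string };

const HTTP_METHODS = new Set(["get", "put", "post", "delete", "patch", "head", "options"]);

function scanSpec(file: string): Issue[] {
  const spec = JSON.parse(readFileSync(file, "utf8"));
  const issues: Issue[] = [];
  for (const [path, item] of Object.entries<any>(spec.paths ?? {})) {
    for (const [method, op] of Object.entries<any>(item)) {
      if (!HTTP_METHODS.has(method)) continue; // skip path-level keys like "parameters"
      if (!op.description) issues.push({ path, method, rule: "missing-description" });
      if (!op.operationId) issues.push({ path, method, rule: "missing-operation-id" });
    }
  }
  return issues;
}

console.table(scanSpec("swagger.json")); // assumes a local swagger.json to scan
```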

Documentation about the API helps not only for the API to be discovered from the public galleries, but also for developers to more quickly appreciate the intricacies of that API.

What’s even better than reading documentation is to interact live with a sample demonstration app so developers and potential users can really grasp the value of the services exposed.

Then developers would be more enticed to obtain the API keys that client apps need to provide when accessing API services.
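
A hypothetical example of what that looks like from the client side: the header name, endpoint, and environment variable below are made up, since each provider documents its own scheme.

```typescript
// Hypothetical client call supplying an API key. The header name ("X-Api-Key"),
// the endpoint, and the WEATHER_API_KEY variable are all illustrative.
const API_KEY = process.env.WEATHER_API_KEY ?? "";

async function getForecast(city: string): Promise<unknown> {
  const res = await fetch(
    `https://api.example.com/v1/forecast?city=${encodeURIComponent(city)}`,
    { headers: { "X-Api-Key": API_KEY } },
  );
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

getForecast("Oslo").then(console.log).catch(console.error);
```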

Having enough sample data is important so test automation scripts have the test coverage needed to make sure that all features really work, and work quickly, even under load.
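
Generating that sample data can itself be scripted. This tiny sketch (with a made-up record shape) produces however many records a load-test script needs.

```typescript
// Sketch of generating enough sample records to drive load tests.
// The record shape (id, name, email) is made up for illustration.
function sampleUsers(count: number) {
  return Array.from({ length: count }, (_, i) => ({
    id: i + 1,
    name: `user${i + 1}`,
    email: `user${i + 1}@example.test`,
  }));
}

console.log(`generated ${sampleUsers(10_000).length} records for the test run`);
```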

A mock server doesn’t provide all the logic and data of a real server, but some developers use one while they build their clients because it provides a stable endpoint running locally, even while off the network.
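
A minimal local mock, assuming Node.js and with made-up routes and payloads, can be as small as this:

```typescript
// Minimal local mock server returning canned JSON so client work can
// continue off the network. Routes and payloads here are made up.
import { createServer } from "node:http";

const canned: Record<string, unknown> = {
  "/v1/forecast": { city: "Oslo", tempC: 7, source: "mock" },
};

createServer((req, res) => {
  const body = canned[req.url?.split("?")[0] ?? ""];
  res.writeHead(body ? 200 : 404, { "Content-Type": "application/json" });
  res.end(JSON.stringify(body ?? { error: "no mock for this route" }));
}).listen(3000, () => console.log("mock API listening on http://localhost:3000"));
```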

The Ecosystem

All this is what makes up a “full” API ecosystem today.

Would you like this? Let me know.


It takes effort to create docs, demo, test, and mock server code.

That is why many have begun to automate the generation of such code.

But ideally, the logic used to generate this code would be based not only on the interface specifications, but also on wisdom culled from patterns in the data over time and from analysis of history previously used only for billing.
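
As a sketch of the spec-driven half of that idea, the snippet below emits one client stub per (path, method) found in a Swagger/OpenAPI JSON file. Real generators such as Swagger Codegen handle parameters, schemas, and auth; this only shows the idea, and the servers[0].url lookup assumes an OpenAPI 3 style spec.

```typescript
// Sketch: emit one client stub per (path, method) found in a spec file.
// Assumes an OpenAPI 3 style "servers" block; Swagger 2 uses host/basePath instead.
import { readFileSync, writeFileSync } from "node:fs";

const HTTP_METHODS = new Set(["get", "put", "post", "delete", "patch"]);
const spec = JSON.parse(readFileSync("swagger.json", "utf8"));
const baseUrl = spec.servers?.[0]?.url ?? "";
const lines: string[] = [];

for (const [path, item] of Object.entries<any>(spec.paths ?? {})) {
  for (const [method, op] of Object.entries<any>(item)) {
    if (!HTTP_METHODS.has(method)) continue; // skip path-level keys like "parameters"
    const name = op.operationId ?? `${method}${path.replace(/\W+/g, "_")}`;
    lines.push(
      `export async function ${name}() {`,
      `  const res = await fetch("${baseUrl}${path}", { method: "${method.toUpperCase()}" });`,
      `  return res.json();`,
      `}`,
      ``,
    );
  }
}

writeFileSync("client.generated.ts", lines.join("\n"));
console.log(`wrote ${lines.length} lines of generated client code`);
```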

More sophisticated variation of data values in generated code is now the frontier. Such data include values and statistics from historical points in time as well as projections and predictions of values expected in the future.

With apps of enterprise scope and complexity, manual coding of the client and server code base by human developers can be repetitive and error-prone, and therefore take more time and cost more than it could.

So instead of manually defining interface specs, they can now be generated from the code base by marking up server code with comments recognizable by a parser such as Doxygen.
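
For example, a Node handler can be marked up with machine-readable doc comments in the same spirit (tools such as swagger-jsdoc read YAML embedded in doc comments); the route and annotation below are illustrative only.

```typescript
// A handler marked up with a doc comment that a spec-generating parser can read.
// The @openapi YAML block follows the convention used by tools like swagger-jsdoc;
// the /v1/forecast route and its shape are illustrative only.

/**
 * @openapi
 * /v1/forecast:
 *   get:
 *     summary: Returns the forecast for a city.
 *     parameters:
 *       - name: city
 *         in: query
 *         required: true
 *         schema: { type: string }
 *     responses:
 *       200:
 *         description: Forecast found.
 */
export function getForecastHandler(city: string) {
  return { city, tempC: 7 }; // stand-in response body
}
```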

All this cuts time to market because changes to server code can now be quickly reflected in the docs, the client demo code, mock server code, test code, and benchmark run results.

Ask me questions about this.

Recap: My Ask

Here is what I’m advocating:

  1. Program code-generation tools so that the many future changes in requirements are automatically reflected in working code.

  2. Scan Swagger JSON interface specs for issues based on commonly accepted rules, just as we now use SonarQube to statically scan Java code for issues.

  3. Expand a central museum (marketplace) of APIs out in the open so people can discover and compare the techniques APIs employ, based on various evaluation criteria (like Consumer Reports does with consumer products).

  4. Elicit insights from billing data about where databases are growing organically in order to predict areas of stress, so developers and programs have the wisdom to alter test code to proactively verify whether the database is ready for those specific types of growth.

  5. Leverage a community of developers and other professionals to achieve the above through a smart forum for collaboration.

  6. Perhaps the biggest one is that organizations can be inundated by so many APIs that they need an (easy) way to manage them together as a whole.

    When an organization has a way of generating client apps from code, it can quickly make use of additional APIs by leveraging prior API work (such as company security and branding). That makes the business more nimble.

    When changes occur, the business can adapt more quickly if it can re-generate code rather than rewrite it by hand.

Email or call me so we can see how this can work for you and your organization.

We’re talking about generating code based on a standard specification (Swagger) with known formats.

API search engines

GitHub: https://github.com/

Postman Explore: https://www.postman.com/explore/apis

ProgrammableWeb API Directory: https://www.programmableweb.com/apis/directory

APIs Guru: https://apis.guru/

Public APIs Github Project: https://github.com/public-apis/public-apis

RapidAPI Hub: https://rapidapi.com/search/

References

  • http://www.w3.org/Submission/wadl/
  • https://developers.helloreverb.com/swagger/

More on API Microservices

This is one of a series: