API Docs-Bots, Assistants & OpenAPI Auto Fixing

Let’s look into our crystal ball to see how Docs-Bots, API assistants, and OpenAPI auto-fixing will drive future API adoption.

2024 marks a decade since APIMatic was incorporated. That’s a lot of time being part of this beautiful community of API developer tools and witnessing the API industry evolve. From new API description formats popping up to old ones fading away, from the love of HATEOAS to the rise and fall of microservices, the GraphQL storm, some big acquisitions, and a few sad exits, tech alliances, patent fights, standardization efforts – it’s been an exciting rollercoaster!

Where is the industry heading next? Drawing on our learnings as a company focused on improving the developer experience of both API providers and consumers, we pick three active areas of innovation:

1- API Docs-Bots to Massively Reduce Developers’ Onboarding Time

API documentation solutions have come a long way in the past few years. Features like runnable code samples and guided walkthroughs help developers get up and running quickly with an API. Still, a learning curve is involved before a developer can start writing the integration code.

Generative AI will likely flatten that learning curve and shorten the long API onboarding steps. For example, instead of going through each endpoint, reading the docs, and hand-writing the authentication flow, a developer could ask an “API Docs-Bot” a question in natural language and get integration code for the desired use case within seconds.

What About Code Reliability?

At the moment, Gen-AI is good at generating syntactically correct code, but it lacks an understanding of the semantics. The challenge, therefore, is getting the AI bot to generate production-ready integration code for different scenarios. Two ways to tackle this problem are using:

An expert developer: feeds detailed prompts to Gen-AI and keeps refining them until integration code of the desired quality is returned. This is manual and hard to scale.

A sophisticated code generator for APIs: trains the AI on an API’s SDK so that its responses are grounded in code produced deterministically by the code generator.

To test approach #2, we’ve conducted a few experiments using APIMatic, and the initial results of creating an API Docs-Bot by combining Gen-AI with our code generator have been quite promising (see below). Expect to see similar attempts in 2024 by API solution providers.
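To make approach #2 concrete, here is a minimal sketch of the idea: instead of letting the model free-write code, the bot only matches a query to a catalog of deterministically pre-generated SDK snippets and fills in parameters. All names here (the snippet catalog, `ApiClient`, the intent keys) are hypothetical illustrations, not APIMatic’s actual API; a real Docs-Bot would use an LLM for the intent matching that the keyword lookup stands in for.

```python
# Catalog of SDK snippets produced deterministically by a code generator
# (hypothetical templates for illustration).
SDK_SNIPPETS = {
    "authenticate": (
        "client = ApiClient(client_id='{client_id}', "
        "client_secret='{client_secret}')"
    ),
    "list_transactions": (
        "transactions = client.transactions.list(account_id='{account_id}')"
    ),
}

def docs_bot_answer(query: str, **params: str) -> str:
    """Pick the pre-generated SDK snippet matching the query and fill it in.

    A real Docs-Bot would use an LLM for intent matching; here a simple
    keyword lookup stands in, so the returned code stays deterministic.
    """
    for intent, template in SDK_SNIPPETS.items():
        if intent.replace("_", " ") in query.lower():
            return template.format(**params)
    return "# No matching SDK snippet found"

print(docs_bot_answer("How do I authenticate?",
                      client_id="id", client_secret="secret"))
```

The key design point is that the generated code itself never comes from the model, only the routing does, which is what keeps the answers reliable.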

2- AI Assistants to let Non-Developers Play with APIs

Developers or non-developers, we all use APIs. The difference is that developers build apps using APIs, while non-developers use APIs through those apps. Moreover, developers can use an API for countless use cases, while non-developers can only try the limited use cases a developer has built into an app.

Imagine I want to know how many times I have been to restaurants in the past 12 months, the total expense, and the average cost per visit. Today, my options are:

download my bank statements and look through them manually, or
find a FinTech app that offers this functionality, or
get a developer to write an app using OpenBanking APIs.

Later, what if I want to see my top expense categories for the last two years as a graph? Again, I can do it manually or wait for a developer to build that functionality.

The good news is that Gen-AI can remove this dependency: AI assistants let non-developers access an API the way they want, without a developer in the middle. Because of their conversational nature, end users can easily provide their input in natural language and refine the output conversationally if needed.
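The restaurant question above is a good example of code such an assistant might generate and run behind the scenes. The sketch below uses hypothetical transaction data and field names standing in for what an OpenBanking transactions endpoint might return:

```python
# Hypothetical transactions, as an OpenBanking API might return them.
transactions = [
    {"merchant_category": "restaurant", "amount": 42.50},
    {"merchant_category": "groceries", "amount": 88.10},
    {"merchant_category": "restaurant", "amount": 27.00},
    {"merchant_category": "restaurant", "amount": 60.50},
]

# Filter down to restaurant visits, then compute the three requested figures.
restaurant_visits = [t for t in transactions
                     if t["merchant_category"] == "restaurant"]
visits = len(restaurant_visits)
total = sum(t["amount"] for t in restaurant_visits)
average = total / visits

print(f"{visits} visits, ${total:.2f} total, ${average:.2f} average per visit")
# → 3 visits, $130.00 total, $43.33 average per visit
```

The assistant’s job is to translate the natural-language question into this kind of filter-and-aggregate code; a follow-up question (“now show it as a graph”) would just regenerate the last step.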

In 2024, we expect to see a plethora of AI assistants. However, similar to the API Docs-Bot case, training Gen-AI to reliably access data from an API and turn it into information takes considerable time. A quicker way, again, is to use a code generator to produce the integration code reliably and instantly, and train the AI to process the data coming back through the APIs.

We experimentally built a financial assistant over the recent holidays using Payments New Zealand’s OpenBanking APIs. Here are a couple of queries in the screenshot below to illustrate the outcome (we expanded the sandbox data to try different scenarios):

3- Governance – from OpenAPI Ratings to OpenAPI Fixing

As the significance of API governance and design continues to grow, new tools are arriving in the market that assess an API’s quality through its API definition. Two noteworthy examples are Zuplo’s Rate My API and Treblle’s API Insights. Both tools audit API definitions against certain standards and conventions and assign a good/bad score across various categories, including design, security, completeness, and more. By highlighting specific areas of improvement, these tools empower API designers to make their definitions flawless. At APIMatic, we also help API designers improve their definitions for stronger SDK and documentation generation. Here is what an auto-generated audit snapshot of an OpenAPI file with some validation and linting issues usually looks like:

Why is a Flawless API Definition Important?

GIGO (Garbage In, Garbage Out) is a familiar adage in the computer science world: flawed input produces nonsense output. GIGO applies perfectly to any process that takes an OpenAPI definition as input. Whether you’re feeding your OpenAPI definition to Gen-AI, generating docs or SDKs, or doing anything else with it, you either ensure a flawless API definition or accept the garbage that piles up. This problem is not new for API definition files (see some common mistakes from 2018 in OpenAPI, APIBlueprint, and RAML definitions), but it is growing in magnitude and demands immediate attention. Among APIMatic users, we saw:

20% of the total OpenAPI definitions brought to APIMatic violated OpenAPI standards and failed during the import process, whereas
50% conformed to OpenAPI standards but lacked the elements required to generate viable integration code.
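That second category is the subtler one: the definition is valid OpenAPI, yet it lacks elements a code generator needs. A minimal sketch of such a codegen-readiness check is below; the two rules (require `operationId`, require a response schema) are illustrative examples, not APIMatic’s actual rule set.

```python
def lint_for_codegen(openapi: dict) -> list[str]:
    """Return codegen-readiness issues found in an OpenAPI 3.x definition."""
    issues = []
    for path, operations in openapi.get("paths", {}).items():
        for verb, op in operations.items():
            # SDK generators typically derive method names from operationId.
            if "operationId" not in op:
                issues.append(f"{verb.upper()} {path}: missing operationId")
            # Without a response schema there is no return type to generate.
            for status, resp in op.get("responses", {}).items():
                content = resp.get("content", {})
                if not any("schema" in c for c in content.values()):
                    issues.append(
                        f"{verb.upper()} {path}: response {status} has no schema"
                    )
    return issues

# Valid OpenAPI, but unusable for SDK generation.
definition = {
    "openapi": "3.0.0",
    "paths": {
        "/accounts": {
            "get": {"responses": {"200": {"description": "OK", "content": {}}}}
        }
    },
}
print(lint_for_codegen(definition))
# → ['GET /accounts: missing operationId', 'GET /accounts: response 200 has no schema']
```

Checks like these are exactly what lets a tool flag the 50% of definitions that import cleanly but still cannot yield viable integration code.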

Towards a Future of Auto-Fixing

While an audit report helps identify the issues, somebody still has to fix them. This is the hard part, given the time and resources required to rectify a complex OpenAPI definition, especially if one is not too familiar with the syntax. Can AI help with the auto-fixing? Yes, but reliability will remain a question unless somebody pays to train something like an LLM for the task. Until then, OpenAPI experts, and hand-built tools with predefined rules, will keep helping API designers.

Therefore, in 2024, we expect audit tools to go a step further and help designers fix the issues (semi-)automatically. At APIMatic, we’re running a “Fix My OpenAPI” movement primarily focused on SDK and docs generation. A free-to-use VS Code extension comes with over 1,200 predefined out-of-the-box rules. It ensures strict adherence to API standards and recommends improvements and best practices to enhance one’s API definition. Moreover, the extension has an “auto-fix” feature that removes some pesky issues with just a few clicks.
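To illustrate what a rule-based auto-fix can look like, here is a sketch of one such rule: deriving a missing `operationId` from the HTTP verb and path. This is an illustrative rule of our own devising, not the actual logic of the Fix My OpenAPI extension.

```python
def autofix_operation_ids(openapi: dict) -> dict:
    """Fill in a missing operationId for every operation, in place."""
    for path, operations in openapi.get("paths", {}).items():
        for verb, op in operations.items():
            if "operationId" not in op:
                # e.g. GET /accounts/{id} -> getAccountsId
                parts = [p.strip("{}") for p in path.split("/") if p]
                op["operationId"] = verb.lower() + "".join(
                    p.capitalize() for p in parts
                )
    return openapi

definition = {"paths": {"/accounts/{id}": {"get": {"responses": {}}}}}
fixed = autofix_operation_ids(definition)
print(fixed["paths"]["/accounts/{id}"]["get"]["operationId"])
# → getAccountsId
```

Because the rule is deterministic, the fix is repeatable and reviewable, which is precisely the reliability advantage predefined rules currently hold over an LLM doing the same edit.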

That’s it for today. Let’s see what new and improved tooling 2024 brings to help API designers curate OpenAPI definitions, and how far the Generative-AI wave takes the API space. Feel free to reach out at adeel@apimatic.io with any feedback or if you want to try out the Gen-AI POCs mentioned in this blog. I also want to acknowledge team APIMatic, especially Saeed, Ali, Faria, and Sidney, for their valuable input and R&D initiatives.
