What Does the Rise of Bots Mean for APIs?

As bot technology evolves and migrates into the API space, many API practitioners are left asking: what does this mean for our industry? It’s an interesting question and intellectual experiment to undertake, and one that is heavily rooted in futurism.

In this piece, we’ll address the rise of bots, and see what areas of the API space they might influence in coming years. We’ll look at how pronounced this influence could be, as well as areas where bots may be more limited in application.

The Rise of the Bot

There’s been a lot of media chatter about bots disrupting the API economy, with varying predictions as to how large the impact will be. Beerud Sheth of TechCrunch thinks that “we’re reaching the limits of the mobile ‘OS + apps’ paradigm,” and that bots will soon replace the apps that APIs often drive in the mobile space. Niko Nelissen of VentureBeat thinks that the “death of the API” is coming, and coming fast. So is this all hyperbole, or is there some truth behind these opinions? What are bots, and what space do they occupy in the modern tech landscape?

Bots, or services that act semi-independently given an established goal and functionality, have existed almost as long as the languages that support them, dating back to the introduction of scripting shells in the 1960s. Calvin Mooers introduced active functions, which insert data into running scripts using command substitution, and as far back as 1964, Louis Pouzin was creating processors specifically for command scripts to automate and interface with the CTSS platform.

Our modern concept of a bot, however, is an independent agent that carries out a task at the behest of a system or program. Bots are distinct from scripts in that they provide near human-like interaction with the interfacing client. They communicate with their backend in code, and communicate with the client in plain English or another format intelligible to humans rather than machines.

A classic example of this sort of interaction is the ELIZA program. Created in 1964 by Joseph Weizenbaum of the MIT Artificial Intelligence Laboratory, ELIZA responded to human input with a set of responses culled from documented conversations and clinical material. As people interacted with ELIZA, it would respond using pattern-matching scripts, seeming to learn and adapt to input. While ELIZA was, at its core, a simple parroting piece of software, the idea caused many developers to rethink how artificial intelligence would function, what it would look like, and how it might interface with other systems.

Bots, therefore, can be seen as an intermediary between scripts and true artificial intelligence. In this space, a bot essentially functions as a universal gateway to an API, eliminating time spent toying with a company’s APIs and internal structures by tying that exploration to a single, simple interface.

The Sea Change of Interactivity

Looking through the history of bot technology, one can see that bots have, by and large, been crafted to make user interaction more automated and effective, in both user-to-user and user-to-resource communication. In this same vein, bots are poised to trigger a sea change in how we interact with APIs and the front ends they drive.

In the current paradigm, a user or client must submit a long string or request sequence to an API in order to get the data or action they need. This is fine for machine-to-machine interfacing, where hundreds of thousands of processes run on hundreds of threads in a data center every second. For a human, however, who must type out each character or memorize shorthand to represent those characters, it is a huge roadblock to common usage.

Bots represent a great shift in this basic interaction. A great example is the variety of bots built on the Slack API. Users can interface with these bots as if they were humans with their own accounts, asking them to carry out fundamental tasks. Not only are these tasks handled quickly and accurately, but the bots that perform them interact with the requester as if they were human, with human-like syntax and intonation.

Interaction is as simple as stating your request in plain English. “@officebot, please send an email to Scott wishing him a Happy Birthday” will elicit two responses: first, a simple acknowledgement of “Ok Kristopher!”, and second, an API-driven automation that sends the actual email.
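To make the mechanics concrete, here is a minimal sketch in Python of how such a bot might translate a plain-English chat message into a machine-readable API request. The command format, the “send_email” action, and the office bot itself are hypothetical placeholders rather than Slack’s actual bot framework; the sketch only illustrates the parse-then-dispatch pattern.

```python
import re

# Hypothetical command format modeled on the example above; this is not a
# real chat platform's bot API, only an illustration of parse-then-dispatch.
COMMAND = re.compile(
    r"send an email to (?P<recipient>\w+)\s+(?P<message>.+)",
    re.IGNORECASE,
)

def handle_chat_message(text: str) -> str:
    """Translate a plain-English request into a machine-readable API call."""
    match = COMMAND.search(text)
    if not match:
        return "Sorry, I didn't understand that."

    # The bot has two jobs: acknowledge the human, and dispatch the real work.
    api_request = {
        "action": "send_email",            # hypothetical backend action
        "to": match.group("recipient"),
        "body": match.group("message"),
    }
    print(f"Dispatching API request: {api_request}")
    return f"Ok! I'll email {match.group('recipient')} for you."

print(handle_chat_message(
    "@officebot, please send an email to Scott wishing him a Happy Birthday"
))
```

The human-facing reply and the machine-facing request are deliberately separate: the former is friendly prose, while the latter is the same structured payload the API would receive from any other client.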

The addition of human-like interaction seems like a small feature, a “creature comfort” more than a revolution, but its importance cannot be overstated. Human-like interaction is one of the greatest goals of user experience design, replacing the cold, calculating, mechanical responses of the API with ones that are friendly and informative, and thereby increasing adoption rates and usability.

In this way, bots will certainly replace parts of the API space in the sense that they will become the new face of existing systems and services, which is really not replacing but redefining. It’s this redefinition that is much lauded, as it means APIs can increasingly be designed for simple, easy-to-understand, yet powerful and direct instruction in plain language. Designing for bot-to-bot communication strips away many layers of obfuscation out of necessity, and has many secondary human-centric benefits.

As APIs are designed for greater coordinated communication between themselves and the automated devices that use them, the result will be a better defined, more completely documented API ecosystem as a secondary effect.

Technical Obfuscation

One of the huge benefits of the emerging bot space is that, by removing the need for the user to interact with an API backend, you create a more secure interface.

A bot adds another abstraction layer to an API

This added layer acts as a “letterbox” entry point for your API, where the API is hidden behind a single, central, separate point of entry. When an outsider can’t peek inside and can’t enter by force, security is inherently increased. APIs with bot frontends do just this. Users interface with a dialogue box or an automated web portal, and are able to see resources without directly accessing them. Users do not have to look through the API code to create their own methods of interacting with the backend, and so the code is not open to manipulation or prone to bug discovery.
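The idea can be sketched in a few lines of Python: the bot exposes only a small, fixed set of intents and is the only component allowed to talk to the internal API. The intent names and internal endpoints below are hypothetical and the backend call is stubbed out; the point is that anything outside the whitelist never reaches the API at all.

```python
# A minimal sketch of the "letterbox" pattern: the bot is the single entry
# point, and only whitelisted intents are ever forwarded to the backend.
ALLOWED_INTENTS = {
    "check_balance": "/internal/accounts/balance",   # hypothetical endpoints
    "list_invoices": "/internal/billing/invoices",
}

def call_internal_api(endpoint: str, user_id: str) -> dict:
    # Stand-in for a real HTTP call to the private backend; end users
    # never reach this function directly.
    return {"endpoint": endpoint, "user": user_id, "status": "ok"}

def bot_request(intent: str, user_id: str) -> str:
    """Forward only known intents; everything else stops at the letterbox."""
    endpoint = ALLOWED_INTENTS.get(intent)
    if endpoint is None:
        return "Sorry, I can't help with that."
    result = call_internal_api(endpoint, user_id)
    return f"Done! ({result['status']})"

print(bot_request("check_balance", "user-42"))    # reaches the backend
print(bot_request("drop_all_tables", "user-42"))  # never gets past the bot
```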

Many readers might of course think “my code doesn’t have bugs, it’s perfect!”, or that because they already have security features in place, they are safe from these types of threats. The reality, however, is that every system, every security methodology, and every obfuscation technique can be broken. It’s not a matter of if, but when, determined largely by how valuable your data is perceived to be. Giant corporations have been hacked through social engineering, poor file encryption, or simple cluster failure on the nodes responsible for security.

So then the question is this — if you know people want to attack you, and that your garden gate is vulnerable, do you leave your front door open? Or do you lock the front door, and peer out the letter box when communication is needed?

Standardization Nullification

Another benefit of bot technology is that it could negate issues of standardization and compatibility. When bots are designed to take simple English (or the user’s language, for that matter) as commands, it doesn’t matter if the requesting system is coded in Python and the receiver in Java: the bots that front both systems can understand, and translate.
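As a rough illustration, the sketch below (in Python, with made-up services and formats) shows the translation step: the bot reduces a plain-language request to a neutral intent, then adapts that intent to whatever wire format each backend expects, so neither side needs to know what the other is written in.

```python
# Sketch of the bot-as-translator idea: one neutral intent, two hypothetical
# backends that each expect a different wire format.
import json
import xml.etree.ElementTree as ET

def to_rest_payload(intent: dict) -> str:
    """Adapter for a JSON/REST backend (written in Python, say)."""
    return json.dumps(intent)

def to_xml_payload(intent: dict) -> str:
    """Adapter for an XML-speaking backend (written in Java, say)."""
    root = ET.Element("request")
    for key, value in intent.items():
        ET.SubElement(root, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

# The user only ever says something like "book me a meeting at 3pm";
# the bot's language layer produces this neutral intent from that sentence.
intent = {"action": "book_meeting", "time": "15:00"}

print(to_rest_payload(intent))  # what the REST backend receives
print(to_xml_payload(intent))   # what the XML backend receives
```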

This is somewhat of a holy grail for online communication. For years, the idea of the semantic web — everything connected with rich multimedia — was the golden goal for all, but was severely hampered by differences in choice of browser, language, and syntax. Anyone who once had a Myspace page will tell you that visiting a friend’s profile was a gamble — would you be able to view it in this browser, with these extensions, and would it support this type of file? Or would the entire thing crash and burn the second you tried to load it, with -webkit calls breaking your IE experience?

Being able to negate this experience, and bridge between disparate systems, isn’t just a creature comfort; it holds huge potential. Unfortunately, it also comes with a potential issue of its own: bot standardization.

Consider just how complex systems would have to be to tie together hundreds, perhaps thousands, of bots designed in various languages and using various syntaxes, all of which have to talk to each other. This can be managed, of course. It was managed in the early days of the internet, up to the current decade, through the authorization and approval methodologies of the Internet Engineering Task Force (IETF), which laid the foundation for basic interaction on the internet.

Of course, this is a shifting of concern: we’ve replaced issues of API language standardization with issues of API bot standardization. While the bot moves standardization concerns away from the consumer, it places them firmly in the lap of the developer. A body like the IETF could be created for bots to negate much of this, but, like the IETF, it might result in slower innovation and adoption of new standards, thereby negating much of the benefit of the rise of bots.

Mimicry is Not Learning

Much of the futurism surrounding bots is steeped in what they “could do,” and what they “could do” is severely limited by the nature of bots themselves.

While it’s easy for the average user to think a truly artificially intelligent system exists, the fact is that bots are not thinking. Whereas a human correlates new information with thousands, perhaps millions, of synonyms, combinations, antonyms, and intonations based on inflection, context, and intent, a bot can only respond in the ways it has been programmed to respond. While bots are intelligent in the sense that they have been taught how to act by their human designers, this intelligence is mimicry.

Bots can’t design bots – at least not in their current form

For example, what happens when a user places a food order using, say, a Sushi Bot? The user might say “I’m hungry, can you deliver sushi to me at work?” A bot can parse this, but in order to do so, every variation has to be anticipated and programmed. Variations of “I’m hungry,” ranging from “I’m famished” to “Hey, can you get me some food,” have to be thought of during the design and implementation phase, and unless deep learning algorithms are consistently updating the machine’s vocabulary, this has to be done by hand by the bot’s designer.
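A toy version of that hand-enumeration, sketched below in Python with an invented phrase list, shows why this is mimicry rather than understanding: anything the designer failed to anticipate simply falls through.

```python
# A hypothetical "Sushi Bot": every hunger phrase must be listed by hand.
HUNGER_PHRASES = [
    "i'm hungry",
    "i am hungry",
    "i'm famished",
    "i'm starving",
    "can you get me some food",
]

def sushi_bot(utterance: str) -> str:
    text = utterance.lower()
    if any(phrase in text for phrase in HUNGER_PHRASES):
        return "Great! Where should I deliver your sushi?"
    # Anything the designer didn't anticipate lands here.
    return "Sorry, I don't understand."

print(sushi_bot("I'm famished, can you deliver sushi to me at work?"))
print(sushi_bot("My stomach is rumbling"))  # unanticipated phrasing fails
```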

For a great live example of this, try any chatbot on the internet. While the responses might appear “human” enough, given how long many of these services have been live, note just how many errors in understanding and conceptualization there are. Simple errors, which can often be spotted when the bot responds by parroting the question (e.g., saying “my head hurts” might elicit the response “why does my head hurts?”), quickly mount.

While these errors are certainly acceptable when we chat for fun, consider them within the constraints of an API interaction. When healthcare records need to be requested via an API call to an external record handler, the last thing a surgeon or urgent care responder needs is to see the phrase “please restate your question in the following format” or “I don’t understand” displayed ad nauseam.

Roadblocks to conceptualization may dissolve in time, especially with such huge strides in development from machine learning teams at MIT and elsewhere, but for now, bots are severely limited by this basic quality.

Still, Plenty of Organizations Will Rely on Direct API Integrations

With all this being said, even with widespread bot adoption, organizations will still rely on direct API integrations. Not every integration vital to a system’s operations requires a direct user interface; such is true for Business Process Outsourcing (BPO) APIs, and for APIs powering SMS messaging, financial tech, scheduling, and much more.

Accordingly, a great number of services that utilize APIs simply don’t need a human-like interface; they are cogs in the background functionality of the service ecosystem. For example, when you need to automatically retrieve real-time stock ticker price fluctuations, setting up an API call to a data provider is far more efficient than chatting with a human-friendly StockBot that interrupts you on Facebook Messenger every hour. APIs that power the DevOps pipeline are similarly built for utility rather than interactivity. Though ChatOps portals may initiate functions differently, the underlying connections must remain machine readable.
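The contrast is easy to see in code. A minimal sketch of the direct approach, using a hypothetical quote endpoint and an assumed response shape, is just a scheduled request and a parsed field, with no conversational layer anywhere in the loop.

```python
# Direct, non-conversational consumption: poll a quote API on a schedule.
# The endpoint and response fields are hypothetical placeholders, not a
# real provider's API.
import time
import requests

QUOTE_URL = "https://api.example.com/v1/quotes"  # hypothetical endpoint

def fetch_price(symbol: str) -> float:
    response = requests.get(QUOTE_URL, params={"symbol": symbol}, timeout=5)
    response.raise_for_status()
    return response.json()["price"]  # assumed response shape

while True:
    print(f"ACME last price: {fetch_price('ACME')}")
    time.sleep(3600)  # once an hour, with no chat window in sight
```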

For consuming machines, efficiency trumps human-like user experience, so direct connectivity with APIs will remain the best approach.

Conclusion

If anything, this author could be considered a “realist” first and a “futurist” second. While bots are an incredible technology, they are also a fundamentally flawed one, a fact often missed by the hyperbolic journalists touting bots as the next big thing, the next tech revolution.

The problem is that many of the opinions on this subject right now trade in hyperbole rather than tempered optimism. Bots are a wonderful technology, and over time it is possible, however unlikely it might be at this point, that they will evolve into truly thinking, reacting machines.

In their current state, however, bots are not going to “replace APIs”; they are simply going to make APIs better to interact with, more human-like. The rise of bots means a change in how we consume APIs and how they respond to our requests. Bots are a new layer of organization and data exchange, but, at least in their current state, they needn’t replace the API as we know it.