The Case for Event/Stream Management: Maximizing the ROI of Kafka and Event Streaming

    In my role I have the privilege of talking with all kinds of IT practitioners and executives representing businesses around the world, and I’ve picked up on a theme that transcends industry, company size, and technology stack. It’s no newsflash that the COVID pandemic changed everything; what matters is that it changed how consumers expect to experience both digital and physical products.

    In addition, the pandemic left in its wake an enormous amount of technical debt that businesses are still struggling with. Take grocery stores, for example: before COVID, most focused solely on the in-store, non-digital experience. COVID forced these businesses to become online retailers overnight, integrating their storefronts with a variety of hybrid delivery, pickup, and return services. Restaurants, hospitality, and healthcare all went through similar transformations, making real-time event-driven architecture and streaming an imperative for IT groups in every industry.

    Fast forward to today, and the businesses that made the necessary digital experience changes are now faced with two new problems at the same time:

    • Recession warnings in the world economy are flashing red
    • Technical debt is at an all-time high due to the rapid changes to IT since the start of the pandemic

    The challenge is that we must repair the fallout from the pandemic, all while companies cut IT budgets yet expect the kind of innovation they’ve seen in recent years.

    The Age-Old Challenge: Do More with Less

    Remember how I mentioned that the use of real-time event-driven architecture and streaming with Kafka and Pulsar blossomed with the transformation in customer experience? It turns out that blossoming has created a new opportunity for IT and businesses alike. Most of the events and data streams that exist today were built to solve specific real-time use cases, and most of those interactions are real-time, application-to-application integrations: the need was asynchronous processing between two applications, such as an online order being fulfilled within a grocery store or an order being delivered. But the power of event-driven architecture and streaming is that it was natively built for one-to-many integrations, and you essentially get that at runtime for free!

    So what’s the problem, and what’s the opportunity, you may ask? Here are the problems, each paired with an opportunity you can seize with Event Portal for Kafka!

    Problem 1: The Data and Discovery Conundrum

    If you don’t know about it, you cannot reuse it. When all the architects were developing these real-time data streams, and generally getting things done, they did not spend time documenting what they were doing, let alone exposing information about the event streams. They were under crazy time crunches and expected to deliver trailblazing business capabilities. Sure, maybe it got documented somewhere in a Confluence page or a draw.io diagram for a technical review, but not in any social, discoverable way that would let the business and IT find event streams and reuse them in new, novel ways.

    The parallel universe to all this is API management, which we all know and love. Before it, the enterprise was littered with RESTful APIs that were unknown to the general business and IT populace. The solution was to use a standard like OpenAPI to describe these services, import them into an API management solution that has a developer portal, and expose the APIs to internal and external parties.

    Associated Opportunity: Expose the Most Valuable Data in Your Business

    Today, event management is starting to enter the fold. We now have AsyncAPI specifications that enable architects and developers to document applications, import them into an event management solution that integrates with developer portals, and expose event streams to internal and external parties. But this time it’s all about real-time data, versus APIs that query entity state. This matters because Gartner and others believe data is most valuable at the moment it is produced. Thus opportunity #1 is exposing the most valuable data in your business. It can even create new value streams, especially if you think about providing a third-party marketplace for your event streams.
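
    To make that concrete, here is a minimal, hypothetical sketch of what an AsyncAPI-style description of a single event stream might look like, expressed as a Python dictionary for illustration (the spec itself is normally authored in YAML or JSON, and the channel name, payload fields, and versions below are invented examples):

    ```python
    # A minimal, illustrative AsyncAPI 2.x-style document describing one
    # event stream. Channel and field names are hypothetical examples.
    order_events_spec = {
        "asyncapi": "2.6.0",
        "info": {
            "title": "Grocery Order Events",  # human-readable name for discovery
            "version": "1.0.0",               # semantic version of this interface
        },
        "channels": {
            "store.order.created": {          # the topic/channel carrying the events
                "subscribe": {                # consumers subscribe to this stream
                    "message": {
                        "name": "OrderCreated",
                        "payload": {
                            "type": "object",
                            "properties": {
                                "orderId": {"type": "string"},
                                "storeId": {"type": "string"},
                                "createdAt": {"type": "string", "format": "date-time"},
                            },
                        },
                    },
                },
            },
        },
    }
    ```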

    Problem 2: Lack of Developer Productivity

    As you know, using a streaming platform is far from free. Most people think exclusively about licensing and operational costs, but what about developer costs? Your developers are highly skilled, costly resources who cannot sit idle or work unproductively. Consider the activities required today to create a new event stream from existing stream data produced by another team in the business that uses Kafka:

    • Meeting to discuss what is available to consume, reviewing schemas to determine security sensitivity, and understanding which clusters have the data of interest: 1 hour
    • Design review: documenting and drawing data flows in PowerPoint/draw.io etc., plus the meeting itself: 4 hours
    • Development time for the team to write application connection scaffolding to use Kafka (see the sketch after this list): 1 hour
    • Creating business logic/value: unavoidable variable time
    • Getting the Kafka infrastructure team to provision topics, schemas, and ACLs: hours to days, depending on automation and process
    • Deploying the application: unavoidable fixed time, ideally automated with CI/CD pipelines
    • Promoting through environments to production: unavoidable fixed time, ideally automated with CI/CD pipelines
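
    To give a sense of the scaffolding step above, here is a minimal sketch of the kind of Kafka consumer boilerplate every team ends up writing, using the confluent-kafka Python client; the broker address, group id, and topic name are placeholders:

    ```python
    # Minimal Kafka consumer scaffolding sketch using the confluent-kafka
    # Python client. Broker, group id, and topic are placeholder values.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # placeholder broker address
        "group.id": "order-fulfillment",        # placeholder consumer group
        "auto.offset.reset": "earliest",        # start from the beginning if no offset
    })
    consumer.subscribe(["store.order.created"])  # placeholder topic name

    try:
        while True:
            msg = consumer.poll(timeout=1.0)     # block up to 1s waiting for a record
            if msg is None:
                continue
            if msg.error():
                print(f"Consumer error: {msg.error()}")
                continue
            # Business logic would go here; scaffolding only hands you the bytes.
            print(f"{msg.topic()}[{msg.partition()}]@{msg.offset()}: {msg.value()}")
    finally:
        consumer.close()  # commit final offsets and leave the group cleanly
    ```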

    What these activities show is that each application, and the process around it, is bespoke and suffers from a lack of specialized tooling and process automation.

    Associated Opportunity: Enable Self-Service to Boost Agility

    Again, taking lessons from the API management world, imagine if we could make the experience of building event streaming applications as easy as writing RESTful services or clients:

    • We could avoid bespoke meetings on what’s available,
    • Design reviews could be done in an automated way, using artifacts that represent the architecture of an application,
    • Code generators could create the skeleton/scaffolding in a rapid and consistent way, and
    • Developers could have a self-service experience with the underlying infrastructure, all integrated with your CI/CD pipeline.

    This would save countless hours of wasted developer time, while also providing tangible benefits to documentation, consistency, and time to market.
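
    As one illustration of what self-service provisioning could look like once it’s wired into a CI/CD pipeline, here is a sketch that creates a topic programmatically through Kafka’s admin API using the confluent-kafka Python client; the topic name and sizing are placeholders, and a real setup would drive these values from approved design artifacts:

    ```python
    # Sketch: self-service topic provisioning through Kafka's admin API,
    # the kind of step a CI/CD pipeline could run after an automated design
    # review. Topic name, partitions, and replication are placeholders.
    from confluent_kafka.admin import AdminClient, NewTopic

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder broker

    new_topic = NewTopic(
        "store.order.created",  # placeholder topic name from the design artifact
        num_partitions=3,
        replication_factor=3,
    )

    # create_topics() is asynchronous and returns one future per topic.
    for topic, future in admin.create_topics([new_topic]).items():
        try:
            future.result()  # raises if creation failed (e.g., topic already exists)
            print(f"Created topic {topic}")
        except Exception as exc:
            print(f"Failed to create topic {topic}: {exc}")
    ```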

    Problem 3: Lack of Security, Lifecycle, and Change Management

    Indefinable costs such as security and data breaches, application failures/downtime, and project delays are nearly impossible to predict, and they can become huge depending on the business and the criticality of the data and use case. Consider the following two problems:

    1. Lack of Security and Data Awareness
      a. Are you sure that your event/streaming broker has fine-grained access controls that grant minimal access to each application? Most think so, but upon further examination the ACLs are far too wide open.
      b. Are you sure that the data flowing across your topics does or does not contain security-sensitive information, AND that only the right applications are entitled to consume that data? Most cannot authoritatively answer this question.
      c. As changes happen over time, are problems a and b consistently considered and reviewed? Almost never!

    All these challenges invite regulatory fines, lawsuits, and a loss of customer trust. The challenge is that addressing them takes time and effort, especially when you don’t have a concise view of what’s going on AND automation that ensures least-privilege access.
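
    As a sketch of what least-privilege access can look like in practice, the following grants a single application read-only access to one specific topic through Kafka’s ACL admin API, using a recent confluent-kafka Python client; the principal, topic, and broker address are placeholders:

    ```python
    # Sketch: a least-privilege ACL that lets exactly one application read
    # exactly one topic. Principal, topic, and broker are placeholder values.
    from confluent_kafka.admin import (
        AclBinding, AclOperation, AclPermissionType,
        AdminClient, ResourcePatternType, ResourceType,
    )

    admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder broker

    read_orders = AclBinding(
        ResourceType.TOPIC,
        "store.order.created",         # placeholder: the one topic this app may read
        ResourcePatternType.LITERAL,   # no wildcard patterns: scope stays narrow
        "User:order-fulfillment-app",  # placeholder principal for the application
        "*",                           # any host; tighten further if your network allows
        AclOperation.READ,             # read-only: no write or admin operations
        AclPermissionType.ALLOW,
    )

    # Note: a real consumer would also need READ on its consumer group.
    for binding, future in admin.create_acls([read_orders]).items():
        future.result()  # raises if the broker rejected the ACL
    ```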

    2. Lack of Lifecycle and Change Management
      a. Good news: event-driven architecture and streaming provide application decoupling at runtime. Bad news: this decoupling makes it challenging to know the impact/blast radius of a change. Sure, we should all evolve things in a forwards/backwards-compatible way, but…
      b. Often we make changes thinking they only affect our own set of applications, totally unaware that downstream there are other applications that will break or suffer unintended consequences, causing delays at a minimum and outages at a maximum.
      c. Even if your change is forwards/backwards compatible, how do you socially communicate it? Take, for example, the addition of a data attribute: how do other stakeholders learn this data now exists, when it could create or enhance their business outcomes?

    These scenarios are real, and they happen; I have the battle scars to prove it. The cost of these problems can easily mount as you continue to enhance and evolve your applications in response to new and changing business requirements. Can you continue to afford delayed time to value simply because you cannot easily manage the lifecycle and evolution of your real-time event streams and applications?
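
    To illustrate the compatible-evolution point above, here is a sketch of an Avro-style schema change that adds a field with a default value, which keeps both old and new readers working under Avro’s schema-resolution rules; the record and field names are hypothetical:

    ```python
    # Sketch: evolving an Avro record by adding a field with a default value.
    # Old consumers ignore the new field; new consumers reading old data fall
    # back to the default, so the change is both backward and forward compatible.
    # Record and field names are hypothetical.
    from fastavro import parse_schema

    order_created_v1 = {
        "type": "record",
        "name": "OrderCreated",
        "fields": [
            {"name": "orderId", "type": "string"},
            {"name": "storeId", "type": "string"},
        ],
    }

    order_created_v1_1 = {
        "type": "record",
        "name": "OrderCreated",
        "fields": [
            {"name": "orderId", "type": "string"},
            {"name": "storeId", "type": "string"},
            # New optional attribute: the default lets readers of the new schema
            # decode records written before the field existed.
            {"name": "loyaltyTier", "type": ["null", "string"], "default": None},
        ],
    }

    parse_schema(order_created_v1_1)  # raises if the evolved schema is malformed
    ```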

    Associated Opportunity: Safely and Securely Evolve Event Streams

    With the proper event management strategy, these challenges can be overcome while enhancing time to market and agility. This strategy entails the following key steps:

    • Version everything! (Ideally using semantic versioning.) This includes application interfaces, event/topic definitions, and schemas. Semantic versioning provides change context and enables proper communication about what is being used.
    • Leverage release states: inform users whether something is Draft, Released, Deprecated, or Retired so that they make the right decision about what to use. This avoids costly confusion and rework early in the process.
    • Do change impact analysis early: if you’re going to make a change, review the stakeholders affected by it and do not assume it’s just you. In addition, check all environments and clusters so you have 360-degree awareness and there are no surprises. These changes include changes to key/value schemas, topic names, configuration, application interfaces, etc.

    Following these steps will reduce your risk of outages and failures (and the associated costly troubleshooting and hot bug fixing) while increasing the speed with which you deliver value to your customers. Remember, each day a business capability runs in production adds to your top and bottom line.
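
    As a small sketch of how semantic versions can drive change-impact checks in a pipeline, the following flags a major-version bump on an event interface as a breaking change that needs stakeholder review; the version strings and the policy itself are illustrative:

    ```python
    # Sketch: using semantic versions to gate changes in a CI pipeline.
    # A major bump signals a breaking change and triggers stakeholder review;
    # the versions and policy here are illustrative.
    def needs_impact_review(old_version: str, new_version: str) -> bool:
        old_major = int(old_version.split(".")[0])
        new_major = int(new_version.split(".")[0])
        return new_major > old_major  # breaking change per semver conventions

    # Example: bumping a hypothetical OrderCreated event interface from
    # 1.4.2 to 2.0.0 would be flagged for a change-impact review.
    assert needs_impact_review("1.4.2", "2.0.0") is True
    assert needs_impact_review("1.4.2", "1.5.0") is False
    ```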

    Conclusion

    If you want to seize any of the aforementioned opportunities, you will have to decide whether to implement the tooling yourself or purchase an event management tool. I am pretty sure your business is not creating the world’s best event management tool, but rather providing the best digital experiences for your customers. To that end, I suggest you check out Solace PubSub+ Event Portal, which enables all the opportunities presented here with minimal capital investment. The gains in productivity, as well as the ability to provide new, real-time digital experiences, will enable you to do more with less.
