Event Streaming Overview

Use event streaming to integrate downstream message systems with MATRIXX. These message systems can be optimized to filter, transform, and deliver events to multiple secondary targets.

MATRIXX event streaming creates custom event streams from the events that MATRIXX Engine generates. An event stream is an ordered, unbounded, and continuous flow of events. By creating event streams of specific events relevant to your network and business, you can monitor or analyze event data in real time using an external event-handling solution.

MATRIXX produces events for many reasons. Some events are single data records containing information about actions that occurred in the MATRIXX Engine cluster. If an event impacts multiple wallets, it becomes a primary event and is packaged with the secondary events for those other wallets. When primary and secondary Event Detail Records (EDRs) are generated, they are always in the same MtxEventRecordData MATRIXX Data Container (MDC). For more information about primary and secondary events, see the discussion about EDRs in MATRIXX Integration.

Different events are derived from the MtxEvent object to represent different actions that occur in the system. Some events, such as usage events, are produced often, while other events, such as those for payment processing and records of a purchase, are produced less often.

Event streaming can deliver events directly to a target system. For performance reasons, MATRIXX Support recommends configuring the event messaging services that receive events for onward delivery.

You can configure multiple streams in parallel. MATRIXX Support recommends connecting to one to three target systems. A target system, for example an enterprise service bus (ESB), can offer flexible delivery to other BSS systems for key events such as purchases, cancellations, and state changes. Such a stream can have a more targeted filter configuration.
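As an illustration of a targeted filter for such a stream, the sketch below shows the idea of passing only key lifecycle events through to an ESB-bound stream. The event type names and the `esb_stream_filter` helper are hypothetical, not actual MATRIXX identifiers; the real filter configuration is connector-specific.

```python
# Hedged sketch: a per-stream event filter that passes only key
# lifecycle events. Event type names here are illustrative only.
KEY_EVENT_TYPES = {"PurchaseEvent", "CancelEvent", "StateChangeEvent"}

def esb_stream_filter(event: dict) -> bool:
    """Return True if the event should be delivered to the ESB stream."""
    return event.get("EventType") in KEY_EVENT_TYPES

# High-volume usage events are dropped; purchase events pass through.
events = [
    {"EventType": "UsageEvent", "EventId": 1},
    {"EventType": "PurchaseEvent", "EventId": 2},
]
delivered = [e for e in events if esb_stream_filter(e)]
```

A narrow filter like this keeps the ESB stream small even when usage events dominate the overall event volume.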

MATRIXX provides these components to support event streaming:
  • Event Stream Server — A MATRIXX process that generates, stores, and archives the events generated by MATRIXX Engine, and if configured, streams the events through the Event Streaming Framework to external event-handling solutions. The Event Stream Server runs on the publishing cluster of MATRIXX Engine. For more information, see the discussion about Event Stream Server.
  • Event Streaming Framework — A framework that connects a MATRIXX Engine cluster to external systems to process event streams. The Event Streaming Framework provides these connectors to stream events for processing:
    • Multi-connector (a list of other connectors)
    • Apache Kafka
    • ActiveMQ
    • Google Pub/Sub

    For more information, see the discussion about Event Streaming Framework connectors.

The Event Streaming Framework streams at least one copy of each event. Typically, only one copy of each event is streamed. Duplicate events might be sent if MATRIXX Engine or Event Streaming Framework encounters processing errors, restarts, fails over, or is processing few transactions per second (TPS). The event consumer takes any action appropriate for duplicate messages. Identify duplicate messages by checking whether they have the same EventId field values. For details about event IDs, see the discussion about MATRIXX event detail records in MATRIXX Architecture.
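Because delivery is at least once, the consumer side must be idempotent. A minimal sketch of consumer-side duplicate detection keyed on EventId follows; `handle_event` and the in-memory set are illustrative, not part of any MATRIXX API.

```python
# Hedged sketch: consumer-side duplicate detection keyed on EventId.
# At-least-once delivery means the occasional repeat must be tolerated.
seen_ids = set()

def handle_event(event: dict) -> bool:
    """Process an event once; return False if it was a duplicate."""
    event_id = event["EventId"]
    if event_id in seen_ids:
        return False          # duplicate: already processed, skip it
    seen_ids.add(event_id)
    # ... real processing (write to an analytics store, etc.) goes here
    return True
```

A production consumer would bound this state, for example with a TTL cache or a persistent store keyed on EventId, rather than an unbounded in-memory set.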

Figure 1 shows a basic view of the event streaming components and flow. The processing pod sends transactions to the publishing pod. The Event Stream Server generates an event sequence ID for each transaction. The Event Streaming Framework retrieves events from the Event Stream Server. The adapter (with the relevant SDK) loads events to the stream target and then updates the current position in a session object that is stored in the activity database of the processing pod.
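The retrieve, load, and checkpoint cycle described above can be sketched as follows. All class and field names (`InMemorySession`, `sequence_id`) are illustrative placeholders, not MATRIXX or SDK APIs; the point is that the position is advanced only after an event reaches the target, so a restart resumes without losing events.

```python
# Hedged sketch of the adapter cycle: deliver events to the stream
# target, then record the current position so a restart can resume.

class InMemorySession:
    """Stand-in for the session object kept in the activity database."""
    def __init__(self):
        self.position = 0

def stream_cycle(events, target, session):
    """Deliver events newer than the checkpoint, advancing it as we go."""
    for event in events:
        if event["sequence_id"] <= session.position:
            continue                    # already delivered before a restart
        target.append(event)            # load the event into the target
        session.position = event["sequence_id"]  # update current position
```

Replaying the same batch after a restart delivers nothing new, which is the property the session checkpoint exists to provide.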

Figure 1. Event Stream Server
Event Stream Server high-level functional view
The MATRIXX Engine publishing cluster includes two running publishing pods, one active and one standby. The publishing pod performs these tasks:
  • The active publishing pod:
    • Processes the transaction stream sent by the processing pod.
    • Runs the Stream Server process that generates, stores, and streams events to the Event Streaming Framework.
    • Creates transaction log files from the incoming transaction stream.
    • Moves the transaction log files to the archive directory.
    • Creates MATRIXX Event Files (MEFs).
    • Sends the transaction stream to the processing cluster in the MATRIXX standby engine, which in turn sends it to the active processing pod in the next MATRIXX standby engine (if any).
    • Publishes stream events to the Event Streaming Framework if event streaming is enabled. For information about enabling event streaming, see the discussion about enabling the Event Streaming Framework.
  • The standby publishing pod participates in replay transaction processing.
Note: The Event Streaming Framework does not communicate with the processing cluster directly; it goes through RS Gateway and the Traffic Routing Agent (TRA).

Both publishing pods (active and standby) create internal transaction log files that are used for fast failover. For information about recovering event stream data after a system outage or engine failover, see the discussion about MATRIXX transaction logs in MATRIXX Architecture and recovering lost events in this guide.

For a summary of tasks required to deploy event streaming in a MATRIXX environment, see the discussion about implementing event streaming.

For more information about reconfiguring an event stream to change the data type streamed during runtime, see the discussion about configuring event streaming for failed events.

Event Streaming Framework High Availability

Use MATRIXX Event Streaming Framework high availability (HA) if real-time event streaming is critical to your business. HA ensures pause-free real-time event streaming using software redundancy. If you expose event streams to customers, MATRIXX strongly recommends that you use the HA features.

The Event Streaming Framework uses a leader-follower model for each event stream per sub-domain. The recommended minimum configuration is a node with two Event Streaming Framework pods. During initial start-up, when the Event Streaming Framework registers with a Stream Server, it determines which Event Streaming Framework pod is the leader and which pods are followers. The guiding principle is that there is only one leader per stream. A node can be configured with two or more streams, each with its own leader, running on the same node.
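The one-leader-per-stream invariant can be illustrated with a small sketch. The registration mechanics are internal to MATRIXX; the pod names and the `assign_roles` function below are purely hypothetical and show only the resulting role assignment.

```python
# Hedged sketch: each stream gets exactly one leader; the remaining
# pods for that stream become followers. All names are illustrative.

def assign_roles(streams, pods):
    """Map each stream to one leader pod and a list of follower pods."""
    roles = {}
    for stream in streams:
        leader, *followers = pods   # first registered pod leads
        roles[stream] = {"leader": leader, "followers": followers}
    return roles
```

With two pods and two streams, both streams may be led by pods on the same node, matching the configuration described above.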

The Event Streaming Framework is part of MATRIXX, which can be deployed on Kubernetes. MATRIXX includes a Kubernetes operator that supports Kubernetes auto-healing, allowing Kubernetes to reinstate a pod that has disappeared or is malfunctioning. HA is implemented by specifying a replica set that ensures a specified number of pods are running at any given time.