Hybrid Messaging Solutions
OpenStack Summit Boston 2017

About Us...

Ken Giusti ([email protected]) irc: kgiusti

● Oslo.Messaging Project Developer ● Software Engineer at Red Hat

Mark Wagner ([email protected])

● Performance and Scale Engineering ● Performance Engineer at Red Hat

Agenda

● Oslo.messaging (RPC and Notification Service Patterns)
● Backend Scenarios (Hybrid Messaging)
● The Drivers
● Testing Methodology and Results
● Next Steps...

Oslo.Messaging

[Diagram: multiple OpenStack services, each using oslo.messaging, all attached to a shared Messaging Bus]

● Part of the OpenStack Oslo project that provides intra-service messaging
○ Remote Procedure Calls (RPC) & Notifications
● An abstraction that hides the details of the underlying messaging technology from the OpenStack services

Oslo.Messaging Services

[Diagram: a caller application and a server application exchanging messages through the messaging system]

Notifications

● Asynchronous exchange from Notifier to Listener
● Listener need not be present when the notification is sent
● Temporally decoupled
● Requires store-and-forward capability (e.g. queue or store)
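For example, publishing an event takes only a transport and a Notifier; a minimal sketch (the URL, publisher_id, topic, and payload are illustrative placeholders):

# Minimal notifier sketch - fire-and-forget publication.
from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672')
notifier = oslo_messaging.Notifier(
    transport, publisher_id='compute.host1',
    driver='messaging', topics=['notifications'])
# The listener need not be running when this is sent.
notifier.info({}, 'compute.instance.create.end', {'instance': 'inst-0001'})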

Remote Procedure Call (RPC)

● Synchronous exchange between client and server
● Temporally bracketed
● If the server is not present, the call should fail
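A minimal RPC sketch (topic, server, and method names are illustrative placeholders):

# Minimal RPC server and client sketch.
from oslo_config import cfg
import oslo_messaging

transport = oslo_messaging.get_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672')

# Server side: expose an endpoint on a topic.
class TestEndpoint(object):
    def echo(self, ctxt, msg):
        return msg

target = oslo_messaging.Target(topic='demo', server='server-1')
server = oslo_messaging.get_rpc_server(
    transport, target, [TestEndpoint()], executor='threading')
server.start()

# Client side: call() blocks until the reply arrives (or times out).
client = oslo_messaging.RPCClient(
    transport, oslo_messaging.Target(topic='demo'))
print(client.call({}, 'echo', msg='hello'))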

Notification - Broker Backed Messaging

[Diagram: an OpenStack service's notify client publishes to a topic queue on the broker; one or more OpenStack listener services consume from that queue]

● Notification API provides the ability to publish an event to a known topic
● Notification Listener(s) fetch the event when ready to consume
● Unidirectional and asynchronous - no traffic flows back to the notifier
● Queue store-and-forward capability is necessary for the notify messaging pattern
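A minimal listener sketch to pair with the notifier above (endpoint, topic, and executor are illustrative placeholders):

# Minimal notification listener sketch.
from oslo_config import cfg
import oslo_messaging

class NotificationEndpoint(object):
    # Called for notifications published at the 'info' priority.
    def info(self, ctxt, publisher_id, event_type, payload, metadata):
        print('%s from %s: %s' % (event_type, publisher_id, payload))

transport = oslo_messaging.get_notification_transport(
    cfg.CONF, url='rabbit://guest:guest@localhost:5672')
listener = oslo_messaging.get_notification_listener(
    transport, [oslo_messaging.Target(topic='notifications')],
    [NotificationEndpoint()], executor='threading')
listener.start()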

RPC - Broker Backed Messaging

[Diagram: the RPC client and RPC server exchange messages through a request queue and a reply queue hosted on the broker]

● RPC transaction takes place across 4 discrete message ownership transfers
○ Requires two queues with associated resource overhead
● Detachment of client and server
○ Messages are acknowledged but not guaranteed to be delivered
● Clients do not readily know when a server is “unavailable”

RPC - Direct Messaging

[Diagram: the RPC client and RPC server exchange messages directly, with no intermediary queues]

● End-to-end transfer of message ownership
○ No store-and-forward message queueing (stateless intermediaries)
● Logical link between clients and server
● Clients can immediately know when a server is “unavailable”

Oslo.Messaging

[Diagram: the Notification and RPC service APIs sit atop shared messaging code and a single transport (driver) attached to one message bus]

Oslo.Messaging

[Diagram: the same APIs and shared messaging code, now with separate RPC and Notification transports, each attached to its own message bus]

Oslo.Messaging

[Diagram: the same split architecture, annotated with the transport factory functions: get_transport(URL), get_rpc_transport(URL) [TBD], and get_notification_transport(URL)]

Service Configuration (e.g. nova.conf, etc.)

[DEFAULT]
transport_url=rabbit://rpc_user:rpc_pw@rpc_host:rpc_port

...

[oslo_messaging_notifications]
transport_url=rabbit://notify_user:notify_pw@notify_host:notify_port
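In service code the two transports are then obtained separately; a minimal sketch (since get_rpc_transport is still [TBD] above, get_transport stands in for the RPC side):

# Sketch: obtaining separate RPC and notification transports from the
# configuration above.
from oslo_config import cfg
import oslo_messaging

cfg.CONF(['--config-file', 'nova.conf'])

# RPC side: reads [DEFAULT]/transport_url.
rpc_transport = oslo_messaging.get_transport(cfg.CONF)

# Notification side: reads [oslo_messaging_notifications]/transport_url,
# falling back to [DEFAULT]/transport_url if unset.
notify_transport = oslo_messaging.get_notification_transport(cfg.CONF)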

Single Backend

[Diagram: every service's RPC and Notify traffic flows through a single shared broker cluster]

Dual Backend

[Diagram: RPC traffic flows through one broker cluster while Notify traffic flows through a second, separate broker cluster]

“Hybrid” Backend

[Diagram: RPC traffic flows over direct messaging while Notify traffic flows through a broker cluster]

Benefits of Hybrid Messaging

● Optimal alignment of messaging patterns to messaging backend
○ Peer-to-peer messaging for RPC services (direct)
■ “Fail fast - fail clean”
○ Store-and-forward for Notification services (queueing)
■ Mirroring and message persistence
○ Increase scale - increase performance
● Diverse topologies & alternative messaging technology
○ Centralized brokers (hub-and-spoke)
○ Distributed architectures
○ Messaging as a Service

Alternative Oslo Messaging Transports

● ZeroMQ Socket Library
● AMQP 1.0 Protocol Transport + Qpid Dispatch Router
● Kafka (experimental)
○ Notifications only (no RPC support)

ZeroMQ Transport

● Dedicated TCP connection between client and server
● Matchmaker (Redis) - maps topics ←→ host addresses
● TCP concentrator to limit TCP resource consumption
● Deployer’s guide: https://docs.openstack.org/developer/oslo.messaging/zmq_driver.html
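A configuration sketch for the ZeroMQ driver with a Redis matchmaker (option names follow the deployer's guide; verify them against your release):

[DEFAULT]
transport_url=zmq://
rpc_zmq_matchmaker=redis

[matchmaker_redis]
host=127.0.0.1
port=6379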

[Diagram: clients connect directly to Server A, or reach Server B through a proxy that concentrates many client connections]

AMQP 1.0 + Message Router

● Network of “message routers”
● Routers learn the location of servers - optimal shortest path
● Stateless (no queueing) - end-to-end transfers
● Barcelona talk: https://www.youtube.com/watch?v=R0fwHr8XC1I
● Deployer’s guide: https://docs.openstack.org/developer/oslo.messaging/AMQP1.0.html
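Selecting this driver is a one-line transport_url change (host and credentials below are placeholders):

[DEFAULT]
transport_url=amqp://user:pw@router_host:5672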

[Diagram: clients and servers attached at different points of a router mesh; messages travel end to end along the shortest path]

Oslo.messaging matrix (drivers, backend, patterns)

Driver                Messaging Backend  Backend Type                         RPC  Notification
Rabbit (kombu, pika)  rabbitmq-server    Broker                               yes  yes
AMQP 1.0              qdrouterd          Direct Messaging                     yes  yes
ZMQ                   TCP, proxied       Direct Messaging                     yes  yes
Kafka                 kafka server       Broker-like (Distributed Streaming)  no   yes

Testing Methodology

● Objective: benchmark RPC using both direct and queued approaches
○ Observe behavior and quantify
● Tool - Oslo Messaging Benchmark Tool (ombt); sample invocations below
○ A driver-independent tool for distributed messaging load generation and measurement
○ https://github.com/kgiusti/ombt
○ Does NOT simulate OpenStack project(s) traffic patterns
● Scenarios
○ Separation of RPC and Notification traffic
○ Scoped to rabbit:// and amqp:// drivers for now
● Traffic modeling assumptions
○ RPC-Notify message ratios, producer-consumer ratios, message payload size
● Single-server deployments for this testing phase
○ Clusters and meshes planned for follow-up scale and resiliency comparisons
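For example, a small run against a single local broker (invocation pattern per the ombt README; the URL and call count are illustrative):

# Start RPC servers and clients, then drive the test from a controller
# (each command in its own shell or on its own host).
$ ombt2 --url rabbit://127.0.0.1:5672 rpc-server
$ ombt2 --url rabbit://127.0.0.1:5672 rpc-client
$ ombt2 --url rabbit://127.0.0.1:5672 controller rpc-call --calls 1000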

Scenario 1 - Single Broker Backend

[Diagram: ombt controller driving RPC clients/servers and Notify clients/listeners, all through a single rabbitmq-server]

● Broker used for both RPC and Notifications - not durable
● Messaging assumptions
○ RPC-Notify traffic ratio - 50/50
○ Producer/consumer ratio - 2/1
○ Payload size - 1K

Scenario 2 - Single Broker Backend - Durable

[Diagram: same topology as Scenario 1, with durable queues on the single rabbitmq-server]

● Broker used for both RPC and Notifications
● Messaging assumptions
○ RPC-Notify traffic ratio - 50/50
○ Producer/consumer ratio - 2/1
○ Payload size - 1K

Scenario 3 - Separate Broker Backend

[Diagram: ombt controller driving RPC traffic through one rabbitmq-server and Notify traffic through a second rabbitmq-server]

● Dual rabbitmq-server backends
● Notifications measured with persistence

Scenario 4 - Hybrid Messaging Backend

[Diagram: ombt controller driving RPC traffic through a qpid-dispatch-router and Notify traffic through a rabbitmq-server]

● Direct Messaging backend used for RPC (amqp:// and qpid-dispatch-router)
● Broker backend used for Notifications (rabbit:// and rabbitmq-server)
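The corresponding service configuration mirrors the earlier example, with only the RPC scheme changed (hosts and credentials are placeholders):

[DEFAULT]
transport_url=amqp://rpc_user:rpc_pw@router_host:5672

[oslo_messaging_notifications]
transport_url=rabbit://notify_user:notify_pw@notify_host:5672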

Final Comparison

Next Steps

● Try it out (see the local.conf sketch below)
○ AMQP 1.0 devstack plugin supports a hybrid configuration “qpid-hybrid”
■ https://git.openstack.org/openstack/devstack-plugin-amqp1
○ Zmq devstack plugin: https://git.openstack.org/openstack/devstack-plugin-zmq
● [developers]
○ “Messaging” != “Queueing” and oslo.messaging >= rabbitmq
○ Use “get_notification_transport” and “get_rpc_transport” - keep ‘em separated...
○ Get involved! - oslo.messaging needs you!
● Expand hybrid messaging scenarios in gate checks
● Test and measure additional hybrid scenarios
○ Plausible driver-backend combinations for RPC and Notifications
● Improve the ease-of-use and configuration of hybrid backends
○ Make it easy for the operator to deploy and get immediate value
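A devstack local.conf sketch for the hybrid setup (the AMQP1_SERVICE variable name is an assumption based on the plugin's configuration; verify it against the plugin README):

[[local|localrc]]
# Assumed plugin variable; the "qpid-hybrid" value comes from the talk.
enable_plugin devstack-plugin-amqp1 https://git.openstack.org/openstack/devstack-plugin-amqp1
AMQP1_SERVICE=qpid-hybrid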