Table of Contents
- Introduction
- Limitations of the Current gRPC Design
- WebSocket Layer for Bidirectional Event Notifications
- Maintaining Protocol Buffers for Structured Data
- HTTP Endpoints via Envoy (gRPC–REST Transcoding)
- Integration with Tor Hidden Services (No NAT Issues)
- Haveno Daemon as the Unified Interface Point
- Maintaining gRPC for Core P2P Communication
- Comparison to Bisq’s Architecture and Inspiration
Introduction
Haveno is a decentralized exchange for Monero that currently uses a gRPC API for communication between the Haveno daemon and clients. While this gRPC interface provides a strongly typed API, it is heavily request/response oriented. This means clients must poll for many updates, and real-time asynchronous notifications (such as price changes or new trade offers) are not fully supported. The result is a less responsive user experience; for example, users have noted they must manually refresh to catch new offers in the market. To address these limitations, we propose adding an abstraction layer with WebSocket and HTTP endpoints on each Haveno peer. This will enable two-way, event-driven communication similar to how the mobile app communicates with the daemon, but generalized to all peers and clients. The enhancement will preserve Haveno’s existing privacy and P2P characteristics (e.g. Tor networking and Protobuf message structures) while greatly improving real-time communication and integration options.
Limitations of the Current gRPC Design
Haveno’s current gRPC-based API was primarily designed for on-demand requests and basic streaming, which makes certain event notifications cumbersome. In practice, the gRPC spec offers limited notification listener support: clients can only get certain updates by continuously polling or by using a single callback mechanism, which doesn’t cover all event types. For instance, there is no easy way to be alerted the moment a new offer matching your criteria appears; users have to repeatedly check the offer book. Likewise, updates like price fluctuations, wallet balance changes, or incoming trades are not proactively pushed to UIs. This request/response model leads to latency in the UI and inefficient use of resources (constant polling).
While gRPC does support server streaming, it isn’t widely leveraged in Haveno’s current API for all the needed notifications. Relying solely on gRPC streaming also poses challenges for web or mobile clients (e.g. gRPC-Web has limitations, and maintaining multiple stream calls for different event types can get complex). Modern API design often advocates combining RPC and push channels: for example, using gRPC for core service calls and WebSockets for real-time updates. In summary, the current design’s lack of a robust pub/sub or event system is a bottleneck for responsiveness. This justifies an improvement: we need a more flexible, asynchronous communication layer so that Haveno nodes can broadcast events the moment they occur, without clients constantly asking for updates.
WebSocket Layer for Bidirectional Event Notifications
To enable truly two-way, event-driven communication, we will introduce a WebSocket server in each Haveno daemon. Every Haveno node (including your own local daemon or any peer’s node) can expose a WebSocket endpoint. Clients (UIs, mobile apps, or even other services) can maintain a persistent WebSocket connection to subscribe to events and also send messages if needed. This bidirectional channel means the daemon can push out updates in real time, and clients can send commands or acknowledgments over the same connection without the overhead of new HTTP requests for each interaction. It essentially extends the current mobile-daemon pattern (which keeps an open gRPC stream for notifications) to a more general and accessible WebSocket mechanism.
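As a concrete illustration, the client-to-daemon side of such a channel could use a small message envelope carrying both subscriptions and commands. The envelope fields, topic names, and helper functions below are illustrative assumptions, not Haveno’s actual wire format:

```python
import json

# Hypothetical framing for messages a client sends over the WebSocket.
# A single connection carries both event subscriptions and RPC-style
# commands; field names here are assumptions for illustration only.

def make_subscribe(topics):
    """Build a subscription request for a list of event topics."""
    return json.dumps({"type": "subscribe", "topics": list(topics)})

def make_command(method, params):
    """Build an RPC-style command carried over the same socket."""
    return json.dumps({"type": "command", "method": method, "params": params})

subscribe_msg = make_subscribe(["offers", "trades", "chat"])
command_msg = make_command("PostOffer", {"currencyCode": "USD"})
```

A client would send `subscribe_msg` once after connecting, then receive pushed events on the same socket without issuing further requests.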
Key events and data that the WebSocket layer will support include:
- New Offer Listings: when a peer in the network posts a new offer or an existing offer is removed, the daemon can immediately push a notification to connected clients so the offer book UI updates live. Users would no longer need to manually refresh to catch new offers. If desired, clients could subscribe with filters (e.g. only notify for offers matching a certain currency or payment method).
- Price Changes: if Haveno uses price feeds or an index, any change in reference price (e.g. XMR/USD rate) can be broadcast. This allows UIs to update price displays or charts in real time. Even if using an external price source, the daemon could fetch and push updates periodically to all listeners.
- Trade Progress Events: as trades move through phases, events are emitted. For example, when a trade’s escrow deposit transaction is published to the mempool, both the maker and taker UIs can be notified instantly (instead of polling the trade status). Confirmations, payment sent/received marks, and trade completion can similarly be pushed. This makes the trading experience more reactive.
- Wallet Balance and Transaction Updates: the moment the user’s wallet sees an incoming payment (e.g. a deposit or a payout transaction) or a transaction confirmation, the daemon will send a WebSocket event. This could include updated balance info (unlocked vs locked funds), new transaction IDs, or confirmation counts, enabling the UI to show updated wallet status without delay.
- Chat Messages and Arbitrator Messages: Haveno supports chat between traders and with arbitrators during disputes. Using WebSockets, chat can be truly instant. When one party sends a message, it’s immediately pushed to the other party’s UI as a notification (currently, the mobile app relies on local notifications for new chat messages). Similarly, if an arbitrator issues a ruling or comment on a dispute, both trader clients get it in real time.
- Arbitration and Dispute Events: if a trade dispute is opened, all involved parties (buyer, seller, arbitrator) can be notified via event. Likewise, resolution events (dispute closed, winner decided) can be broadcast. This ensures time-sensitive actions (like releasing funds after a dispute) are not missed.
- Network or System Events: e.g., peer connectivity changes (a known trading peer goes offline/online), or system warnings (low balance, backup reminders) could be pushed as well. For instance, a “peer is offline” event might inform a trader that the counterparty’s node dropped (if such info is exposed), or a “new version available” notification from the network.
All these notifications would be pushed over the WebSocket channel as soon as the triggering event occurs in the daemon. The communication is two-way, so in addition to receiving events, the client could also send requests or actions. For example, a web client could invoke an action by sending a message over WebSocket (which the daemon interprets as an RPC command). Normal gRPC/HTTP requests can also be used, but WebSockets allow a unified channel. This bidirectionality could be useful for interactive features like a live chat: the same socket can carry outgoing chat messages from the user and incoming messages from the peer or arbitrator.
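On the receiving end, a client needs to route each pushed event to the right UI handler. The sketch below shows one way to do that; the event type names and envelope shape are assumptions, not Haveno’s actual schema:

```python
import json

# Sketch of a client-side dispatcher for events pushed by the daemon.
# Event type names ("NewOffer", "ChatMessage", ...) and the envelope
# shape are illustrative assumptions.

class EventDispatcher:
    def __init__(self):
        self._handlers = {}  # event type -> list of callbacks

    def on(self, event_type, handler):
        """Register a callback for a given event type."""
        self._handlers.setdefault(event_type, []).append(handler)

    def dispatch(self, raw):
        """Decode one pushed message and fan it out to matching handlers."""
        event = json.loads(raw)
        for handler in self._handlers.get(event["type"], []):
            handler(event["payload"])

seen = []
dispatcher = EventDispatcher()
dispatcher.on("NewOffer", lambda offer: seen.append(offer["id"]))
dispatcher.dispatch(json.dumps({"type": "NewOffer", "payload": {"id": "offer-1"}}))
```

In a real client, each `on(...)` registration would correspond to a UI concern (offer book refresh, chat window, wallet balance display), keeping the socket-handling code decoupled from the UI.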
Notably, this approach aligns with modern application design where a persistent channel is kept for live updates. The Haveno mobile app already anticipates this need by handling local notifications for events like new trades or messages; our WebSocket layer generalizes it in a standard way. External services or GUIs will be able to “listen” to the Haveno daemon easily, making the system more event-driven rather than polling-driven.
Maintaining Protocol Buffers for Structured Data
Even as we add WebSocket and HTTP capabilities, Haveno will continue using Protocol Buffers (Protobuf) for message serialization across the board. This means the same data models defined in Haveno’s `.proto` files (offers, trades, chat messages, prices, etc.) will be reused for the new communication channels. By preserving Protobuf serialization, we ensure that the protocol remains strongly typed and efficient in encoding: we are not reinventing new JSON formats for events, but rather transporting the existing binary messages over new mediums.
This approach retains all the benefits of Haveno’s current design. Protobuf messages are compact and schema-defined, making communication efficient and unambiguous. We avoid the risk of diverging data models (one for gRPC and another for WebSocket/HTTP); instead, there is a single source of truth for what data an “Offer” or “Trade” contains. For example, a “NewOffer” event over WebSocket can carry the same `OfferInfo` protobuf message that the gRPC `GetOffers` call returns. A client receiving that event can deserialize it using the same proto definitions it uses for normal RPC replies.
By maintaining a unified serialization format, we also make it easier to extend and version the protocol. Any changes to the proto (new fields, message types) automatically apply to all communication channels. This avoids the duplicate effort of updating REST/WS schemas separately. It also means we continue to leverage Google’s efficient binary wire format, which is significantly smaller than JSON in payload size, important for bandwidth over Tor. In summary, the new WebSocket and HTTP layers are an abstraction on top of the existing protobuf-defined RPC interface, not a replacement. We thus get the flexibility of new communication methods without sacrificing the structured, type-safe nature of Haveno’s protocol. As one API design guide puts it, this lets us “leverage all the benefits of gRPC and Protocol Buffers without… needing to write a lot of wrapper code”, even as we expose the services over other channels.
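One way to carry the same protobuf bytes over the WebSocket channel is a small envelope that pairs the serialized payload with its message type, similar in spirit to `google.protobuf.Any`. The sketch below uses raw bytes as a stand-in; the type name and envelope fields are assumptions, and a real daemon would serialize with the generated proto classes:

```python
import base64
import json

# Sketch: wrap an already-serialized protobuf message in a small JSON
# envelope so the client knows which .proto definition to parse it with.
# The "@type" name and payload bytes below are illustrative assumptions.

def wrap_event(type_name, proto_bytes):
    """Envelope a serialized protobuf message for transport as text."""
    return json.dumps({
        "@type": type_name,  # identifies the .proto message to deserialize
        "payload": base64.b64encode(proto_bytes).decode("ascii"),
    })

def unwrap_event(raw):
    """Recover the type name and original protobuf bytes."""
    event = json.loads(raw)
    return event["@type"], base64.b64decode(event["payload"])

wire = wrap_event("io.haveno.protobuffer.OfferInfo", b"\x0a\x05hello")
name, payload = unwrap_event(wire)
```

Because the payload stays in the existing binary encoding, the client deserializes it with the exact same generated classes it already uses for gRPC replies.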
HTTP Endpoints via Envoy (gRPC–REST Transcoding)
In addition to WebSockets, we plan to offer plain HTTP/REST endpoints for Haveno’s API by using Envoy Proxy as a translation layer. Envoy’s gRPC-JSON transcoding filter can expose each gRPC service method as an HTTP RESTful endpoint automatically. In practice, we will annotate our proto definitions with HTTP options (using Google’s HTTP annotation syntax) for each RPC. Envoy can then listen for HTTP requests (e.g. `POST /v1/offers` for a `PostOffer` RPC, or `GET /v1/offers` for a `GetOffers` RPC) and translate them into the corresponding gRPC call on the Haveno daemon. The response is converted back to JSON and returned to the client. This transcoding happens behind the scenes, so developers can consume Haveno’s API either via gRPC (for internal or high-performance use) or via simple HTTP calls.
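As a sketch, the HTTP annotations in the proto could look like the following. The service and message names are illustrative; Haveno’s actual proto definitions may differ:

```protobuf
import "google/api/annotations.proto";

service Offers {
  // Exposed by Envoy as: GET /v1/offers
  rpc GetOffers (GetOffersRequest) returns (GetOffersReply) {
    option (google.api.http) = { get: "/v1/offers" };
  }

  // Exposed by Envoy as: POST /v1/offers (JSON request body -> protobuf)
  rpc PostOffer (PostOfferRequest) returns (PostOfferReply) {
    option (google.api.http) = {
      post: "/v1/offers"
      body: "*"
    };
  }
}
```

The annotated protos are then compiled into a descriptor set that Envoy loads, so the mapping lives entirely in the proto files alongside the RPC definitions.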
The advantage of providing an HTTP interface is that it greatly expands integration possibilities. Many environments (web browsers, scripting languages, command-line tools) speak HTTP+JSON readily but cannot directly speak gRPC. With this in place, a web frontend could call Haveno REST endpoints using fetch/XHR, and those calls get executed on the daemon’s gRPC API without us writing custom HTTP handlers. Similarly, developers can cURL the Haveno API or use Postman to test it, treating it like any typical web service, which is easier than dealing with binary gRPC payloads. We effectively get a “free” REST API on each Haveno node, powered by the existing gRPC methods. This runs on the same peer-to-peer network (over Tor), but speaks HTTP for compatibility.
Importantly, by defining HTTP mappings in the proto, we can also auto-generate documentation for this REST interface. Tools like gRPC-Gateway and OpenAPI generators can read the proto annotations and produce an OpenAPI (Swagger) specification for the HTTP API. This means we can offer human-readable API docs and even an interactive Swagger UI for Haveno’s endpoints without much extra work. Developers exploring Haveno integration would have an up-to-date reference of all REST endpoints (which correspond 1:1 to gRPC calls) and could try them out in a browser. As one write-up notes, it’s possible to “leverage auto-generated documentation of our HTTP endpoints with the OpenAPI Specification and allow developers to interact with it via a Swagger UI”; this is exactly what we aim for. In summary, using Envoy as a gateway provides:
- RESTful access to Haveno: each peer can be an HTTP server for API calls (in addition to gRPC), expanding client compatibility.
- No duplication of logic: Envoy maps JSON to gRPC, so the Haveno daemon continues to implement methods just once (in gRPC). Envoy handles conversion, ensuring the REST and gRPC paths behave the same.
- Auto-generated client SDKs and docs: with OpenAPI specs, developers could generate client libraries in many languages or use API explorer UIs, accelerating third-party integrations.
- Incremental adoption: this does not force current gRPC clients to change at all; it’s an additive feature. Teams can gradually transition or offer both gRPC and REST, as is often done to ease integration.
Technically, deploying this could mean bundling an Envoy sidecar with Haveno or integrating a gRPC-Gateway library in the daemon. Envoy would listen on an HTTP port (or perhaps even on the same onion address with a different path) and forward to the daemon’s gRPC port. Because Haveno already uses HTTP/2 over Tor for gRPC in the mobile app, we could potentially reuse that same connection for HTTP/1.1+JSON calls via the proxy. The result is that Haveno nodes become easily accessible to a wide range of clients: gRPC for those who want performance and binary protocols, WebSockets for live event streaming, and REST/JSON for quick integration and debugging.
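A minimal Envoy filter-chain fragment for the sidecar deployment could look like the following. The descriptor path, port assumptions, and service name are placeholders, not Haveno’s actual configuration:

```yaml
# Sketch: enable gRPC-JSON transcoding in Envoy's HTTP filter chain.
# "/etc/envoy/haveno.pb" is a compiled proto descriptor set built from the
# annotated .proto files; the service name below is illustrative.
http_filters:
  - name: envoy.filters.http.grpc_json_transcoder
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_json_transcoder.v3.GrpcJsonTranscoder
      proto_descriptor: "/etc/envoy/haveno.pb"
      services: ["io.haveno.protobuffer.Offers"]
      print_options:
        always_print_primitive_fields: true
  - name: envoy.filters.http.router
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

The matching cluster would simply point at the daemon’s local gRPC port, so the daemon itself needs no HTTP-specific code.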
Integration with Tor Hidden Services (No NAT Issues)
A cornerstone of Haveno (inherited from Bisq) is that all network traffic runs through Tor hidden services. We will maintain this for the new WebSocket and HTTP layers. Every Haveno peer already has an onion address, and we will expose the WebSocket server and REST API via that same hidden service. This has multiple benefits. First, it completely sidesteps NAT traversal problems: Tor hidden services are reachable globally without any port forwarding, as Tor handles inbound connections into each node. Even if a Haveno node is behind a firewall or carrier-grade NAT, any other peer or client can still reach its APIs through the .onion address. We essentially delegate the NAT traversal to Tor, which has proven extremely effective (Tor even bypasses many country-level firewalls). This means the event notifications and HTTP calls truly work peer-to-peer, without relying on any centralized server.
Second, using Tor preserves anonymity and privacy. In Bisq, moving to Tor solved the issue of peers leaking IP addresses when posting offers. Haveno likewise will ensure that when a WebSocket client connects to a peer, it’s via an onion address: no IP addresses are revealed, and the traffic is end-to-end encrypted through the Tor network. Each Haveno node effectively becomes a small web service accessible at an onion URL, and onion addresses replace IPs for all interactions. For example, if your Haveno node is `abc123.onion`, your WebSocket endpoint might be `ws://abc123.onion:PORT/api/v1/stream` (or `wss://` with TLS over onion), and your REST base URL could be `http://abc123.onion/api/v1/...`. Tor hidden service routing will carry these connections securely to your daemon.
This approach means we incur no loss of privacy or censorship-resistance by adding WebSockets/HTTP. On the contrary, it strengthens decentralization: any two peers can directly communicate events and API calls over Tor, rather than, say, using a centralized pub/sub server. There are no NAT woes or need for ICE/STUN as in other P2P networks; Tor provides connectivity. This was a key design decision in Bisq’s network, and Haveno continues the same philosophy. It’s worth noting that running over Tor does add some latency, but the volume of data for these notifications (offers, texts, small JSON or protobuf messages) is small, so Tor’s overhead is manageable. Moreover, users are already accustomed to Tor’s slight delays as a trade-off for privacy. In summary, all new communications will be tunneled through Tor by default, ensuring that Haveno remains decentralized, anonymous, and unconstrained by network obstacles. Each peer’s WebSocket/HTTP interface will be just as private and censorship-resistant as the existing gRPC over Tor is.
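Exposing the new ports on the same onion service could be as simple as a few `HiddenServicePort` lines in the node’s torrc. The port numbers below are placeholders, not Haveno’s actual port assignments:

```
# Sketch: publish the daemon's gRPC, REST, and WebSocket ports on one
# onion service. Port numbers are examples only.
HiddenServiceDir /var/lib/tor/haveno/
HiddenServiceVersion 3
# gRPC
HiddenServicePort 9999 127.0.0.1:9999
# REST (via Envoy)
HiddenServicePort 8080 127.0.0.1:8080
# WebSocket
HiddenServicePort 8081 127.0.0.1:8081
```

Tor then accepts inbound connections on the onion address and forwards them to the local listeners, which is what makes the NAT traversal problem disappear entirely.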
Haveno Daemon as the Unified Interface Point
Implementing these features will involve extending the Haveno daemon to handle additional interface endpoints. The daemon will act as the central hub for all communication; it already hosts the gRPC server that the Haveno UI and CLI use. We will augment this such that the daemon can also serve WebSocket connections and (via Envoy or an embedded gateway) HTTP requests. From the user’s perspective, the Haveno daemon they run will now offer multiple ways to talk to it: gRPC, WebSocket, and REST, all ultimately hitting the same internal logic.
Internally, we can integrate an event broadcasting system into the daemon. Haveno’s core is built in Java; we could use an event bus or observer pattern where different services (wallet manager, offer book, trade manager, etc.) publish events when something noteworthy happens (e.g., a new offer is added or a trade phase changes). The core notification service would collect these and forward them to any registered WebSocket clients (and also any gRPC streaming listeners, if those exist). In fact, Haveno already introduced an API to add a notification listener in code, primarily for testing and basic UI callbacks. We will build on that by plugging it into a WebSocket broadcast: whenever a `NotificationMessage` is generated in the core, the daemon will serialize it (still as protobuf, or possibly JSON) and send it out on every active WebSocket session. Essentially, the daemon will maintain a list of connected WS clients and push messages to them as topics/events occur. If needed, we can allow clients to specify which event types they care about (to filter traffic), but initially it might send all events and let the client filter.
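The daemon-side fan-out described above can be sketched as follows. Session handling and transport details are simplified assumptions (real sessions would be WebSocket connections, and Haveno’s core is Java, not Python):

```python
# Sketch of the daemon-side broadcast: core services publish events, and
# every registered session whose topic filter matches receives them.
# Topic names and the registration API are illustrative assumptions.

class NotificationBroker:
    def __init__(self):
        self._sessions = []  # list of (send_fn, topic set or None for "all")

    def register(self, send_fn, topics=None):
        """Attach a connected client; topics=None means 'send everything'."""
        self._sessions.append((send_fn, set(topics) if topics else None))

    def publish(self, topic, message):
        """Called by core services (offer book, trade manager, wallet...)."""
        for send_fn, topics in self._sessions:
            if topics is None or topic in topics:
                send_fn(topic, message)

received = []
broker = NotificationBroker()
broker.register(lambda t, m: received.append((t, m)), topics=["offers"])
broker.publish("offers", "new offer posted")
broker.publish("wallet", "balance changed")  # filtered out for this session
```

In the Java daemon, the same shape maps naturally onto an observer pattern: each WebSocket session registers a listener, and the existing notification path publishes into the broker.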
For HTTP, since that will be handled by mapping to gRPC, the daemon just needs to ensure all state-modifying actions (posting offers, taking offers, sending chat, etc.) are implemented in the gRPC API (which they are or will be). Envoy will invoke those on behalf of REST clients. The daemon might also expose some HTTP endpoints for liveness or info (e.g., a simple status or health check at `/`) if needed, but that’s optional.
One important consideration is authentication and access control. Currently, Haveno’s gRPC API is secured by a password (the user’s Haveno account password is used to auth API calls, often via `Authorization` metadata). We must extend similar auth to WebSocket and HTTP. For WebSockets, this could be done by requiring an auth token or using a subprotocol; for HTTP/REST, Envoy’s transcoder can pass through auth headers to gRPC. We will ensure that only authorized clients can subscribe or call the APIs on a daemon, just as only an authorized mobile app or UI should control a Haveno node. In practice, when the mobile app connects via Tor gRPC, it uses a hashed password in the onion URI for authentication; we could leverage the same mechanism for WebSocket (e.g., include the token in the connection query string or perform a login handshake over the socket).
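A minimal sketch of the query-string variant is shown below. The parameter name, SHA-256 hashing, and URL shape are assumptions for illustration; the real daemon should follow whatever scheme the existing gRPC API uses:

```python
import hashlib
import hmac
from urllib.parse import parse_qs, urlparse

# Sketch: verify a hashed-password token supplied in the WebSocket URL's
# query string. The "password" parameter and SHA-256 are assumptions.

def expected_token(account_password: str) -> str:
    """Derive the token the client is expected to present."""
    return hashlib.sha256(account_password.encode()).hexdigest()

def authorize(ws_url: str, account_password: str) -> bool:
    """Check the token in the connection URL before accepting the socket."""
    query = parse_qs(urlparse(ws_url).query)
    supplied = query.get("password", [""])[0]
    # Constant-time comparison to avoid leaking the token via timing.
    return hmac.compare_digest(supplied, expected_token(account_password))

url = "ws://abc123.onion:8081/api/v1/stream?password=" + expected_token("hunter2")
ok = authorize(url, "hunter2")
```

A subprotocol or post-connect login handshake would work equally well; the key point is that the daemon rejects unauthenticated sessions before delivering any events.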
The bottom line is that the Haveno daemon will serve as the single interface point where new listeners or endpoints attach. This keeps the architecture clean: the daemon encapsulates all core functionality and state, and clients, whether the official desktop UI, a mobile app, a web UI, or an automated script, all communicate through the daemon’s exposed interfaces (not directly to each other’s wallets or such). This design mirrors Bisq’s: Bisq runs as a monolithic app, but if one were to split out a Bisq backend, it would similarly host APIs. By doing this, we make Haveno more modular. For example, one could run a headless Haveno daemon on a server (or VPS) and interact with it purely via API from a custom UI or integrate Haveno’s trading capabilities into another platform. The daemon will manage all P2P network interactions under the hood, while the external layer speaks WebSocket/HTTP/gRPC as needed.
Maintaining gRPC for Core P2P Communication
It’s important to emphasize that these enhancements do not replace Haveno’s core peer-to-peer protocol; they augment it. The existing gRPC API and underlying protobuf messages will continue to be used for the fundamental communications between nodes for trades, offers, and disputes. In fact, Haveno’s P2P network messages (offer announcements, trade negotiations, etc.) are already based on these proto-defined structures, and that will remain the case. All we are doing is adding new ways to access the same functions and data.
By keeping gRPC at the core, we ensure that Haveno’s trade protocol and network logic remain stable and compatible. A Haveno node will still communicate with another Haveno node using the same set of RPC calls or messages as before, for example, when one peer takes an offer, under the hood it might invoke a gRPC method on the maker’s node (or send a serialized protobuf message via the Tor socket). Those mechanisms aren’t being thrown out; they are battle-tested and necessary for the decentralized logic to work correctly. The WebSocket and HTTP interfaces exist largely for clients to talk to their own daemon (or a peer’s daemon with permission), not to fundamentally change how the peers coordinate with each other during a trade (though in some cases, a peer could use the HTTP API of another peer instead of a direct gRPC call, effectively it’s the same thing with one extra translation step).
Retaining gRPC for core operations also means that existing tools (like the Haveno TypeScript client library and the desktop UI) continue to function as is. The new layer is additive. We won’t force all components to switch to WebSockets or REST; they can opt in based on what suits them. For instance, the desktop GUI might continue using direct gRPC calls for simplicity, whereas a web-based UI will use the new REST/WebSocket combo. Both can co-exist. This dual approach follows best practices: “Use gRPC for service-to-service calls and WebSockets for client real-time interaction,” as noted in API design recommendations. Haveno’s internal peer communications can be seen as service-to-service, which benefit from gRPC’s performance, whereas the user-facing frontends benefit from WebSocket updates for responsiveness.
In short, the gRPC API remains the backbone of Haveno’s distributed network. It will still handle things like order matching, trade escrow flows, and data queries. The abstraction layer we’re adding sits on top of this backbone, translating where necessary (in the case of REST) or tapping into event streams (in the case of WebSockets). This layered design ensures that if, say, Haveno v1.0 nodes communicate via gRPC messages, Haveno v1.1 nodes with WebSockets are still speaking the same language underneath. We maintain full backwards compatibility and network harmony. And developers who have built around Haveno’s gRPC (there is already a TypeScript SDK, etc.) are not forced to rewrite anything; they just gain new options. The robust underlying protocol (Protobuf + Tor) stays in place, and we simply deliver it in more accessible ways.
Comparison to Bisq’s Architecture and Inspiration
Bisq, being the precursor in the space of decentralized exchanges, provides a valuable reference for these design choices. Haveno’s architecture is heavily inspired by Bisq’s, and our enhancements continue in that spirit while leveraging modern tools:
- Tor for P2P Connectivity: Bisq made the decision to run its entire P2P network over Tor hidden services to solve NAT traversal and preserve anonymity. Haveno does the same: all peers use Tor, and our new WebSocket/HTTP layers will also use Tor. This ensures that, like Bisq, Haveno has no central servers and no exposed IP addresses, aligning with the core values of privacy and censorship-resistance. Each Haveno node is just as hidden as a Bisq node, despite now offering richer interfaces.
- Event-Driven Peer Communications: Bisq’s network is event-driven in that nodes gossip messages to each other. Bisq employs a flooding gossip protocol to propagate offers throughout the network: every node learns of new offers, and stores them, via this distributed messaging (as opposed to querying a central server). In Haveno, we maintain a similar decentralized propagation for offers and trades (likely also using gossip or direct messaging via seed nodes). The addition of WebSocket notifications in Haveno is conceptually akin to Bisq’s node receiving a message and immediately acting on it. The difference is that Haveno will make these events available to external clients via WebSockets. In a way, we are exposing the internal event bus to the outside. Bisq’s UI is built-in and reacts to events internally, whereas Haveno’s architecture (daemon + client) with WebSockets allows external UIs to react to those events in real time as well. Both systems aim for real-time updates; we’re just enabling it over standard protocols.
- No Single Point of Failure: Bisq demonstrated that an exchange can run with no centralized server: each node is equal and participates in data distribution and enforcement. Haveno upholds this by ensuring our new layers do not introduce centralization. For example, if we enable filtered offer notifications (like the “bell icon” idea for new offers matching criteria), it will be implemented at the user’s node level (the node itself can decide what to notify the user about), rather than relying on a centralized push service. The WebSocket connections are directly between the user’s app and their Haveno node (or potentially between trading peers for certain messages), not through a hub. This peer-to-peer ethos is the same as Bisq’s, just augmented with user-friendly APIs.
- Offline Message Handling: one challenge in purely P2P networks is delivering messages when a node is offline. Bisq tackled this by implementing an offline mailbox system: encrypted messages (e.g. a trade completion or dispute message) destined for an offline node are stored in a distributed fashion across the network until that node comes online to retrieve them. Haveno can draw from this approach for events as well. While WebSockets require an online connection (you obviously can’t push to a client that isn’t connected), Haveno’s core could ensure that critical events (like “you have a new trade request”) are not lost if the user’s UI is offline, perhaps by storing them in the daemon or requiring the user’s node to be online for certain interactions. In the future, we could integrate an asynchronous delivery mechanism (even something like Matrix or email notifications) for truly offline notifications, similar to how Bisq now has a mobile notification app. But fundamentally, the idea is the same: the system should be robust against one party being temporarily offline, using store-and-forward techniques if needed.
- Architecture Evolution: it’s worth noting that Bisq is a monolithic desktop app, which made internal communication simpler (everything runs in one process). Haveno has from the start aimed to be more modular, with a daemon that can be headless and clients that connect to it. Our proposal enhances this modular design by standardizing the interfaces (WebSocket/HTTP in addition to gRPC). In essence, we are bridging Bisq’s proven P2P network model with modern API design. The result will be a system that offers the same decentralized trustlessness, privacy, and security as Bisq, while also being far more accessible to integrate with other applications and to use in diverse environments (web browsers, mobile apps, etc.). Bisq’s architecture showed that decentralization is viable; Haveno’s architecture seeks to make it user-friendly and developer-friendly as well.
In conclusion, this project outline brings together the strengths of Haveno’s current protocol (secure P2P over Tor, efficient protobuf messages) with new capabilities (WebSocket events, HTTP/REST access) to create a more responsive and extensible Haveno network. By learning from Bisq and employing technologies like Envoy and WebSockets, we ensure Haveno remains fully peer-to-peer and private, but with the real-time, two-way communication users expect in 2025. This will greatly improve the user experience (instant updates and notifications) and open the door for rich integrations and third-party tools, all while maintaining the core principles of decentralization and privacy that Haveno and Bisq stand for.
Sources:
- Haveno design and proposals
- Bisq network architecture (Tor, P2P gossip)
- gRPC and WebSocket integration guidance
- Envoy gRPC-JSON transcoding for REST APIs
- Haveno user feedback on notifications