<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Coral Blocks &#187; Architecture</title>
	<atom:link href="https://www.coralblocks.com/index.php/category/architecture/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.coralblocks.com/index.php</link>
	<description>Building amazing software, one piece at a time.</description>
	<lastBuildDate>Tue, 21 Apr 2026 23:44:16 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.1</generator>
	<item>
		<title>On-Premises and Cloud Infrastructure with CoralSequencer</title>
		<link>https://www.coralblocks.com/index.php/on-premises-and-cloud-infrastructure-with-coralsequencer/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=on-premises-and-cloud-infrastructure-with-coralsequencer</link>
		<comments>https://www.coralblocks.com/index.php/on-premises-and-cloud-infrastructure-with-coralsequencer/#comments</comments>
		<pubDate>Fri, 03 Apr 2026 14:47:09 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[Architecture]]></category>
		<category><![CDATA[CoralSequencer]]></category>
		<category><![CDATA[architecture]]></category>
		<category><![CDATA[cloud]]></category>
		<category><![CDATA[multicast]]></category>
		<category><![CDATA[Sequencer]]></category>
		<category><![CDATA[tcp]]></category>
		<category><![CDATA[udp]]></category>

		<guid isPermaLink="false">https://www.coralblocks.com/index.php/?p=3212</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<style>
* {
font-size: 101%;
}
.li_facts { margin: 0 0 17px 0; }
</style>
<p style="margin-top: 20px;">
CoralSequencer supports multiple transport protocols, offering flexibility when building on-premises and cloud infrastructures. You can use multicast or unicast UDP, TCP, or a combination of them to design your infrastructure in the way that best fits your needs, whether on-premises, in the cloud, or across both. <span id="more-3212"></span> In this article we&#8217;ll walk through some examples with diagrams.
</p>
<h3>On-Premises with Multicast</h3>
<p>
Multicast UDP is the primary and recommended transport for CoralSequencer, built on an industry-established reliable multicast protocol. It is not only more efficient for distributing messages across nodes, but it also offers useful features such as multicast discovery.
</p>
<p><center><img src="https://www.coralblocks.com/wp-content/uploads/2026/04/Screenshot-2026-04-03-at-9.35.44-AM.png" alt="OnPremMulticast" width="737" height="655" class="alignnone size-full wp-image-3216" /></center></p>
<p style="margin-top: 1px;">
The <font color="#6bdc7e">green</font> arrows represent multicast UDP connections, while the <font color="#72bcf9">blue</font> arrows represent TCP connections. The <strong>Bridge</strong> serves as a TCP entry point into the distributed system.</p>
<p/>
<p>
<strong><font color="#0d75c4">IMPORTANT:</font></strong> Later in this article, we’ll see how to run the entire sequencer in the cloud using only TCP, with no multicast at all.
</p>
<p>
To make this diagram smaller, let&#8217;s abbreviate some components: Replayer = <strong>R</strong>, Logger = <strong>L</strong>, Bridge = <strong>B</strong>, Archiver = <strong>A</strong> and Sequencer = <strong>SEQ</strong>.<br />
<center><img src="https://www.coralblocks.com/wp-content/uploads/2026/04/Screenshot-2026-04-03-at-9.55.14-AM.png" alt="OnPremSmall2" width="381" height="333" class="alignnone size-full wp-image-3231" /></center>
</p>
<h3>Extending the Distributed System to the Cloud</h3>
<p>
When multicast UDP is not available, we can use an industry-established sequenced TCP protocol. Your choice of CoralSequencer transport protocol does not affect your application code or logic in any way.
</p>
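<p>The article does not show CoralSequencer&#8217;s actual configuration format, so the property names in the sketch below are purely hypothetical; it only illustrates the point that switching transports is a deployment decision rather than a code change:</p>

```
# HYPOTHETICAL configuration sketch -- the real CoralSequencer
# property names may differ; only the idea matters here.

# on-premises deployment:
transport=multicast-udp

# same application code, deployed in the cloud:
transport=tcp
```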
<p><center><img src="https://www.coralblocks.com/wp-content/uploads/2026/04/Screenshot-2026-04-03-at-10.07.45-AM.png" alt="ExtensionToCloud" width="760" height="652" class="alignnone size-full wp-image-3242" /></center></p>
<p>
Note that we are extending our distributed system to the cloud through a <strong>single bridge-to-bridge TCP connection</strong>. The bridge on the cloud side provides connectivity to all cloud instances. It can have a backup bridge ready to take over in case it fails, so it does not become a single point of failure. You can also deploy more than one bridge on the cloud side to better distribute the load. It is important to understand that <font color="#72bcf9">bridges can be chained together to build any network graph</font>, but simplicity is often the best approach.
</p>
<p>
Also note that <strong>there is <em>no</em> multicast UDP connectivity in the cloud</strong>, only TCP and shared memory. The <strong>Dispatcher</strong> provides shared memory connectivity to nodes on the same machine, in this case within the same cloud instance. It connects out to a bridge over TCP and distributes all messages locally to all nodes through shared memory, using the same memory mapped file. As a result, it does not have to open a different TCP connection to each of its local nodes. It only opens a single TCP connection out to its bridge.
</p>
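<p>CoralSequencer&#8217;s dispatcher implementation is not shown here, but the mechanism it relies on, multiple local readers sharing one memory-mapped file, can be sketched with plain JDK NIO. The class below is a hypothetical illustration, not CoralSequencer code: a writer and a reader map the same file and exchange a length-prefixed message without any extra network connection.</p>

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class SharedMemorySketch {

    // One process (the dispatcher) would write and other processes (local
    // nodes) would read; here both mappings live in one process just to
    // show the round trip through the memory-mapped file.
    public static String roundTrip(String message) {
        try {
            Path file = Files.createTempFile("dispatcher", ".dat");
            try (FileChannel channel = FileChannel.open(file,
                    StandardOpenOption.READ, StandardOpenOption.WRITE)) {

                MappedByteBuffer writer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
                MappedByteBuffer reader = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

                // dispatcher side: write a length-prefixed message it received over TCP
                byte[] bytes = message.getBytes(StandardCharsets.UTF_8);
                writer.putInt(bytes.length);
                writer.put(bytes);

                // local node side: read the message straight out of shared memory
                byte[] out = new byte[reader.getInt()];
                reader.get(out);
                return new String(out, StandardCharsets.UTF_8);
            } finally {
                Files.deleteIfExists(file);
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("NewOrderSingle"));
    }
}
```

Because every local node maps the same pages, the dispatcher pays for one TCP connection and one write, no matter how many nodes are running on the instance.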
<p>
<strong><font color="#0d75c4">IMPORTANT:</font></strong> Both the bridge and the dispatcher operate in <strong>full-duplex</strong>, handling both downstream messages and upstream commands.
</p>
<h3 style="margin-top: 35px;">Extending to Data Centers and External Clients</h3>
<p><center><img src="https://www.coralblocks.com/wp-content/uploads/2026/04/Screenshot-2026-04-03-at-10.39.50-AM.png" alt="SmallDataCenter" width="709" height="575" class="alignnone size-full wp-image-3252" /></center></p>
<h3>Pure TCP Sequencer Infrastructure</h3>
<p><center><img src="https://www.coralblocks.com/wp-content/uploads/2026/04/Screenshot-2026-04-03-at-10.51.55-AM.png" alt="PureTcpSeq" width="728" height="649" class="alignnone size-full wp-image-3265" /></center></p>
<p>Note that there are <em>no</em> multicast UDP connections anywhere, only TCP connections.</p>
<h3 style="margin-top: 35px;">Sequencer Deployed on the Cloud</h3>
<p><center><img src="https://www.coralblocks.com/wp-content/uploads/2026/04/Screenshot-2026-04-03-at-10.57.36-AM.png" alt="AllCloud" width="1060" height="794" class="alignnone size-full wp-image-3269" /></center></p>
<p>
There are <em>no</em> multicast UDP connections anywhere, only TCP. Different cloud regions are connected through bridge-to-bridge connections. Dispatchers reduce the number of TCP connections by using shared memory when running on the same cloud instance.
</p>
<h3 style="margin-top: 35px;">CoralSequencer Transport Protocols</h3>
<p>CoralSequencer offers a variety of transport protocols that can be used without any code changes. Simply change a configuration setting from UDP to TCP and your application is ready to be deployed over a completely different transport. Below we list the available CoralSequencer transport protocols:
</p>
<ul>
<li style="margin-top: 15px;"><strong>Multicast UDP</strong>: The primary and recommended transport for CoralSequencer, using an industry established reliable multicast UDP protocol.</li>
<li style="margin-top: 15px;"><strong>TCP</strong>: The transport used by CoralSequencer when multicast UDP is not available, such as in the cloud, or not desirable, such as for external clients, using an industry established sequenced TCP protocol.</li>
<li style="margin-top: 15px;"><strong>Shared Memory</strong>: The transport used within the same machine or cloud instance to minimize the number of network connections. The Dispatcher is the component that provides connectivity through shared memory to nodes on the same machine or cloud instance.</li>
<li style="margin-top: 15px;"><strong>Dual (UDP + TCP)</strong>: Two redundant connections, one TCP and one unicast UDP, streaming identical messages. The receiving side processes whichever arrives first and discards the duplicate.</li>
<li style="margin-top: 15px;"><strong>Fuse (reliable UDP)</strong>: One unicast UDP connection streaming messages, along with an idle TCP connection used for retransmission of lost messages. The receiving side requests retransmission of any gaps through the TCP connection.</li>
</ul>
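<p>The duplicate-discard logic of the Dual transport can be illustrated with a small sketch. This is not CoralSequencer code; it just shows the idea, assuming every message carries a sequence number and both legs stream identical sequences:</p>

```java
public class DualLegReceiver {

    private long nextExpected = 1;

    /**
     * Called for every message arriving on either leg (TCP or unicast UDP).
     * Returns true if the message should be processed, or false if it is
     * the copy that arrived second and must be discarded.
     */
    public boolean accept(long seq) {
        if (seq == nextExpected) {
            nextExpected++;
            return true;  // first arrival wins, regardless of which leg delivered it
        }
        if (seq < nextExpected) {
            return false; // duplicate from the slower leg
        }
        // seq > nextExpected means messages are missing on BOTH legs; with the
        // Fuse transport this is where a retransmission request would go out
        // over the idle TCP connection
        throw new IllegalStateException("gap detected before seq " + seq);
    }
}
```

The receiver simply feeds every arrival from both legs into accept() and drops whatever returns false, so it always processes whichever copy arrives first.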
<h3 style="margin-top: 35px;">Conclusion</h3>
<p>
CoralSequencer is designed to adapt to the infrastructure you have, rather than forcing you into a single network model. Whether your deployment is fully on premises, fully in the cloud, or split across both, you can combine multicast and unicast UDP, TCP, shared memory, bridges, and dispatchers to build a topology that matches your performance, reliability, high availability, and operational requirements. Most importantly, these transport choices do not require application code changes, enabling incremental evolution of your infrastructure while preserving deterministic behavior and message ordering across the system.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/on-premises-and-cloud-infrastructure-with-coralsequencer/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Architecture Case Study #1: CoralReactor + CoralQueue</title>
		<link>https://www.coralblocks.com/index.php/architecture-case-study-1-coralreactor-coralqueue/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=architecture-case-study-1-coralreactor-coralqueue</link>
		<comments>https://www.coralblocks.com/index.php/architecture-case-study-1-coralreactor-coralqueue/#comments</comments>
		<pubDate>Fri, 23 Jan 2015 00:14:20 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Consulting]]></category>
		<category><![CDATA[CoralQueue]]></category>
		<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[coralqueue]]></category>
		<category><![CDATA[coralreactor]]></category>
		<category><![CDATA[disruptor]]></category>
		<category><![CDATA[gc]]></category>
		<category><![CDATA[low-latency]]></category>
		<category><![CDATA[netty]]></category>
		<category><![CDATA[nio]]></category>
		<category><![CDATA[queue]]></category>
		<category><![CDATA[selector]]></category>
		<category><![CDATA[throughput]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=780</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>You need a high-throughput application capable of handling thousands of client connections simultaneously, but some client requests might take a long time to process for whatever reason. How can that be done efficiently, without impacting other connected clients and without leaving the application unresponsive to new client connections? <span id="more-780"></span></p>
<h3 class="coral">Solution</h3>
<p>To handle thousands of connections an application must use non-blocking sockets over a single <a href="http://docs.oracle.com/javase/7/docs/api/java/nio/channels/Selector.html" target="_blank">selector</a>, which means <font color="#26619b"><b>the same thread will handle thousands of connections simultaneously</b></font>. The problem is that if one of these connections lags for whatever reason, all the other ones, and the application as a whole, must not be affected. In the past this problem was solved with the infamous <em>one-thread-per-client</em> approach, which does not scale and leads to all kinds of multithreading pitfalls such as race conditions, visibility issues and deadlocks. By using <font color="#26619b"><b>one thread for the selector and a fixed number of threads for the heavy-duty work</b></font>, a system can solve this problem by distributing client work (and not client requests) among the heavy-duty threads without affecting the overall performance of the application. But how does this communication between the selector thread and the heavy-duty threads happen? Through CoralQueue demultiplexers and multiplexers.<br />
<br/></p>
<h3 class="coral">Diagram</h3>
<p><a href="http://www.coralblocks.com/wp-content/uploads/2015/02/arch1.jpg"><img src="http://www.coralblocks.com/wp-content/uploads/2015/02/arch1.jpg" alt="arch1" width="1024" height="768" class="alignnone size-full wp-image-872" /></a></p>
<h3 class="coral">Flow</h3>
<style>
.li_flow { margin: 0 0 4px 0; }
</style>
<ul style="padding: 0 40px">
<li class="li_flow">CoralReactor runs on a single thread pinned to an isolated CPU core with CoralThreads.</li>
<li class="li_flow">CoralReactor opens one or more servers listening on a local port. All servers are running on the same reactor thread.</li>
<li class="li_flow">A server can receive from one to thousands of connections from clients across the globe.</li>
<li class="li_flow">Each client sends requests with some work to be performed.</li>
<li class="li_flow">The server does not perform this work. Instead it passes a message describing the work to a heavy-duty thread using a CoralQueue demultiplexer.</li>
<li class="li_flow">The CoralQueue demux distributes the messages among the heavy-duty threads.</li>
<li class="li_flow">The heavy-duty threads are also pinned to isolated CPU cores with CoralThreads.</li>
<li class="li_flow">A heavy-duty thread executes the work and sends back a message with the results to the server using a CoralQueue multiplexer.</li>
<li class="li_flow">The server picks up the message from the CoralQueue mux and reports back the results to the client.</li>
</ul>
<h3 class="coral">FAQ</h3>
<style>
.li_faq { margin: 0 0 17px 0; }
</style>
<ol style="padding: 12px 40px">
<li class="li_faq"><font color="#26619b">Won&#8217;t you have to create garbage when passing messages back and forth among threads?</font><br />
<b>A:</b> No. CoralQueue is an ultra-low-latency, lock-free data structure for inter-thread communication that does not produce any garbage.
</li>
<li class="li_faq"><font color="#26619b">What happens if the queue gets full?</font><br />
<b>A:</b> A full queue will cause the reactor thread to block waiting for space, which introduces latency. To avoid a full queue you can start by increasing the number of heavy-duty threads and/or increasing the size of the queue.
</li>
<li class="li_faq"><font color="#26619b">I did number 2 above but I am still getting a full queue. Now what?</font><br />
<b>A:</b> CoralQueue has a built-in feature to write messages to disk asynchronously when the queue gets full, so the producer does not have to block waiting for space. The heavy-duty threads can then read the messages from the queue file when they don&#8217;t find them in memory. You can use this approach to avoid disturbing the reactor thread, but at that point it is probably also a good idea to make whatever work your heavy-duty threads are performing more efficient.
</li>
<li class="li_faq"><font color="#26619b">How many connections can the application handle?</font><br />
<b>A:</b> CoralReactor can easily handle 10k+ connections concurrently in a single thread. If your machine has additional cores, you can also add more reactor threads to increase this number even more.
</li>
<li><font color="#26619b">How many heavy-duty threads should I have?</font><br />
<b>A:</b> That depends on the number of available CPU cores your machine has. CPU cores are a scarce resource, so you should allocate them across your applications wisely. Creating more threads than the number of available CPU cores won&#8217;t bring any benefit and will actually degrade the performance of the system due to context switches. Ideally you should have a fixed number of heavy-duty threads, each pinned to its own isolated core so it is never interrupted.
</li>
</ol>
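<p>CoralQueue itself is a commercial component whose internals are not shown here, but the garbage-free principle behind answer 1 can be sketched. The class below is a minimal, hypothetical single-producer-single-consumer queue that mirrors the nextToDispatch/flush/availableToPoll/poll/donePolling API used in the code example further down: all message objects are allocated once, up front, and recycled forever after.</p>

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Supplier;

public class SpscQueue<E> {

    private final E[] array;          // all message objects pre-allocated up front
    private final int capacity;
    private final AtomicLong tail = new AtomicLong(0); // published by the producer
    private final AtomicLong head = new AtomicLong(0); // published by the consumer
    private long claimed = 0; // producer-local count of claimed slots
    private long polled = 0;  // consumer-local count of polled slots

    @SuppressWarnings("unchecked")
    public SpscQueue(int capacity, Supplier<E> builder) {
        this.capacity = capacity;
        this.array = (E[]) new Object[capacity];
        for (int i = 0; i < capacity; i++) {
            array[i] = builder.get(); // allocate once; never again, so no garbage
        }
    }

    /** Producer: claim the next pre-allocated slot, or null if the queue is full. */
    public E nextToDispatch() {
        if (claimed - head.get() == capacity) return null; // full: no free slot
        return array[(int) (claimed++ % capacity)];
    }

    /** Producer: make every claimed slot visible to the consumer. */
    public void flush() {
        tail.set(claimed);
    }

    /** Consumer: how many messages are ready to be polled right now. */
    public long availableToPoll() {
        return tail.get() - head.get();
    }

    /** Consumer: fetch the next message (call at most availableToPoll() times). */
    public E poll() {
        return array[(int) ((head.get() + polled++) % capacity)];
    }

    /** Consumer: hand the polled slots back to the producer for reuse. */
    public void donePolling() {
        head.addAndGet(polled);
        polled = 0;
    }
}
```

Because producer and consumer only exchange slot indices, messages travel between threads without locks and without a single allocation on the hot path.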
<h3 class="coral">Variations</h3>
<p>Instead of using one CoralQueue demultiplexer to randomly distribute messages across all heavy-duty threads, you can introduce the concept of <em>lanes</em>, with each lane having a <em>heaviness</em> number attached to it. For example, heavy tasks all go to lane 1, not-so-heavy tasks go to lane 2 and light tasks go to lane 3. The application would then decide to which lane a message should be dispatched. If a lane will be processed by a single heavy-duty thread, it can use a regular <em>one-producer-to-one-consumer</em> CoralQueue queue. If a lane will be served by 2 or more heavy-duty threads, then it can use a CoralQueue demultiplexer. To report the results back to the server, all heavy-duty threads can continue to use a CoralQueue multiplexer.<br />
<br/></p>
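<p>A lane-selection policy can be as simple as a static method. The thresholds below are invented for illustration; a real application would classify requests using whatever cost measure it has for its own workload:</p>

```java
public class LaneRouter {

    public static final int HEAVY_LANE = 1;  // heavy tasks
    public static final int MEDIUM_LANE = 2; // not-so-heavy tasks
    public static final int LIGHT_LANE = 3;  // light tasks

    /** Picks a lane from an estimated processing cost in microseconds. */
    public static int laneFor(long estimatedCostMicros) {
        if (estimatedCostMicros >= 10_000) return HEAVY_LANE;
        if (estimatedCostMicros >= 100) return MEDIUM_LANE;
        return LIGHT_LANE;
    }
}
```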
<h3 class="coral">Code Example</h3>
<p>Below you can see a simple server illustrating the architecture described above. To keep it simple, it receives a string (the request) and returns the string prepended by its length (the response). It supports many clients and distributes the work among worker threads using a demux. It then uses a mux to collect the results from the worker threads and respond to the appropriate client. In a more realistic scenario, the worker threads would be doing heavier work, like accessing a database. You can easily test this server by connecting with a telnet client.</p>
<pre class="brush: java; title: ; notranslate">
package com.coralblocks.coralreactor.client.bench.queued;

import java.nio.ByteBuffer;

import com.coralblocks.coralbits.util.Builder;
import com.coralblocks.coralbits.util.ByteBufferUtils;
import com.coralblocks.coralqueue.demux.AtomicDemux;
import com.coralblocks.coralqueue.demux.Demux;
import com.coralblocks.coralqueue.mux.AtomicMux;
import com.coralblocks.coralqueue.mux.Mux;
import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.server.AbstractLineTcpServer;
import com.coralblocks.coralreactor.server.Server;
import com.coralblocks.coralreactor.util.Configuration;
import com.coralblocks.coralreactor.util.MapConfiguration;

public class QueuedTcpServer extends AbstractLineTcpServer {
	
	static class WorkerRequestMessage {
		
		long clientId;
		ByteBuffer buffer;
		
		WorkerRequestMessage(int maxRequestLength) {
			this.clientId = -1;
			this.buffer = ByteBuffer.allocateDirect(maxRequestLength);
		}
		
		void readFrom(ByteBuffer src) {
			buffer.clear();
			buffer.put(src);
			buffer.flip();
		}
	}
	
	static class WorkerResponseMessage {
		
		long clientId;
		ByteBuffer buffer;
		
		WorkerResponseMessage(int maxResponseLength) {
			this.clientId = -1;
			this.buffer = ByteBuffer.allocateDirect(maxResponseLength);
		}
	}
	
	private final int numberOfWorkerThreads;
	private final Demux&lt;WorkerRequestMessage&gt; demux;
	private final Mux&lt;WorkerResponseMessage&gt; mux;
	private final WorkerThread[] workerThreads;

	public QueuedTcpServer(NioReactor nio, int port, Configuration config) {
	    super(nio, port, config);
	    this.numberOfWorkerThreads = config.getInt(&quot;numberOfWorkerThreads&quot;);
	    final int maxRequestLength = config.getInt(&quot;maxRequestLength&quot;, 256);
	    final int maxResponseLength = config.getInt(&quot;maxResponseLength&quot;, 256);
	    
	    Builder&lt;WorkerRequestMessage&gt; requestBuilder = new Builder&lt;WorkerRequestMessage&gt;() {
			@Override
            public WorkerRequestMessage newInstance() {
	            return new WorkerRequestMessage(maxRequestLength);
            }
	    };
	    
	    this.demux = new AtomicDemux&lt;WorkerRequestMessage&gt;(1024, requestBuilder, numberOfWorkerThreads);
	    
	    Builder&lt;WorkerResponseMessage&gt; responseBuilder = new Builder&lt;WorkerResponseMessage&gt;() {
	    	@Override
            public WorkerResponseMessage newInstance() {
	            return new WorkerResponseMessage(maxResponseLength);
            }
	    };
	    
	    this.mux = new AtomicMux&lt;WorkerResponseMessage&gt;(1024, responseBuilder, numberOfWorkerThreads);
	    
	    this.workerThreads = new WorkerThread[numberOfWorkerThreads];
    }
	
	@Override
	public void open() {
		
		for(int i = 0; i &lt; numberOfWorkerThreads; i++) {
			if (workerThreads[i] != null) {
				try {
					// make sure it is dead!
					workerThreads[i].stopMe();
					workerThreads[i].join();
				} catch(Exception e) {
					throw new RuntimeException(e);
				}
			}
		}
		
		mux.clear();
		demux.clear();
			
		for(int i = 0; i &lt; numberOfWorkerThreads; i++) {
			workerThreads[i] = new WorkerThread(i);
			workerThreads[i].start();
		}
		
		nio.addCallback(this); // we want to constantly receive callbacks from 
							   // reactor thread on handleCallback() to drain responses from mux
		
		super.open();
	}
	
	@Override
	public void close() {
		
		for(int i = 0; i &lt; numberOfWorkerThreads; i++) {
			if (workerThreads[i] != null) {
				workerThreads[i].stopMe();
			}
		}
		
		nio.removeCallback(this);
		
		super.close();
	}
	
	@Override
	protected void handleMessage(Client client, ByteBuffer msg) {
		
		if (ByteBufferUtils.equals(msg, &quot;bye&quot;) || ByteBufferUtils.equals(msg, &quot;exit&quot;)) {
			client.close();
			return;
		}
		
		// on a new message, dispatch to the demux so worker threads can process it:
		
		WorkerRequestMessage req;
		
		while((req = demux.nextToDispatch()) == null); // busy spin...
		
		req.clientId = getClientId(client);
		req.readFrom(msg);
		
		demux.flush();
	}
	
	class WorkerThread extends Thread {
		
		private final int index;
		private volatile boolean running = true;
		
		public WorkerThread(int index) {
			super(&quot;WorkerThread-&quot; + index);
			this.index = index;
		}
		
		public void stopMe() {
			running = false;
		}
		
		@Override
        public void run() {
            
			while(running) {
			
    			// read from demux and process:
    			
    			long avail = demux.availableToPoll(index);
    			
    			if (avail &gt; 0) {
    				
    				for(int i = 0; i &lt; avail; i++) {
    					
    					// get the request:
    					WorkerRequestMessage req = demux.poll(index);
    					
    					// do something heavy with the request, like accessing database or big data...
    					// for our example we just prepend the message length
    					
    					long clientId = req.clientId;
    					int msgLen = req.buffer.remaining();
    					
    					// get a response object from mux:
    
    					WorkerResponseMessage res = null;
    					
    					while((res = mux.nextToDispatch(index)) == null); // busy spin
    
    					// notice below that we are just copying data from request to response:
    					res.clientId = clientId; // copy clientId
    					res.buffer.clear();
    					ByteBufferUtils.appendInt(res.buffer, msgLen);
    					res.buffer.put((byte) ':');
    					res.buffer.put((byte) ' ');
    					res.buffer.put(req.buffer); // copy buffer contents
    					res.buffer.flip(); // don't forget
    				}
    				
					mux.flush(index);
    				demux.donePolling(index);
    				nio.wakeup(); // don't forget so handleCallback is called
    			}
			}
        }
	}
	
	@Override
	protected void handleCallback(long nowInMillis) {
		
		// this is the reactor thread calling us back to check whether the mux has pending results:
		
		long avail = mux.availableToPoll();
		
		if (avail &gt; 0) {
			
			for(long i = 0; i &lt; avail; i++) {
				
				WorkerResponseMessage res = mux.poll();
				
				Client client = getClient(res.clientId);
				
				if (client != null) { // client might have disconnected...
					client.send(res.buffer);
				}
			}
			
			mux.donePolling();
		}
	}
	
	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();
		MapConfiguration config = new MapConfiguration();
		config.add(&quot;numberOfWorkerThreads&quot;, 4);
		Server server = new QueuedTcpServer(nio, 45451, config);
		server.open();
		nio.start();
		
	}
	
}
</pre>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/architecture-case-study-1-coralreactor-coralqueue/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
