<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Coral Blocks &#187; CoralMQ</title>
	<atom:link href="https://www.coralblocks.com/index.php/category/coralmq/feed" rel="self" type="application/rss+xml" />
	<link>https://www.coralblocks.com/index.php</link>
	<description>Building amazing software, one piece at a time.</description>
	<lastBuildDate>Fri, 03 Apr 2026 15:31:21 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.1</generator>
	<item>
		<title>State-of-the-Art Distributed Systems with CoralSequencer</title>
		<link>https://www.coralblocks.com/index.php/state-of-the-art-distributed-systems-with-coralmq/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=state-of-the-art-distributed-systems-with-coralmq</link>
		<comments>https://www.coralblocks.com/index.php/state-of-the-art-distributed-systems-with-coralmq/#comments</comments>
		<pubDate>Sat, 09 Apr 2016 02:47:27 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralMQ]]></category>
		<category><![CDATA[CoralSequencer]]></category>
		<category><![CDATA[distributed system]]></category>
		<category><![CDATA[ECN]]></category>
		<category><![CDATA[exchange]]></category>
		<category><![CDATA[mold]]></category>
		<category><![CDATA[MoldUDP]]></category>
		<category><![CDATA[MQ]]></category>
		<category><![CDATA[pub/sub]]></category>
		<category><![CDATA[rabbitmq]]></category>
		<category><![CDATA[Sequencer]]></category>
		<category><![CDATA[soup]]></category>
		<category><![CDATA[soupbin]]></category>
		<category><![CDATA[SoupBinTCP]]></category>
		<category><![CDATA[tibco]]></category>
		<category><![CDATA[zeromq]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=1093</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<link rel="canonical" href="https://www.coralblocks.com/index.php/state-of-the-a…s-with-coralmq/" />
<p>In this article we introduce the big picture of CoralSequencer, a full-fledged, ultra-low-latency, high-reliability, software-based middleware for the development of distributed systems based on asynchronous messages. We discuss CoralSequencer&#8217;s main parts and how it uses a sophisticated and low-latency protocol to distribute messages across nodes through reliable UDP multicast. <span id="more-1093"></span><br />
<br/></p>
<p><!-- You should also check <a href="https://www.youtube.com/watch?v=b1e4t2k2KJY" target="_blank">the YouTube video below</a> presented by Brian Nigito from Jane Street describing the philosophy behind the CoralSequencer architecture. --><br />
<!-- center><br />
<iframe width="560" height="315" src="https://www.youtube.com/embed/b1e4t2k2KJY" title="How To Build An Exchange" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe><br />
</center --></p>
<p>You should also check <a href="https://www.youtube.com/watch?v=DyktSiBTCdk" target="_blank">this YouTube video</a>, where we present the main characteristics of the sequencer architecture together with some advanced features of CoralSequencer. <b>Note:</b> Contrary to the YouTube version, the video below has <font color="red"><b>no ads</b></font>.<br />
<center><br />
<!-- iframe width="560" height="315" src="https://www.youtube.com/embed/DyktSiBTCdk?si=Rp-mN6kQYQ9PAkIX" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe --><br />
<div style="width: 600px; max-width: 100%;" class="wp-video"><!--[if lt IE 9]><script>document.createElement('video');</script><![endif]-->
<video class="wp-video-shortcode" id="video-1093-1" width="600" height="338" preload="metadata" controls="controls"><source type="video/mp4" src="/wp-content/uploads/videos/CoralSequencer.mp4?_=1" /><a href="/wp-content/uploads/videos/CoralSequencer.mp4">/wp-content/uploads/videos/CoralSequencer.mp4</a></video></div><br />
</center></p>
<style>
.li_faq { margin: 0 0 17px 0; }
</style>
<p><!-- h3 class="coral">Fundamentals</h3>
<ul style="padding: 12px 40px">
<li class="li_faq"><font color="#26619b">What is a distributed system?</font><br />
It is an integrated system that spans multiple machines, clients or <i>nodes</i> which execute their tasks in parallel in a non-disruptive way. A distributed system should be robust enough to continue to operate when one or more nodes fail, stop working, lag or are taken down for maintenance. Each machine can run multiple nodes, not just one. Each node communicates with each other by sending messages to the sequencer (<a href="https://en.wikipedia.org/wiki/Atomic_broadcast" target="_blank">atomic broadcaster</a>). Nodes can produce and consume messages.
</li>
<li class="li_faq"><font color="#26619b">What is a Messaging Queue or MQ?</font><br />
It is a <a href="http://en.wikipedia.org/wiki/Message-oriented_middleware" target="_blank">message-oriented middleware</a> that enables the development of a distributed system by using asynchronous messages for <i>inter-node communication</i>. A traditional approach for message distribution is the <a href="http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern" target="_blank">publish-subscribe pattern</a>. CoralSequencer uses a different <i>all-messages-to-all-nodes</i> approach, through multicast, by implementing a reliable UDP protocol.
</li>
<li class="li_faq"><font color="#26619b">What are distributed systems built on top of a MQ good at?</font><br />
They are great at implementing applications with a lot of moving parts (i.e. nodes) which need real-time monitoring and real-time reaction capabilities. To summarize they provide: parallelism (nodes can truly run in parallel), tight integration (all nodes see the same messages in the same order), decoupling (nodes can evolve independently), failover/redundancy (when a node fails, another one can be running and building state to take over immediately), scalability/load balancing (just add more nodes), elasticity (nodes can lag during activity peaks without affecting the system as a whole) and resiliency (nodes can fail / stop working without taking the whole system down).
</li>
<li class="li_faq"><font color="#26619b">What is the advantage of CoralSequencer?</font><br />
CoralSequencer is a state-of-the-art implementation of a messaging queue for ultra-low-latency distributed systems that require a high degree of reliability. It has all the Coral Blocks&#8217; advantages: high performance, low latency, low variance, zero garbage and simplicity. As you will see in subsequent articles, CoralSequencer is extremely easy to develop, configure, deploy, monitor and maintain.
</li>
<li class="li_faq"><font color="#26619b">What are some examples of systems that can be built on top of CoralSequencer?</font><br />
Electronic exchanges (ECNs), trading platforms, banking systems, automated online advertising systems, defense/military systems, credit card systems or any distributed system with a large number of transactions that requires all the advantages listed above plus ultra low latency and high reliability.
</li>
<li class="li_faq"><font color="#26619b">Is CoralSequencer fully deterministic?</font><br />
All nodes read all messages in the exact same order, allowing nodes to become finite state machines, which in turn makes perfect clusters and high-availability possible. CoralSequencer even provides a centralized clock (and timers) in the event-stream so nodes can rebuild the exact same state every time, with the same timestamp they have observed in the past.
</li>
</ul -->
<h3 class="coral">Quick Facts</h3>
<ul style="padding: 12px 40px">
<li class="li_faq">All nodes read all messages in the exact same order, dropping messages they are not interested in</li>
<li class="li_faq">All messages are persisted so late-joining nodes can rewind and catch up to build the exact same state as other nodes</li>
<li class="li_faq">Message broadcasting is done through a reliable multicast UDP protocol: no message is ever lost</li>
<li class="li_faq">Supports all cloud environments through TCP, without using multicast</li>
<li class="li_faq">Change the transport protocol from UDP to TCP by simply flipping a configuration flag, with no code changes</li>
<li class="li_faq">Hybrid approach (UDP + TCP) to support extensions of the distributed system in the cloud</li>
<li class="li_faq">No single point of failure, from the software down to the hardware infrastructure</li>
<li class="li_faq">Each session is automatically archived with all its messages so it can be replayed later for testing, simulation, analysis and auditing</li>
<li class="li_faq">High-level, straightforward API makes it easy to write nodes that publish/consume messages</li>
<li class="li_faq">As low-latency as it can be through UDP multicast</li>
<li class="li_faq">Zero garbage created per message &#8211; no gc overhead</li>
</ul>
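<p>The reliability claim above rests on every message in the event-stream carrying a session-wide sequence number: a receiver that sees a jump in sequence knows exactly which messages the network dropped and can re-request that range from a replayer. The sketch below only illustrates that gap-detection idea; the class and method names are ours, not CoralSequencer&#8217;s API.</p>

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative only: minimal gap detection for a sequenced event-stream.
// These names are ours, not CoralSequencer's API.
public class GapDetector {

    private long expected = 1; // next sequence number we expect to see
    private final List<long[]> gaps = new ArrayList<>(); // inclusive [from, to] ranges to re-request

    // Returns true if the message is the next one in sequence; records a gap otherwise.
    public boolean onMessage(long seq) {
        if (seq == expected) {
            expected++;
            return true;
        }
        if (seq > expected) {
            gaps.add(new long[] { expected, seq - 1 }); // dropped range: re-request from a replayer
            expected = seq + 1;
        }
        return false; // gap detected, or a duplicate/out-of-order arrival (seq < expected)
    }

    public List<long[]> gaps() {
        return gaps;
    }

    public static void main(String[] args) {
        GapDetector d = new GapDetector();
        d.onMessage(1);
        d.onMessage(2);
        d.onMessage(5); // messages 3 and 4 were dropped by the network
        System.out.println("gaps to re-request: " + d.gaps().size()); // prints 1 (the range [3, 4])
    }
}
```

In practice the re-request goes to a replayer (local or remote), which is exactly why the session archive mentioned above must keep every message.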
<h3 class="coral">Quick Features</h3>
<ul style="padding: 12px 40px">
<li class="li_faq">Message agnostic &#8211; send and receive anything you want</li>
<li class="li_faq">Automatic replayer discovery through multicast, making it easy to move your replayers across machines</li>
<li class="li_faq">Message fragmentation at the protocol level, so you can transparently send messages of any size</li>
<li class="li_faq">Comprehensive test framework for deterministic single-threaded memory-transport automated tests</li>
<li class="li_faq">Choose your transport protocol through configuration without changing a single line of your application code: TCP (for the cloud), UDP (multicast), Shared-Memory (same machine) and Memory (for tests)</li>
<li class="li_faq">TCP Rewind</li>
<li class="li_faq">Non-rewinding nodes</li>
<li class="li_faq">Transparent batching and in-flight messages</li>
<li class="li_faq"><a href="/index.php/shared-memory-transport-x-multicast-transport/" target="_blank">Shared-memory Dispatcher node</a> to avoid multicast fan-out in machines running several nodes pinned to the same CPU core</li>
<li class="li_faq">Full CLOUD support through the TCP transport; to later switch from TCP to multicast UDP you simply flip a configuration flag</li>
<li class="li_faq">Easily replay a full past session archive file through an offline node for testing, validation, auditing, reports, etc. (from a local file or from a centralized remote server)</li>
<li class="li_faq">Tiered replayer architecture for scalability (optional)</li>
<li class="li_faq">Comprehensive, zero-garbage, binary and high-performance native serialization protocol with repeating groups, optional fields, IDL, etc. (optional)</li>
<li class="li_faq">Full duplex bridges (UDP and TCP)</li>
<li class="li_faq">Long distance bridges with TCP and UDP redundant channels for performance</li>
<li class="li_faq">A variety of internal messages providing features like node active/passive, node heartbeats, force passive, etc</li>
<li class="li_faq">Fully deterministic sequencer clock for the centralized distributed system time</li>
<li class="li_faq">Local and centralized timers with nanosecond precision</li>
<li class="li_faq">Sequencer-generated messages</li>
<li class="li_faq">Hot-Hot nodes in a perfect cluster, using the same node account with different instance IDs</li>
<li class="li_faq">Multiple sequencers in parallel with cross-connect nodes</li>
<li class="li_faq"><a href="/index.php/writing-a-c-coralsequencer-node/" target="_blank">C++ Node</a> support (write a node in C++ that receives and sends messages through JNI)</li>
<li class="li_faq">Nodes can choose which sequence number to rewind from (allowing for managing state in custom snapshot servers)</li>
<li class="li_faq">Nodes can commit a sequence number so that they don&#8217;t need to reprocess the whole event-stream in case of rewinding</li>
<li class="li_faq">Remote administration (telnet, rest and http)</li>
<li class="li_faq">Logger node</li>
<li class="li_faq">Archiver node</li>
<li class="li_faq">Admin Node</li>
<li class="li_faq">And many others</li>
</ul>
<h3 class="coral">Node Example</h3>
<p><br/></p>
<pre class="brush: java; title: ; notranslate">
package com.coralblocks.coralsequencer.node;

import java.nio.ByteBuffer;

import com.coralblocks.coralbits.util.ByteBufferUtils;
import com.coralblocks.coralbits.util.DateTimeUtils;
import com.coralblocks.coralreactor.admin.AdminAction;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.util.Configuration;
import com.coralblocks.coralsequencer.message.Message;
import com.coralblocks.coralsequencer.mq.Node;

public class SampleNode extends Node {
	
	public SampleNode(NioReactor nio, String name, Configuration config) {

		super(nio, name, config);

		addAdminAction(new AdminAction(&quot;sendTime&quot;) {
			@Override
			public boolean execute(CharSequence args, StringBuilder results) {
				sendTime();
				results.append(&quot;Time successfully sent!&quot;);
				return true;
			}
		});
	}
	
	private void sendTime() {
		sendCommand(&quot;TIME-&quot; + System.currentTimeMillis());
	}
	
	@Override
	protected void handleMessage(boolean isMine, Message msg) {

		if (!isMine) return; // not interested, quickly drop it...

		ByteBuffer data = msg.getData(); // the raw bytes of the message...

		long epochInNanos = eventStreamEpoch(); // deterministic centralized sequencer clock...

		CharSequence now = DateTimeUtils.formatDateTimeInNanos(epochInNanos);

		System.out.println(&quot;Saw my message at &quot; + now + &quot;: &quot; + ByteBufferUtils.parseString(data));
	}
}
</pre>
<p><br/><br />
<br/></p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/state-of-the-art-distributed-systems-with-coralmq/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CoralSequencer Performance Numbers</title>
		<link>https://www.coralblocks.com/index.php/coralmq-performance-numbers/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=coralmq-performance-numbers</link>
		<comments>https://www.coralblocks.com/index.php/coralmq-performance-numbers/#comments</comments>
		<pubDate>Fri, 21 Aug 2015 03:03:19 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralMQ]]></category>
		<category><![CDATA[CoralSequencer]]></category>
		<category><![CDATA[coralmq]]></category>
		<category><![CDATA[latency]]></category>
		<category><![CDATA[throughput]]></category>
		<category><![CDATA[zeromq]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=1236</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In this article we present the latency and throughput numbers of CoralSequencer. <span id="more-1236"></span> To do that, we compute the time it takes for a node to publish a message to the sequencer and receive the corresponding response in the event-stream; in other words, <strong>we are measuring network round-trip times</strong>. There are two independent JVMs: one running the sequencer and one running the benchmark node. The node sends a message to the sequencer, the sequencer picks it up and publishes a response in the event-stream so that the node can receive it. CoralSequencer comes with a <code>BenchmarkNode</code> implementation that you can use to measure throughput and latency in your own environment.</p>
<p><!-- The machine used for the latency benchmarks below was a fast Intel Xeon E-2288G octa-core (8 x 3.70GHz) Ubuntu box not overclocked. --><br />
The machine used for the latency benchmarks below was a fast Intel 13th Generation Core i9-13900KS (8 x 3.20GHz Base / 6.00GHz Turbo) Ubuntu box.</p>
<p><strong>NOTE:</strong> Everyone&#8217;s network environment is different, which makes over-the-wire benchmark numbers hard to compare. To keep things simple we present loopback numbers (i.e. client and server running on the same physical machine, but in different JVMs), which are easy to compare and which weed out external factors, isolating the performance of the application + network code. To calculate total numbers you should add your typical over-the-wire network latency. A 256-byte packet traveling through 10 Gigabit Ethernet will take at least 382 nanoseconds to go from NIC to NIC (ignoring the switch hop). If your Ethernet is 1 Gigabit, the latency is at least 3.82 micros on top of CoralSequencer&#8217;s numbers. Another factor is network card latency: going from JVM to kernel to NIC can be costly, and some good network cards optimize that by offering kernel bypass (e.g. OpenOnload from Solarflare).</p>
<h3 class="coral">Latency Numbers</h3>
<pre>
Message Size: 1024 bytes
Messages: 1,000,000
Avg Time: <font color="blue"><strong>3.379 micros</strong></font> (round-trip network time)
Min Time: 2.717 micros
Max Time: 73.174 micros
75% = [avg: 3.236 micros, max: 3.428 micros]
90% = [avg: 3.286 micros, max: 3.775 micros]
99% = [avg: 3.359 micros, max: 5.155 micros]
99.9% = [avg: 3.376 micros, max: 5.547 micros]
99.99% = [avg: 3.378 micros, max: 5.719 micros]
99.999% = [avg: 3.378 micros, max: 11.311 micros]
</pre>
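<p>Each percentile line above reports the average and the maximum of the best (lowest) slice of the recorded round-trip samples. Below is a minimal sketch of that computation, with our own method names (CoralSequencer&#8217;s <code>BenchmarkNode</code> produces this report for you):</p>

```java
import java.util.Arrays;

// Illustrative only: "best percentile" stats in the format of the report above.
public class PercentileStats {

    // Average of the lowest (best) fraction of the samples, in nanos.
    public static double avgOfBest(long[] samplesNanos, double fraction) {
        long[] sorted = samplesNanos.clone();
        Arrays.sort(sorted);
        int n = (int) (sorted.length * fraction);
        long sum = 0;
        for (int i = 0; i < n; i++) sum += sorted[i];
        return (double) sum / n;
    }

    // Maximum of the lowest (best) fraction of the samples, in nanos.
    public static long maxOfBest(long[] samplesNanos, double fraction) {
        long[] sorted = samplesNanos.clone();
        Arrays.sort(sorted);
        int n = (int) (sorted.length * fraction);
        return sorted[n - 1];
    }

    public static void main(String[] args) {
        long[] rtts = { 2717, 3200, 3300, 3400, 73174 }; // hypothetical round-trip samples in nanos
        System.out.println("75% avg: " + avgOfBest(rtts, 0.75) + " nanos");
        System.out.println("75% max: " + maxOfBest(rtts, 0.75) + " nanos");
    }
}
```

Note how looking at the best 75% or 99% excludes rare outliers (like the 73-micro max above), which is why the percentile averages sit so close to the overall average.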
<p><!-- pre><br />
Message Size: 256 bytes<br />
Messages: 1,000,000<br />
Avg Time: <font color="blue"><strong>4.771 micros</strong></font><br />
Min Time: 3.64 micros<br />
Max Time: 616.274 micros<br />
75% = [avg: 4.563 micros, max: 4.876 micros]<br />
90% = [avg: 4.621 micros, max: 4.95 micros]<br />
99% = [avg: 4.666 micros, max: 5.963 micros]<br />
99.9% = [avg: 4.68 micros, max: 6.958 micros]<br />
99.99% = [avg: 4.736 micros, max: 279.053 micros]<br />
99.999% = [avg: 4.766 micros, max: 485.136 micros]
</pre --></p>
<p><br/></p>
<h3 class="coral">Throughput Numbers</h3>
<p style="margin-top: 26px; margin-bottom: 16px;">
The machine used for the throughput benchmarks below was a fast Intel Xeon E-2288G octa-core (8 x 3.70GHz) Ubuntu box not overclocked.
</p>
<p style="margin-bottom: 16px;">
The throughput numbers below are measured when three nodes are pushing messages to the sequencer as fast as they can. <i>How many messages can the sequencer receive and send out?</i>
</p>
<pre style="margin-bottom: 28px;">
Message Size: 256 bytes
Messages Sent: 3,000,000
Total Time: 2.688 secs
Messages per second: <font color="blue"><strong>1,116,166</strong></font>
Average Time per message: 895 nanos
</pre>
<p>
The three nodes used to stress out the sequencer had throughput numbers close to:
</p>
<pre>
Message Size: 256 bytes
Messages Sent: 2,000,000
Messages per second: 470,483
Average Time per message: 2.125 micros
</pre>
<p style="margin-top: 30px;">
<b>NOTE ABOUT BATCHING:</b> The throughput numbers above were measured with the nodes batching aggressively, in other words, sending more than one message inside the same UDP packet. If you assume that the MTU (maximum transmission unit) is around 1500 bytes, then you can fit five 256-byte messages inside the same UDP packet, increasing throughput considerably.
</p>
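<p>As a back-of-the-envelope check of the batching math (assuming a 1500-byte MTU and ignoring per-message framing overhead):</p>

```java
// Illustrative arithmetic for the batching note above.
// 1500 bytes is the assumed Ethernet MTU; framing overhead is ignored for simplicity.
public class BatchingMath {

    // How many fixed-size messages fit inside one UDP packet of the given capacity.
    public static int messagesPerPacket(int mtuBytes, int messageBytes) {
        return mtuBytes / messageBytes;
    }

    public static void main(String[] args) {
        System.out.println("256-byte messages per packet: " + messagesPerPacket(1500, 256)); // prints 5
        System.out.println("64-byte messages per packet:  " + messagesPerPacket(1500, 64));  // prints 23
    }
}
```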
<p style="margin-top: 28px;">
If you decrease the message size, say to <strong>64 bytes</strong>, then you will be able to batch even more, increasing the throughput numbers even further. Below are the sequencer throughput numbers when the nodes are aggressively batching 64-byte messages inside the same UDP packet:
</p>
<pre style="margin-bottom: 28px;">
Message Size: 64 bytes
Messages Sent: 3,000,000
Total Time: 1.287 secs
Messages per second: <font color="blue"><strong>2,330,160</strong></font>
Average Time per message: 429 nanos
</pre>
<p>
The three nodes used to stress out the sequencer had throughput numbers close to:
</p>
<pre>
Message Size: 64 bytes
Messages Sent: 2,000,000
Messages per second: 1,641,239
Average Time per message: 609 nanos
</pre>
<p style="margin-top: 28px;">
Below we present the sequencer throughput numbers when no batching at the node level is taking place, in other words, when the nodes are sending only one 256-byte message per UDP packet.</p>
<pre style="margin-bottom: 28px;">
Message Size: 256 bytes
Messages Sent: 2,000,000
Total Time: 2.543 secs
Messages per second: <font color="blue"><strong>786,497</strong></font>
Average Time per message: 1.271 micros
</pre>
<p>
The three nodes used to stress out the sequencer had throughput numbers close to:
</p>
<pre>
Message Size: 256 bytes
Messages Sent: 2,000,000
Messages per second: 275,861
Average Time per message: 3.625 micros
</pre>
<p style="margin-top: 28px;">
The worst case scenario is a big message filling up the entire UDP packet. Note that with this message size not even the sequencer is able to batch anything. Below are the throughput numbers when the nodes are sending 1400-byte messages.</p>
<pre style="margin-bottom: 28px;">
Message Size: 1,400 bytes
Messages Sent: 2,000,000
Total Time: 6.097 secs
Messages per second: <font color="blue"><strong>328,025</strong></font>
Average Time per message: 3.048 micros
</pre>
<p>
The three nodes used to stress out the sequencer had throughput numbers close to:
</p>
<pre>
Message Size: 1,400 bytes
Messages Sent: 2,000,000
Messages per second: 111,119
Average Time per message: 8.999 micros
</pre>
<p><br/></p>
<h3 class="coral">Conclusion</h3>
<p>CoralSequencer can sustain a throughput of <strong>over 2 million messages per second</strong> when batching is used. The round-trip latencies are close to <strong>3.4 micros per message (1024 bytes)</strong>. Without batching at the node level, the sequencer throughput is around <strong>780 thousand messages per second</strong> for 256-byte messages. For the worst case scenario, a 1400-byte message filling the UDP packet, the sequencer throughput is around <strong>330 thousand messages per second</strong>.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/coralmq-performance-numbers/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Nodes (CoralSequencer article series)</title>
		<link>https://www.coralblocks.com/index.php/nodes-coralmq-article-series/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=nodes-coralmq-article-series</link>
		<comments>https://www.coralblocks.com/index.php/nodes-coralmq-article-series/#comments</comments>
		<pubDate>Tue, 23 Feb 2016 23:05:37 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralMQ]]></category>
		<category><![CDATA[CoralSequencer]]></category>
		<category><![CDATA[coralmq]]></category>
		<category><![CDATA[MQ]]></category>
		<category><![CDATA[node]]></category>
		<category><![CDATA[rabbitmq]]></category>
		<category><![CDATA[zeromq]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=1977</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In a distributed system, <strong>Nodes</strong> are responsible for executing the application logic in a decentralized/distributed way. With CoralSequencer you can easily code a node that will send commands to the sequencer and listen to messages in the <i>event-stream</i> (i.e. message-bus). <span id="more-1977"></span> Below we show an example of a simple node that sends a <code>TIME</code> command to the sequencer and waits to see the corresponding message in the event-stream. When it receives the message, it waits 3 seconds and sends another command, repeating the process. </p>
<pre class="brush: java; highlight: [41,45]; title: ; notranslate">
package com.coralblocks.coralsequencer.node;

import java.nio.ByteBuffer;

import com.coralblocks.coralbits.ts.TimeUnit;
import com.coralblocks.coralbits.util.ByteBufferUtils;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.util.Configuration;
import com.coralblocks.coralsequencer.message.Message;
import com.coralblocks.coralsequencer.mq.Node;

public class SampleNode extends Node {
	
	private final static int PERIOD = 3; // 3 seconds...
	
	public SampleNode(NioReactor nio, String name, Configuration config) {
		super(nio, name, config);
	}
	
	@Override
	protected void handleActivated() {
		// this method is called when the node becomes active
		sendCommand();
	}
	
	@Override
	protected void handleDeactivated() {
		// called when a node has been deactivated
		// once deactivated a node will not send commands
		removeEventTimeout(); // turn off event timeout if set
	}
	
	@Override
	protected void handleEventTimeout(long now, long period, TimeUnit unit) {
		// this method is triggered by the event timeout you are setting in the handleMessage method
		// Note: it is triggered only once so you must re-register the timeout if you want to do it again (it is not a loop timer)
		sendCommand();
	}
	
	private void sendCommand() {
		sendCommand(&quot;TIME-&quot; + System.currentTimeMillis());
	}

	@Override
	protected void handleMessage(boolean isMine, Message msg) {

		if (!isMine || isRewinding()) return; // not interested, quickly ignore them...

		ByteBuffer data = msg.getData();

		System.out.println(&quot;Saw my message in the event-stream: &quot; + ByteBufferUtils.parseString(data));

		setEventTimeout(PERIOD, TimeUnit.SECONDS); // set a trigger to send the command again after 3 seconds
	}
}
</pre>
<p>Note that every command sent to the sequencer by a node makes the sequencer publish a corresponding message in the event-stream with the sender&#8217;s account. That&#8217;s how the node knows that its command was received and processed by the sequencer. You don&#8217;t need to worry about this or do anything: under the hood CoralSequencer will resend the command if it does not see the corresponding message (i.e. the ack) in the event-stream after N milliseconds. Again, this is totally transparent to the developer coding the node, as you can see in the source code above.</p>
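<p>The resend-until-acked idea can be pictured with the sketch below, which uses a manual clock. This is only an illustration; CoralSequencer does all of this internally, and the names are ours, not its API.</p>

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: resend a command if its ack has not appeared in the event-stream
// within a timeout. CoralSequencer handles this transparently; these names are ours.
public class PendingCommandTracker {

    private final long resendAfterMillis;
    private final Map<Long, String> pending = new HashMap<>(); // commandId -> payload
    private final Map<Long, Long> sentAt = new HashMap<>();    // commandId -> last send time

    public PendingCommandTracker(long resendAfterMillis) {
        this.resendAfterMillis = resendAfterMillis;
    }

    // Called when the node sends a command to the sequencer.
    public void sent(long id, String payload, long nowMillis) {
        pending.put(id, payload);
        sentAt.put(id, nowMillis);
    }

    // Called when the corresponding message (the ack) is seen in the event-stream.
    public void acked(long id) {
        pending.remove(id);
        sentAt.remove(id);
    }

    // Resends (here: just re-stamps) anything not acked within the timeout; returns how many.
    public int checkTimeouts(long nowMillis) {
        int resent = 0;
        for (Map.Entry<Long, Long> e : sentAt.entrySet()) {
            if (nowMillis - e.getValue() >= resendAfterMillis) {
                e.setValue(nowMillis); // a real implementation would re-send the payload here
                resent++;
            }
        }
        return resent;
    }

    public int pendingCount() {
        return pending.size();
    }
}
```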
<p>A node can be read-only, in other words, it will only listen to the event-stream and never send any command to the sequencer. Our node above does send a command to the sequencer (i.e. it is not read-only) using the <code>sendCommand(String)</code> method. Besides that method, you can also use <code>sendCommand(byte[])</code>, <code>sendCommand(ByteBuffer)</code> and <code>sendCommand(Proto)</code>.</p>
<p>The <code>isMine</code> flag passed to the <code>handleMessage(boolean, Message)</code> method is important as it tells you whether the message you are receiving belongs to this node or not. Recall that the sequencer always broadcasts all messages to all nodes, so you will also see messages from other nodes in this method. If you are only interested in your own messages you can quickly drop the others by checking the <code>isMine</code> boolean.</p>
<p>Another important check is done with <code>isRewinding()</code>. The first time the node connects to the sequencer it receives a replay of all the previous messages from the current session, in a process called <i>rewinding</i>. You can use these messages to rebuild state if you need to. In our simple example we don&#8217;t want to do anything with past messages, so we simply drop them.</p>
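<p>When you do need past messages, the rewind makes state rebuilding straightforward: the node applies every rewound message through the exact same code path as a live message, so by the end of the rewind it holds the same state as every other node. Below is a tiny illustrative state object (our names, not CoralSequencer&#8217;s API):</p>

```java
// Illustrative only: state rebuilt identically from rewound or live messages.
// These names are ours, not CoralSequencer's API.
public class ReplayedState {

    private long lastAppliedSequence = 0;
    private long messageCount = 0;

    // Apply one message from the event-stream; rewound and live messages share this path.
    public void apply(long sequence) {
        if (sequence <= lastAppliedSequence) return; // already applied (e.g. seen again on a re-rewind)
        lastAppliedSequence = sequence;
        messageCount++;
    }

    public long messageCount() {
        return messageCount;
    }

    public long lastAppliedSequence() {
        return lastAppliedSequence;
    }
}
```

Because every node applies the same messages in the same order, any two nodes running this code finish the rewind with identical <code>messageCount</code> and <code>lastAppliedSequence</code> values.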
<p><br/></p>
<h3 class="coral">Configuring the Node</h3>
<p>Below is the CoralSequencer DSL to configure a node:</p>
<pre class="brush: java; title: ; notranslate">
# allow this node to be managed by telnet
VM addAdmin telnet 51

# creates the node (account = NODE1)
VM newNode NODE1 com.coralblocks.coralsequencer.node.SampleNode

# the lines below can also be executed manually by admin
NODE1 open
NODE1 activate
</pre>
<p>Add the lines above to a file <i>time.mq</i> and use the script <code>./bin/start.sh</code> to execute the DSL and start the node:</p>
<p><a href="http://www.coralblocks.com/wp-content/uploads/2016/02/Screen-Shot-2016-02-23-at-4.52.56-PM.png"><img src="http://www.coralblocks.com/wp-content/uploads/2016/02/Screen-Shot-2016-02-23-at-4.52.56-PM.png" alt="Screen Shot 2016-02-23 at 4.52.56 PM" width="1105" height="1049" class="alignnone size-full wp-image-1990" /></a></p>
<h3 class="coral">Managing the Node</h3>
<p>Because we configured the <i>telnet admin</i> in the DSL above, we can telnet to the admin port (i.e. 50000 + port id) to execute DSL commands on the node. For example, the last lines to open and activate the node can be executed manually through telnet, as the example below shows:</p>
<p><b>NOTE:</b> The highlighted lines below are the commands executed</p>
<pre class="brush: java; highlight: [8,21,25]; title: ; notranslate">
$ telnet localhost 50051
Trying ::1...
Connected to localhost (::1).
Escape character is '^]'.

Hi! What can I do for you? You can start by typing 'list'...

list NODE1

NODE1 open
NODE1 close
NODE1 setMessageReceiver
NODE1 setCommandSender
NODE1 activate
NODE1 sendCommand
NODE1 status

NODE1-CommandSender-255.255.255.255:60010
NODE1-MessageReceiver-0.0.0.0:60066

NODE1 open

NODE1 was opened!

NODE1 activate true

activate called!
</pre>
<p><br/></p>
<h3 class="coral">Conclusion</h3>
<p>Writing a Node with CoralSequencer is extremely easy. Moreover, you can configure and manage your nodes using CoralSequencer&#8217;s DSL.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/nodes-coralmq-article-series/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
