<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Coral Blocks &#187; CoralReactor</title>
	<atom:link href="https://www.coralblocks.com/index.php/category/coralreactor/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.coralblocks.com/index.php</link>
	<description>Building amazing software, one piece at a time.</description>
	<lastBuildDate>Sat, 25 Apr 2026 13:00:06 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.9.1</generator>
	<item>
		<title>The Simplicity of CoralReactor</title>
		<link>https://www.coralblocks.com/index.php/the-simplicity-of-coralreactor/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=the-simplicity-of-coralreactor</link>
		<comments>https://www.coralblocks.com/index.php/the-simplicity-of-coralreactor/#comments</comments>
		<pubDate>Wed, 16 Apr 2014 00:29:27 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[coralreactor]]></category>
		<category><![CDATA[low-latency]]></category>
		<category><![CDATA[reactor]]></category>
		<category><![CDATA[simplicity]]></category>
		<category><![CDATA[thread-safe]]></category>

		<guid isPermaLink="false">http://cb-blog.soliveirajr.com/?p=40</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>CoralReactor is a powerful, easy to use and ultra-low-latency Java library for network communication with zero garbage creation and minimal variance. Moreover, <strong>what stands out about CoralReactor is its simplicity</strong>. In this article we will demonstrate some examples of clients and servers to get you started with CoralReactor. <span id="more-40"></span></p>
<h2 class="coral">Discard Server</h2>
<p>A discard server accepts clients and drops any data received from them, without sending any message back. Below is the CoralReactor implementation:</p>
<pre class="brush: java; highlight: [9,20]; title: ; notranslate">
import java.nio.ByteBuffer;

import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.server.AbstractTcpServer;
import com.coralblocks.coralreactor.server.Server;
import com.coralblocks.coralreactor.util.Configuration;

public class DiscardServer extends AbstractTcpServer {
	
	public DiscardServer(NioReactor nio, int port) {
	    super(nio, port);
    }

	public DiscardServer(NioReactor nio, int port, Configuration config) {
	    super(nio, port, config);
    }
	
	@Override
    protected void handleBuffer(Client client, ByteBuffer buf) {
	    
		buf.position(buf.limit()); // "read" the buffer without actually reading it: just advance the position to the limit
    }

	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();		
		Server server = new DiscardServer(nio, 54321);
		server.open();
		nio.start();
	}
}
</pre>
<p>You can see on line 9 above that we are extending <code>AbstractTcpServer</code>. CoralReactor comes with a variety of base classes that you can use to easily implement all sorts of UDP and TCP clients and servers with minimal code and effort. The <code>AbstractTcpServer</code> class requires the implementation of only one method: <code>handleBuffer(Client client, ByteBuffer buf)</code>.</p>
<p>As we will see later, the purpose of the <code>handleBuffer</code> method is to parse a message from the byte buffer, but for our simple DISCARD server it just advances the byte buffer position, pretending to read it (line 22). Note that if you don&#8217;t <em>read</em> the byte buffer by advancing its position, bytes will accumulate, the buffer will eventually fill up and an exception will be raised. Bytes are allowed to accumulate because the buffer might contain a partial message that requires another reading cycle to be completed. When performing non-blocking I/O, bytes are read from the network as they become available, so it is possible that the byte buffer will contain partial messages. When that happens, you just return from <code>handleBuffer</code> without reading or processing the partial message. CoralReactor then compacts the buffer and another reading cycle begins to read more data from the network and complete the message.</p>
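<p>As an illustration, here is a minimal sketch, assuming a hypothetical protocol with fixed 8-byte messages, of a <code>handleBuffer</code> implementation that consumes only complete messages and leaves a partial one untouched for the next reading cycle:</p>
<pre class="brush: java; title: ; notranslate">
	@Override
    protected void handleBuffer(Client client, ByteBuffer buf) {

		while(buf.remaining() &gt;= 8) { // hypothetical fixed 8-byte messages
			long value = buf.getLong(); // consume exactly one complete message
			// ... handle the 8-byte message ...
		}
		// fewer than 8 bytes left means a partial message: return without touching it,
		// CoralReactor compacts the buffer and completes the message on the next cycle
    }
</pre>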
<p>Notice how easy it is to create and open the server in the <code>main</code> method. By default, the server binds to <code>0.0.0.0</code> but that can be changed by passing a <code>Configuration</code> to the constructor. For example, to bind the server to <code>192.168.1.20</code> instead, you would do:</p>
<pre class="brush: java; highlight: [5]; title: ; notranslate">

	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();	
		Configuration config = new MapConfiguration();
		config.overwriteDefault(&quot;bindAddress&quot;, &quot;192.168.1.20&quot;);	
		Server server = new DiscardServer(nio, 54321, config);
		server.open();
		nio.start();
	}
</pre>
<h2 class="coral">Print Server</h2>
<p>The DISCARD server is too boring because it does not do anything. Let&#8217;s change the <code>handleBuffer</code> method to print the data it receives to <code>stdout</code>:</p>
<pre class="brush: java; title: ; notranslate">
	@Override
    protected void handleBuffer(Client client, ByteBuffer buf) {
		
		while(buf.hasRemaining()) {
			char c = (char) buf.get();
			System.out.print(c);
		}
		System.out.flush();
    }
</pre>
<p>Now you can telnet to port 54321 and see something happening on the server side.</p>
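<p>For example, assuming the server is running on the local machine:</p>
<pre>
$ telnet localhost 54321
</pre>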
<h2 class="coral">Echo Server</h2>
<p>We can easily change the <code>handleBuffer</code> method to echo back everything it receives:</p>
<pre class="brush: java; title: ; notranslate">
	@Override
    protected void handleBuffer(Client client, ByteBuffer buf) {
		
		client.send(buf); // send = write + flush
    }
</pre>
<p>Notice that we are using the <code>send</code> method of the <code>Client</code> interface, which is equivalent to calling <code>client.write(buf)</code> and then <code>client.flush()</code>.</p>
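<p>In other words, the two lines below are equivalent to a single <code>client.send(buf)</code> call:</p>
<pre class="brush: java; title: ; notranslate">
	client.write(buf); // write without flushing
	client.flush(); // flush what was written
</pre>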
<h2 class="coral">Echo Line Server</h2>
<p>To make things more interesting, let&#8217;s implement an ECHO server that works with ASCII characters delimited by the newline character (&#8216;\n&#8217;); in other words, it considers each line received from a client to be a new message to be processed. For that we will inherit from <code>AbstractLineTcpServer</code> and override the <code>handleMessage</code> method:</p>
<pre class="brush: java; highlight: [10,19]; title: ; notranslate">
import java.nio.ByteBuffer;

import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.server.AbstractLineTcpServer;
import com.coralblocks.coralreactor.server.Server;
import com.coralblocks.coralbits.util.ByteBufferUtils;
import com.coralblocks.coralbits.util.ByteArrayUtils;

public class EchoLineServer extends AbstractLineTcpServer {
	
	private final byte[] message = new byte[32]; // pre-allocated so message bytes can be copied out without creating garbage
	
	public EchoLineServer(NioReactor nio, int port) {
	    super(nio, port);
    }

	@Override
    protected void handleMessage(Client client, ByteBuffer msg) {
		
		if (ByteBufferUtils.equals(msg, &quot;bye&quot;)) { // check if message == &quot;bye&quot;
			client.send(&quot;Goodbye!&quot;); // send &quot;Goodbye!&quot;
			client.close(); // adios
		} else {
			client.send(msg); // write and flush message
		}
    }
	
	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();
		Server server = new EchoLineServer(nio, 54321);
		server.open();
		nio.start();
	}
}
</pre>
<p>The first thing to notice is that now we are overriding the method <code>handleMessage</code> instead of <code>handleBuffer</code>. Why? That&#8217;s because the <code>AbstractLineTcpServer</code> class implements <code>handleBuffer</code> to parse lines from the byte buffer and considers them <em>protocol messages</em>. When <code>handleMessage</code> is called, you can be sure that its byte buffer contains a single message. Also, because the <code>handleMessage</code> method does not care how the message was parsed, the newline character is not included in the byte buffer representing the message. We will soon see an example of how to implement the <code>handleBuffer</code> method to parse protocol messages, but for now you can rely on the <code>AbstractLineTcpServer</code> class to do this job for you and provide you with a clear <em>line message</em>.</p>
<p>Now it gets interesting when you process the message inside the <code>handleMessage</code> method. As mentioned in the introduction, CoralReactor produces zero garbage, so why would you produce garbage yourself if you don&#8217;t have to? Real-time programming without generating garbage and GC overhead is out of the scope of this article, but we will stick to the best practices for our ECHOLINE server. Instead of creating a <code>String</code> from the message byte buffer, we compare its contents directly against the string literal using <code>ByteBufferUtils</code> (when you need to copy the bytes out first, the pre-allocated <code>message</code> byte array and <code>ByteArrayUtils</code> can be used instead). This is not just faster but it leaves zero garbage behind. Techniques like that (and other tricks) allow CoralReactor to be a ZERO garbage and super-fast library. You can refer to its <a href="/coralreactor.pdf" target="_blank">white paper</a> for more details about the zero garbage feature. For more information about <code>ByteBufferUtils</code>, <code>ByteArrayUtils</code> and other real-time utility classes offered by Coral Blocks, refer to <a href="/coralbits.pdf" target="_blank">CoralBits</a>.</p>
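<p>To illustrate the idea behind these zero-garbage utilities, here is a minimal plain-Java sketch of a garbage-free comparison (the <code>BYE</code> array and the <code>matches</code> method below are hypothetical helpers, not part of CoralBits):</p>
<pre class="brush: java; title: ; notranslate">
	private static final byte[] BYE = { 'b', 'y', 'e' }; // allocated only once

	private static boolean matches(ByteBuffer msg, byte[] expected) {
		if (msg.remaining() != expected.length) return false;
		for(int i = 0; i &lt; expected.length; i++) {
			// absolute get: no position change and no objects created
			if (msg.get(msg.position() + i) != expected[i]) return false;
		}
		return true;
	}
</pre>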
<p>Notice that when you send a message back to a client through the <code>send</code> method, you do not have to include the newline character. All the protocol-level work is being done for you by the <code>AbstractLineTcpServer</code> class.</p>
<h2 class="coral">Single Threaded</h2>
<p>You might be asking yourself whether the <code>message</code> byte array is safe to be shared among multiple clients, since there appears to be only one instance of it. The answer is a solid yes! CoralReactor, by design and on purpose, is single-threaded; in other words, all clients and all network I/O are handled inside the same super-fast, isolated (i.e. pinned), non-blocking reactor thread. This not only provides super-fast performance but also allows for much simpler code that does not have to worry about thread synchronization, lock contention, race conditions, deadlocks, thread starvation and the many other pitfalls of multithreaded programming.</p>
<h2 class="coral">Time Server</h2>
<p>To continue with our server examples, we will now implement a TIME server that sends an integer with the current time to a connecting client and closes the connection.</p>
<pre class="brush: java; highlight: [17]; title: ; notranslate">
import java.nio.ByteBuffer;

import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.server.AbstractTcpServer;
import com.coralblocks.coralreactor.server.Server;

public class TimeServer extends AbstractTcpServer {
	
	private final ByteBuffer outBuffer = ByteBuffer.allocate(4);

	public TimeServer(NioReactor nio, int port) {
	    super(nio, port);
    }

	@Override
    protected void handleConnectionOpened(Client client) {
		
		int time = (int) (System.currentTimeMillis() / 1000L + 2208988800L); // seconds since Jan 1st 1900 (2208988800 = seconds between 1900 and 1970)
		
		outBuffer.clear();
		outBuffer.putInt(time);
		outBuffer.flip();

		client.send(outBuffer); // write and flush
		client.close();
    }
	
	@Override
    protected void handleBuffer(Client client, ByteBuffer buf) {
		
		throw new IllegalStateException(&quot;This should never be called!&quot;);
    }
	
	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();		
		Server server = new TimeServer(nio, 54321);
		server.open();
		nio.start();
	}
}
</pre>
<p>This time we do all the work inside the <code>handleConnectionOpened</code> method, which is called when a new client connects to the server. Note that we still have to override the abstract method <code>handleBuffer</code>, but this method will never be called: because we are closing the client connection in <code>handleConnectionOpened</code> (line 26), the client will never have a chance to send anything. It connects, gets a time integer and the server hangs up on it. The base classes of CoralReactor provide all the callbacks you need to track state, such as the <code>handleConnectionOpened</code> method. The other <em>handle</em> methods are: <code>handleConnectionEstablished</code>, <code>handleConnectionTerminated</code>, <code>handleBatchProcessed</code>, <code>handleFlushed</code>, <code>handleReadTimeout</code>, <code>handleEventTimeout</code>, <code>handleCallback</code>, <code>handleSessionStarted</code>, <code>handleSessionEnded</code> and <code>handleReadException</code>.</p>
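<p>For example, a server could override one of these callbacks to react when a client goes away; a minimal sketch, assuming <code>handleConnectionTerminated</code> takes the same <code>Client</code> argument as the other server callbacks shown above (the exact signature is an assumption):</p>
<pre class="brush: java; title: ; notranslate">
	@Override
    protected void handleConnectionTerminated(Client client) { // assumed signature
		// a good place to release any per-client state held by the server
		System.out.println(&quot;Client disconnected: &quot; + client);
    }
</pre>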
<h2 class="coral">Time Client</h2>
<p>To test our TIME server, we can use the UNIX <code>rdate</code> command:</p>
<pre>
$ rdate -o &lt;port&gt; -p &lt;host&gt;
</pre>
<p>However, let&#8217;s take this opportunity to implement our first client. This will also give us the chance to revisit the <code>handleBuffer</code> method and parse a TIME protocol message.</p>
<pre class="brush: java; highlight: [8,19,23,43]; title: ; notranslate">
import java.nio.ByteBuffer;
import java.util.Date;

import com.coralblocks.coralreactor.client.AbstractTcpClient;
import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;

public class TimeClient extends AbstractTcpClient {
	
	private static final int MESSAGE_LENGTH = 4;

	public TimeClient(NioReactor nio, String host, int port) {
	    super(nio, host, port);
    }
	
	@Override
    protected void handleBuffer(ByteBuffer buf) {
		
		while(isConnectionOpen() &amp;&amp; buf.remaining() &gt;= MESSAGE_LENGTH) { // do we have a full msg?
			int endLimit = buf.limit();
			int msgLimit = buf.position() + MESSAGE_LENGTH;
			buf.limit(msgLimit); // parse/mark the message
			onMessage(buf); // here is a clear protocol message
			buf.limit(endLimit).position(msgLimit);
		}
    }
	
	@Override
	protected void handleMessage(ByteBuffer msg) {

		long t = (((long) msg.getInt()) &amp; 0xffffffffL); // unsigned int
		long time = (t - 2208988800L) * 1000L;
		// the garbage below can be easily avoided by using the class
		// com.coralblocks.coralreactor.util.CheapDateTimeFormatter
		// which returns a char array with the date and time
		Date d = new Date(time);
		System.out.println(d);
	}
	
	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();
		Client client = new TimeClient(nio, &quot;localhost&quot;, 54321);
		client.open();		
		nio.start();
	}
}
</pre>
<p>The goal of the <code>handleBuffer</code> method is to parse a protocol message and call <code>onMessage</code> with it. When it does that, the <code>handleMessage</code> method callback is triggered and the client has a chance to do whatever it wants with the received message. Some best practices about the implementation of <code>handleBuffer</code> are:</p>
<ul>
<li>If for any reason a message causes a client to disconnect, we want to exit the while loop on line 19. That&#8217;s why we check if the client is still connected on every iteration with the method <code>isConnectionOpen</code>.</li>
<li>It does not make sense to try to parse partial messages, so we exit the while loop on line 19 if the bytes remaining in the buffer are fewer than the message length.</li>
<li>We always save the <code>endLimit</code> and the <code>msgLimit</code> so the byte buffer is adjusted correctly for the next iteration no matter what the <code>handleMessage</code> callback does to the byte buffer pointers.</li>
<li>As mentioned before, the <code>onMessage</code> method must be called with a byte buffer containing exactly one clear protocol message.</li>
</ul>
<p>Now for the <code>handleMessage</code> method, we just get the integer from the byte buffer, do the math and print the date and time. You can do whatever you want with the byte buffer inside the <code>handleMessage</code> method because <code>handleBuffer</code> guarantees its integrity by saving its pointers.</p>
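<p>The same pattern extends to variable-length messages. Below is a minimal sketch, assuming a hypothetical protocol where every message starts with a 2-byte length header followed by the message body:</p>
<pre class="brush: java; title: ; notranslate">
	@Override
    protected void handleBuffer(ByteBuffer buf) {

		while(isConnectionOpen() &amp;&amp; buf.remaining() &gt;= 2) { // do we have at least the header?
			int bodyLength = buf.getShort(buf.position()) &amp; 0xFFFF; // peek the length, do not consume it
			if (buf.remaining() &lt; 2 + bodyLength) return; // partial message: wait for more bytes
			int endLimit = buf.limit();
			int msgLimit = buf.position() + 2 + bodyLength;
			buf.position(buf.position() + 2); // skip the length header
			buf.limit(msgLimit); // mark the message boundaries
			onMessage(buf); // here is a clear protocol message
			buf.limit(endLimit).position(msgLimit);
		}
    }
</pre>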
<p>Notice that coding a client is very similar to coding a server. One difference is that the <em>handle</em> callback methods do not have a <em>client</em> argument because they are called from the client itself; in other words, the client you&#8217;ll be using is the <code>this</code> reference. Another small difference is that when you create a client in the <code>main</code> method, you must pass the host or IP address to which it will be connecting in the constructor (line 43).</p>
<h2 class="coral">Time Beat Server</h2>
<p>You may have noticed that our <code>TimeClient</code> is ready to receive and process not just one TIME message but as many as the server wants to send. Therefore, let&#8217;s implement a TIMEBEAT server that sends the time every second to connected clients.</p>
<pre class="brush: java; highlight: [19,24,38]; title: ; notranslate">
import java.nio.ByteBuffer;

import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.server.AbstractTcpServer;
import com.coralblocks.coralreactor.server.Server;

public class TimeBeatServer extends AbstractTcpServer {
	
	private final static int BEAT_PERIOD = 1000; // 1s...
	
	private final ByteBuffer outBuffer = ByteBuffer.allocate(4);

	public TimeBeatServer(NioReactor nio, int port) {
	    super(nio, port);
    }
	
	@Override
	protected void handleConnectionOpened(Client client) {
		sendTime(client);
	}

    @Override
    protected void handleEventTimeout(Client client, long now, int timeout) {
    	sendTime(client);
    }
    
    private void sendTime(Client client) {
		
		int time = (int) (System.currentTimeMillis() / 1000L + 2208988800L);
		
		outBuffer.clear();
		outBuffer.putInt(time);
		outBuffer.flip();

		client.send(outBuffer); // write and flush
		
		client.setEventTimeout(BEAT_PERIOD); // send again in 1 sec
    }
    
	@Override
    protected void handleBuffer(Client client, ByteBuffer buf) {

		// client should not be sending anything
		buf.position(buf.limit()); // ignore!
    }
	
	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();
		Server server = new TimeBeatServer(nio, 54321);
		server.open();		
		nio.start();
	}
}
</pre>
<p>Now instead of closing the client in <code>handleConnectionOpened</code>, we send the time and schedule a timeout event one second later. When that second elapses, the method <code>handleEventTimeout</code> gets called, we send the time again and schedule another timeout event one second later. We keep doing that in a loop until the client disconnects. The client is still not supposed to be sending anything, so if it does we just ignore it inside the <code>handleBuffer</code> method, like we did for the DISCARD server.</p>
<p>As you can see, CoralReactor can schedule future events with millisecond precision that get triggered inside the reactor thread like any other I/O operation; in other words, it is fast, precise and thread-safe.</p>
<h2 class="coral">Conclusion</h2>
<p>In this article, we saw how to create clients and servers using CoralReactor. We learned how to parse protocol messages, schedule future events, handle callbacks and use other cool and simple tricks for coding your own ultra-fast and garbage-free clients and servers. CoralReactor comes with many base classes that you can use to implement your own protocols and, although it has many powerful features, its main goal is to be simple to use, with a straightforward API that shields the developer from the complexity commonly associated with the reactor pattern.</p>
<p><br/><br/></p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/the-simplicity-of-coralreactor/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CoralReactor Performance Numbers</title>
		<link>https://www.coralblocks.com/index.php/coralreactor-performance-numbers/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=coralreactor-performance-numbers</link>
		<comments>https://www.coralblocks.com/index.php/coralreactor-performance-numbers/#comments</comments>
		<pubDate>Fri, 18 Apr 2014 04:01:48 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[gc overhead]]></category>
		<category><![CDATA[kernel bypass]]></category>
		<category><![CDATA[latency]]></category>
		<category><![CDATA[loopback]]></category>
		<category><![CDATA[network infrastructure]]></category>
		<category><![CDATA[tcp]]></category>
		<category><![CDATA[udp]]></category>

		<guid isPermaLink="false">http://coralblocks.soliveirajr.com/index.php/?p=118</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In this article we present the CoralReactor UDP and TCP latency numbers when two separate JVMs exchange messages over loopback. <span id="more-118"></span></p>
<p>The machine used for the benchmarks below was an Intel i7 quad-core (4 x 3.50GHz) Ubuntu box overclocked to 4.50GHz.</p>
<p>The test consists of the following steps:</p>
<ul>
<li> a client running in a JVM sends a 256-byte message to an echo server running in another JVM</li>
<li> the first 8 bytes of the message are a long containing the timestamp in nanos of when the message was sent by the client (see the sketch after this list)</li>
<li> the echo server receives the message, reads the timestamp, reads the remaining 248 bytes and then calculates the elapsed time (now &#8211; timestamp)</li>
<li> the echo server stores the elapsed time and sends the message back to the client (i.e. the echo)</li>
<li> the client gets the echo and sends the next message so the echo server can make another latency measurement</li>
<li> the cycle repeats one million times, in other words, one million messages are sent</li>
<li> two passes: one to warm up with one million messages and another one to measure with one million messages</li>
<li> then the average and the percentiles are calculated and presented</li>
</ul>
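<p>As an illustration of the message layout described above, here is a minimal, self-contained sketch (the class name is hypothetical and this is not the actual benchmark code) of how the client stamps the first 8 bytes and how the echo server derives the elapsed time:</p>
<pre class="brush: java; title: ; notranslate">
import java.nio.ByteBuffer;

public class TimestampedMessageSketch {

	public static void main(String[] args) {

		// client side: a 256-byte message whose first 8 bytes are System.nanoTime()
		ByteBuffer msg = ByteBuffer.allocateDirect(256);
		msg.putLong(System.nanoTime());
		while(msg.hasRemaining()) msg.put((byte) 'x'); // filler for the remaining 248 bytes
		msg.flip();

		// server side (after the message arrives over loopback in the real test):
		// read the timestamp, read the remaining 248 bytes and compute the elapsed time
		long tsSent = msg.getLong();
		msg.position(msg.limit()); // consume the remaining bytes
		long elapsedNanos = System.nanoTime() - tsSent;
		System.out.println(&quot;one-way latency in nanos: &quot; + elapsedNanos);
	}
}
</pre>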
<p><strong>NOTE:</strong> Everyone&#8217;s network environment is different, and we usually have a hard time comparing over-the-wire benchmark numbers. To make this simple we present loopback numbers (i.e. client and server running on the same machine but in different JVMs) which are easy to compare and which weed out external factors, isolating the performance of the network code. To calculate total numbers you should add your typical over-the-wire network latency. A 256-byte packet traveling through 10 Gigabit Ethernet will take at least 382 nanoseconds to go from NIC to NIC (ignoring the switch hop). If your Ethernet is 1 Gigabit, then the latency is at least 3.82 micros on top of the CoralReactor numbers. Another factor is the network card latency. Going from JVM to kernel to NIC can be costly, and some good network cards optimize that by offering kernel bypass (e.g. OpenOnload from Solarflare).</p>
<h2 class="coral">UDP Latencies</h2>
<pre>
Messages: 1,000,000 (size 256 bytes)
Avg Time: <font color="blue"><b>1.747 micros</b></font> (one-way)
Min Time: 1.486 micros
Max Time: 11.117 micros
Garbage created: <font color="red"><b>ZERO</b></font>
75% = [avg: 1.696 micros, max: 1.832 micros]
90% = [avg: 1.724 micros, max: 1.899 micros]
99% = [avg: 1.742 micros, max: 1.979 micros]
99.9% = [avg: 1.745 micros, max: 2.784 micros]
99.99% = [avg: 1.746 micros, max: 5.232 micros]
99.999% = [avg: 1.747 micros, max: 6.531 micros]
</pre>
<h2 class="coral">TCP Latencies</h2>
<pre>
Messages: 1,000,000 (size 256 bytes)
Avg Time: <font color="blue"><b>2.15 micros</b></font> (one-way)
Min Time: 1.976 micros
Max Time: 64.432 micros
Garbage created: <font color="red"><b>ZERO</b></font>
75% = [avg: 2.12 micros, max: 2.17 micros]
90% = [avg: 2.131 micros, max: 2.204 micros]
99% = [avg: 2.142 micros, max: 2.679 micros]
99.9% = [avg: 2.147 micros, max: 3.022 micros]
99.99% = [avg: 2.149 micros, max: 5.604 micros]
99.999% = [avg: 2.149 micros, max: 7.072 micros]
</pre>
<h2 class="coral">Garbage Production and Garbage Collector Overhead</h2>
<p>Whether you send one, one million or one billion messages, no garbage is produced by CoralReactor and the Garbage Collector never kicks in.</p>
<p><br/><br/></p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/coralreactor-performance-numbers/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Tick-to-Trade Latency Numbers using CoralFIX and CoralReactor</title>
		<link>https://www.coralblocks.com/index.php/tick-to-trade-latency-numbers-using-coralfix-and-coralreactor/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=tick-to-trade-latency-numbers-using-coralfix-and-coralreactor</link>
		<comments>https://www.coralblocks.com/index.php/tick-to-trade-latency-numbers-using-coralfix-and-coralreactor/#comments</comments>
		<pubDate>Mon, 31 Aug 2015 03:59:39 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralFIX]]></category>
		<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[Other]]></category>
		<category><![CDATA[latency]]></category>
		<category><![CDATA[strategy]]></category>
		<category><![CDATA[tick]]></category>
		<category><![CDATA[tick-to-trade]]></category>
		<category><![CDATA[trade]]></category>
		<category><![CDATA[trading]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=1587</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In this article we use Wireshark and tcpdump to analyze the latency numbers of a test trading strategy that receives a market data tick through UDP using CoralReactor and places a trade order through FIX using CoralFIX. <span id="more-1587"></span></p>
<h3 class="coral">Test Details</h3>
<p>A pseudo market data generator sends UDP packets containing market data updates (i.e. ticks). Our test trading strategy receives these packets, parses them and places a trade order using the FIX protocol on a pseudo exchange that immediately executes the order. To warm up the test strategy, we initially send 1 million market data updates. Then we proceed to send one packet every 5 seconds. To measure the latency, we use tcpdump to record the UDP packet coming in and the FIX order going out. With the packet capture file from tcpdump we then use Wireshark to calculate the difference in the timestamps of the packet arriving and the FIX order leaving.</p>
<h3 class="coral">Results</h3>
<p>As you can see from the Wireshark screenshot below, the <font color="#008F63"><b>tick-to-trade latencies are around 8-9 microseconds</b></font>.</p>
<p><a href="/wp-content/uploads/2015/08/tick-to-trade2.png"><img src="/wp-content/uploads/2015/08/tick-to-trade2.png" alt="tick-to-trade" width="1268" height="1014" class="alignnone size-full wp-image-1589" /></a></p>
<h3 class="coral">Source Code</h3>
<p>Note that the source code below produces zero garbage for the GC.</p>
<pre class="brush: java; title: ; notranslate">
package com.coralblocks.coralfix.bench.trade;

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;

import com.coralblocks.coralbits.util.PriceUtils;
import com.coralblocks.coralfix.FixConstants;
import com.coralblocks.coralfix.FixMessage;
import com.coralblocks.coralfix.FixTags;
import com.coralblocks.coralfix.client.FixApplicationClient;
import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.client.ClientAdapter;
import com.coralblocks.coralreactor.client.receiver.ReceiverUdpClient;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.util.MapConfiguration;

public class TickToTrade extends ClientAdapter {
	
	private final NioReactor nio;
	private final FixApplicationClient fixGateway;
	private final byte[] symbol = new byte[8];
	private long orderIDs = 1;
	
	public TickToTrade(NioReactor nio, final Client marketDataFeed, FixApplicationClient fixGateway) {
		this.nio = nio;
		this.fixGateway = fixGateway;
		
		fixGateway.addListener(new ClientAdapter() {

			@Override
			public void onConnectionOpened(Client client) {
				// gateway is connected and ready to trade...
				marketDataFeed.open(); // open the feed...
				marketDataFeed.addListener(TickToTrade.this); // so that onMessage below is called
			}
		});
	}
	
	public void start() {
		fixGateway.open();
	}

	@Override
    public void onMessage(Client client, InetSocketAddress fromAddress, ByteBuffer msg) {
		
		// parse the market data quote
		long quoteId = msg.getLong();
		boolean isBid = msg.get() == 'B';
		msg.get(symbol);
		long size = msg.getInt();
		long price = msg.getLong();
		
		// hit the market data quote
		FixMessage outFixMsg = fixGateway.getOutFixMessage(FixConstants.MsgTypes.NewOrderSingle);
		outFixMsg.add(FixTags.ClOrdID, orderIDs++);
		outFixMsg.add(FixTags.OrigClOrdID, quoteId);
		outFixMsg.addTrimmed(FixTags.Symbol, symbol);
		outFixMsg.add(FixTags.Side, isBid ? &quot;2&quot; : &quot;1&quot;); // sell to the bid or buy from the offer
		outFixMsg.addTimestamp(FixTags.TransactTime, nio.currentTimeMillis());
		outFixMsg.add(FixTags.OrderQty, size);
		outFixMsg.add(FixTags.OrdType, &quot;2&quot;); // limit order
		outFixMsg.add(FixTags.Price, PriceUtils.toDouble(price));
		outFixMsg.add(FixTags.TimeInForce, &quot;3&quot;); // IoC
		fixGateway.send(outFixMsg);
    }

	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();
		
		Client marketDataFeed = new ReceiverUdpClient(nio, 44444);
		
		MapConfiguration config = new MapConfiguration();
		config.add(&quot;fixVersion&quot;, 44);
		config.add(&quot;senderComp&quot;, &quot;testClient&quot;);
		config.add(&quot;forceSeqReset&quot;, true);
		
		FixApplicationClient fixGateway = new FixApplicationClient(nio, args.length &gt; 0 ? args[0] : &quot;localhost&quot;, 55555, config);
		
		TickToTrade tickToTrade = new TickToTrade(nio, marketDataFeed, fixGateway);
		tickToTrade.start();
		
		nio.start();
	}
}
</pre>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/tick-to-trade-latency-numbers-using-coralfix-and-coralreactor/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CoralReactor vs Netty Performance Comparison</title>
		<link>https://www.coralblocks.com/index.php/coralreactor-vs-netty-performance-comparison/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=coralreactor-vs-netty-performance-comparison</link>
		<comments>https://www.coralblocks.com/index.php/coralreactor-vs-netty-performance-comparison/#comments</comments>
		<pubDate>Wed, 30 Apr 2014 21:12:48 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[benchmark]]></category>
		<category><![CDATA[client]]></category>
		<category><![CDATA[coralreactor]]></category>
		<category><![CDATA[latency]]></category>
		<category><![CDATA[netty]]></category>
		<category><![CDATA[nio]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[reactor]]></category>
		<category><![CDATA[server]]></category>
		<category><![CDATA[tcp]]></category>

		<guid isPermaLink="false">http://cb.soliveirajr.com/index.php/?p=278</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In this article we compare CoralReactor and Netty in terms of performance to show that CoralReactor is <strong>10 times faster</strong> and produces zero garbage. <span id="more-278"></span> (Note: If you are looking for an API comparison instead, you can refer to the article <a href="/index.php/2014/04/the-simplicity-of-coralreactor/" target="_blank">The Simplicity of CoralReactor</a> which presents the same examples from the <a href="http://netty.io/wiki/user-guide-for-5.x.html" target="_blank">Netty 5.x User Guide</a>)</p>
<h2 class="coral">The Test Details</h2>
<p>In our benchmark test we have two separate JVMs running on the same machine and communicating through TCP over loopback. The details are outlined below:</p>
<ul>
<li>The first JVM runs the client, the second JVM runs the server.</li>
<li>The client connects and sends a 256-byte message to the server.</li>
<li>The first 8 bytes of the message are the timestamp marked by the client when the message was sent.</li>
<li>The server receives the message, reads the timestamp, reads the remaining 248 bytes and calculates the latency from client to server (one-way latency).</li>
<li>The server then echoes back the message to the client.</li>
<li>The client receives the echo and sends the next message with a new timestamp.</li>
<li>To warm up, we send 1 million messages. Then we send another 1 million messages and benchmark the latencies.</li>
</ul>
<h2 class="coral">CoralReactor Results</h2>
<pre>
Messages: 1,000,000
Message Size: 256 bytes
Avg Time: <font color="blue"><b>2.061 micros</b></font>
Min Time: 1.931 micros
Max Time: 66.968 micros
Garbage creation: zero
75% = [avg: 2.041 micros, max: 2.078 micros]
90% = [avg: 2.049 micros, max: 2.106 micros]
99% = [avg: 2.056 micros, max: 2.173 micros]
99.9% = [avg: 2.058 micros, max: 2.758 micros]
99.99% = <font color="blue">[avg: 2.06 micros, max: 5.714 micros]</font>
99.999% = [avg: 2.06 micros, max: 6.683 micros]
</pre>
<h2 class="coral">Netty Results</h2>
<pre>
Messages: 1,000,000
Avg Time: <font color="blue"><b>21.167 micros</b></font>
Min Time: 5.529 micros
Max Time: <font color="red"><b>10.264 millis</b></font>
Garbage creation: <font color="red">the more messages you send the more garbage is created</font>
75% = [avg: 20.082 micros, max: 20.667 micros]
90% = [avg: 20.272 micros, max: 25.595 micros]
99% = [avg: 21.02 micros, max: 29.672 micros]
99.9% = [avg: 21.121 micros, max: 39.088 micros]
99.99% = <font color="blue">[avg: 21.141 micros, max: 55.728 micros]</font>
99.999% = [avg: 21.146 micros, max: 91.416 micros]
</pre>
<h2 class="coral">Conclusion</h2>
<ul>
<li>CoralReactor has an average latency 10 times smaller than Netty (2 micros vs. 21 micros).</li>
<li>CoralReactor produces zero garbage. Netty produces garbage in proportion to the number of messages sent.</li>
<li>CoralReactor has lower variance than Netty, with a max latency of 67 micros against 10 millis from Netty.</li>
<li>At the 99.99 percentile, CoralReactor shows an average latency of 2.06 micros with a max latency of 5.714 micros. Netty shows for the same percentile an average latency of 21.141 micros with a max latency of 55.728 micros.</li>
</ul>
<h2 class="coral">CoralReactor Code</h2>
<p>Client:</p>
<pre class="brush: java; title: ; notranslate">
package com.coralblocks.coralreactor.client.bench.oneway;

import static com.coralblocks.corallog.Log.*;

import java.nio.ByteBuffer;

import com.coralblocks.coralreactor.client.AbstractFixedSizeTcpClient;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralbits.util.SystemUtils;

public class BenchmarkTcpClient extends AbstractFixedSizeTcpClient {
	
	private int count = 0;
	private boolean warmingUp = false;
	private boolean benchmarking = false;
	private final ByteBuffer tsMsg; 
	private long tsSent;
	private final byte[] readArray;
	private final int messages;
	private final int msgSize;

	public BenchmarkTcpClient(NioReactor nio, String host, int port) {
		
		super(nio, host, port);
		
		this.msgSize = SystemUtils.getInt(&quot;msgSize&quot;, 256);
		this.messages = SystemUtils.getInt(&quot;messages&quot;, 1000000);
		
		this.tsMsg = ByteBuffer.allocateDirect(msgSize);
		this.readArray = new byte[msgSize];
		
		tsMsg.clear();
		for(int i = 0; i &lt; msgSize; i++) {
			tsMsg.put((byte) 'x');
		}
		tsMsg.flip();
	}
	
	@Override
	protected void handleConnectionOpened() {

		this.warmingUp = true;
		this.benchmarking = false;
		this.count = 0;

		sendMsg(-1); // very first message, so the other side knows we are starting...
	}
	
	@Override
	protected final int getMessageFixedSize() {
		return msgSize;
	}
	
	@Override
	public void handleMessage(ByteBuffer msg) {
		
		long tsReceived = msg.getLong();
		
		msg.get(readArray, 0, msg.remaining()); // read fully
			
		if (tsReceived != tsSent) {
			Error.log(name, &quot;Bad timestamp received:&quot;, &quot;tsSent=&quot;, tsSent, &quot;tsReceived=&quot;, tsReceived);
			close();
			return;
		}
		
		if (warmingUp) {

			if (++count == messages) { // done warming up...
				
				Info.log(name, &quot;Finished warming up!&quot;, &quot;messages=&quot;, count);
				
				this.warmingUp = false;
				this.benchmarking = true;
				this.count = 0;
				
				sendMsg(System.nanoTime()); // first testing message
				
			} else {

				sendMsg(0);
			}
				
		} else if (benchmarking) {
			
			if (++count == messages) {
			
				Info.log(name, &quot;Finished sending messages!&quot;, &quot;messages=&quot;, count);
				
				// send the last message to tell the server we are done...
				sendMsg(-2);
				close();
				
			} else {
				
				sendMsg(System.nanoTime());
			}
		}
	}
	
	private final void sendMsg(long value) {
		// add the timestamp in the first 8 bytes...
		tsMsg.position(0);
		tsMsg.putLong(value);
		tsMsg.position(0);
		send(tsMsg);
		tsSent = value; // save to check echo msg...
	}

	public static void main(String[] args) throws Exception {

		NioReactor nio = NioReactor.create();
		
		String destAddress = SystemUtils.getString(&quot;destAddress&quot;, &quot;localhost&quot;);
		int destPort = SystemUtils.getInt(&quot;destPort&quot;, 8080);
		
		final BenchmarkTcpClient client = new BenchmarkTcpClient(nio, destAddress, destPort);
		client.open();
		nio.start();
	}
}
</pre>
<p>Server:</p>
<pre class="brush: java; title: ; notranslate">
package com.coralblocks.coralreactor.server.bench;

import static com.coralblocks.corallog.Log.*;

import java.io.IOException;
import java.nio.ByteBuffer;

import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.server.AbstractFixedSizeTcpServer;
import com.coralblocks.coralreactor.server.Server;
import com.coralblocks.coralbits.util.SystemUtils;
import com.coralblocks.coralbits.bench.Benchmarker;

public class EchoTcpServer extends AbstractFixedSizeTcpServer {
	
	private byte[] readArray = new byte[1024 * 2];
	private final Benchmarker bench = Benchmarker.create();
	private final int msgSize;
	
	public EchoTcpServer(NioReactor nio, int port) throws IOException {
		
		super(nio, port);
		
		this.msgSize = SystemUtils.getInt(&quot;msgSize&quot;, 256);
	}
	
	@Override
	public final int getMessageFixedSize() {
		return msgSize;
	}
	
	@Override
	public void handleMessage(Client client, ByteBuffer msg) {

		int pos = msg.position();
		
		long tsReceived = msg.getLong();
		
		msg.get(readArray, 0, msgSize - 8);
		
		if (tsReceived &gt; 0) {
			bench.measure(System.nanoTime() - tsReceived);
		} else if (tsReceived == -1) {
			// first message
		} else if (tsReceived == -2) {
			// last message
			close();
			printResults();
			return;
		} else if (tsReceived &lt; 0) {
			Error.log(name, &quot;Received bad timestamp:&quot;, tsReceived);
			close();
			return;
		}
		
		msg.position(pos);
		echoBack(client, msg);
	}
	
	private final void echoBack(Client client, ByteBuffer buf) {
		
		int pos = buf.position();
		buf.position(pos + 8);
		buf.putLong(System.nanoTime());
		buf.position(pos);
		
		client.send(buf);
	}
	
	private void printResults() {
		StringBuilder results = new StringBuilder();
		results.append(&quot;results=&quot;);
		results.append(bench.results());
		System.out.println(results.toString());
	}
	
	public static void main(String[] args) throws Exception {

		NioReactor nio = NioReactor.create();
		
		int listeningPort = SystemUtils.getInt(&quot;serverPort&quot;, 8080);
		
		Server server = new EchoTcpServer(nio, listeningPort);
		server.open();
		nio.start();
	}
}
</pre>
<h2 class="coral">Netty Code</h2>
<p>Client:</p>
<pre class="brush: java; title: ; notranslate">
package com.coralblocks.nettybenchmarks.bench;

import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

import java.io.IOException;
import java.nio.ByteBuffer;

import com.coralblocks.nettybenchmarks.util.SystemUtils;

public class BenchmarkTcpClient extends ChannelHandlerAdapter {
	
	private int count = 0;
	private boolean warmingUp = false;
	private boolean benchmarking = false;
	private long tsSent;
	private final byte[] readArray;
	private final int messages;
	private final int msgSize;
	
	public BenchmarkTcpClient() throws IOException {
		super();

		this.msgSize = SystemUtils.getInt(&quot;msgSize&quot;, 256);
		this.messages = SystemUtils.getInt(&quot;messages&quot;, 1000000);
		this.readArray = new byte[msgSize];
	}
	
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Close the connection when an exception is raised.
        cause.printStackTrace();
        ctx.close();
    }
    
    @Override
    public void channelActive(final ChannelHandlerContext ctx) {
    	
		this.warmingUp = true;
		this.benchmarking = false;
		this.count = 0;

		sendMsg(-1, ctx); // very first message, so the other side knows we are starting...
    }
	
	@Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
		 
    	ByteBuf in = (ByteBuf) msg;
    	ByteBuffer bb = in.nioBuffer();

    	handleBuffer(ctx, bb, msg);
    	
    	in.release(); // netty uses reference count...
    }
	
    private void handleBuffer(ChannelHandlerContext ctx, ByteBuffer buf, Object msg) {
    	
		while(buf.remaining() &gt;= msgSize) {
			int pos = buf.position();
			int lim = buf.limit();
			buf.limit(pos + msgSize);
			handleMessage(ctx, buf, msg);
			buf.limit(lim).position(pos + msgSize);
		}
	}
    
	private void handleMessage(ChannelHandlerContext ctx, ByteBuffer buf, Object msg) {
		
		long tsReceived = buf.getLong();
		
		buf.get(readArray, 0, buf.remaining()); // read fully
			
		if (tsReceived != tsSent) {
			System.err.println(&quot;Bad timestamp received: tsSent=&quot; + tsSent + &quot; tsReceived=&quot; + tsReceived);
			ctx.close();
			return;
		}
		
		if (warmingUp) {

			if (++count == messages) { // done warming up...
				
				System.out.println(&quot;Finished warming up! messages=&quot; + count);
				
				this.warmingUp = false;
				this.benchmarking = true;
				this.count = 0;
				
				sendMsg(System.nanoTime(), ctx); // first testing message
				
			} else {

				sendMsg(0, ctx);
			}
				
		} else if (benchmarking) {
			
			if (++count == messages) {
			
				System.out.println(&quot;Finished sending messages! messages=&quot; + count);
				
				// send the last message to tell the server we are done...
				sendMsg(-2, ctx);
				ctx.close();
				
			} else {
				
				sendMsg(System.nanoTime(), ctx);
			}
		}
	}
	
	private final void sendMsg(long value, ChannelHandlerContext ctx) {
		
		ByteBuf tsMsg = ctx.alloc().directBuffer(msgSize);
		
		tsMsg.writeLong(value);
		
		for(int i = 0; i &lt; msgSize - 8; i++) {
			tsMsg.writeByte((byte) 'x');
		}
		
		ctx.writeAndFlush(tsMsg);
		
		tsSent = value; // save to check echo msg...
	}
	
    public static void main(String[] args) throws Exception {
 
    	String host = args[0];
        int port = Integer.parseInt(args[1]);
        EventLoopGroup workerGroup = new NioEventLoopGroup();

        try {
            Bootstrap b = new Bootstrap(); // (1)
            b.group(workerGroup); // (2)
            b.channel(NioSocketChannel.class); // (3)
            b.option(ChannelOption.SO_KEEPALIVE, true); // (4)
            b.handler(new ChannelInitializer&lt;SocketChannel&gt;() {
                @Override
                public void initChannel(SocketChannel ch) throws Exception {
                    ch.pipeline().addLast(new BenchmarkTcpClient());
                }
            });

            // Start the client.
            ChannelFuture f = b.connect(host, port).sync(); // (5)

            // Wait until the connection is closed.
            f.channel().closeFuture().sync();
        } finally {
            workerGroup.shutdownGracefully();
        }
    }
}
</pre>
<p>Server:</p>
<pre class="brush: java; title: ; notranslate">
package com.coralblocks.nettybenchmarks.bench;

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelHandlerAdapter;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

import java.io.IOException;
import java.nio.ByteBuffer;

import com.coralblocks.nettybenchmarks.util.Benchmarker;
import com.coralblocks.nettybenchmarks.util.SystemUtils;

public class EchoTcpServer extends ChannelHandlerAdapter {
	
	private byte[] readArray = new byte[1024 * 2];
	
	private final Benchmarker bench = Benchmarker.create();
	
	private final int msgSize;
	
	public EchoTcpServer() throws IOException {
		super();
		this.msgSize = SystemUtils.getInt(&quot;msgSize&quot;, 256);
	}
	
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        // Close the connection when an exception is raised.
        cause.printStackTrace();
        ctx.close();
    }
	
	 @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
		 
    	ByteBuf in = (ByteBuf) msg;
    	ByteBuffer bb = in.nioBuffer();

    	handleBuffer(ctx, bb, msg);
    }
	
    private void handleBuffer(ChannelHandlerContext ctx, ByteBuffer buf, Object msg) {
    	
		while(buf.remaining() &gt;= msgSize) {
			int pos = buf.position();
			int lim = buf.limit();
			buf.limit(pos + msgSize);
			handleMessage(ctx, buf, msg);
			buf.limit(lim).position(pos + msgSize);
		}
	}
	
	private void handleMessage(ChannelHandlerContext ctx, ByteBuffer buf, Object msg) {

		int pos = buf.position();
		
		long tsReceived = buf.getLong();
		
		buf.get(readArray, 0, buf.remaining());
		
		if (tsReceived &gt; 0) {
			bench.measure(System.nanoTime() - tsReceived);
		} else if (tsReceived == -1) {
			// first message
		} else if (tsReceived == -2) {
			// last message
			ctx.close();
			printResults();
			return;
		} else if (tsReceived &lt; 0) {
			System.err.println(&quot;Received bad timestamp: &quot; + tsReceived);
			ctx.close();
			return;
		}
		
		buf.position(pos);
		ctx.writeAndFlush(msg);
	}
	
	private void printResults() {
		StringBuilder results = new StringBuilder();
		results.append(&quot;results=&quot;);
		results.append(bench.results());
		System.out.println(results);
	}
	
	public static void main(String[] args) throws Exception {
		
        int port;
        if (args.length &gt; 0) {
            port = Integer.parseInt(args[0]);
        } else {
            port = 8080;
        }
		
		EventLoopGroup bossGroup = new NioEventLoopGroup();
		EventLoopGroup workerGroup = new NioEventLoopGroup();
		try {
			ServerBootstrap b = new ServerBootstrap();
			b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
			        .childHandler(new ChannelInitializer&lt;SocketChannel&gt;() {
				                @Override
				                public void initChannel(SocketChannel ch) throws Exception {
					                ch.pipeline().addLast(new EchoTcpServer());
				                }
			                }).option(ChannelOption.SO_BACKLOG, 128)
			        .childOption(ChannelOption.SO_KEEPALIVE, true);

			ChannelFuture f = b.bind(port).sync(); // (7)
			f.channel().closeFuture().sync();
		} finally {
			workerGroup.shutdownGracefully();
			bossGroup.shutdownGracefully();
		}
	}
}
</pre>
<p><br/><br/></p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/coralreactor-vs-netty-performance-comparison/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>CoralReactor vs Vert.x Performance Comparison</title>
		<link>https://www.coralblocks.com/index.php/coralreactor-vs-vert-x-performance-comparison/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=coralreactor-vs-vert-x-performance-comparison</link>
		<comments>https://www.coralblocks.com/index.php/coralreactor-vs-vert-x-performance-comparison/#comments</comments>
		<pubDate>Tue, 02 Feb 2016 22:35:04 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[comparison]]></category>
		<category><![CDATA[coralreactor]]></category>
		<category><![CDATA[latency]]></category>
		<category><![CDATA[performance]]></category>
		<category><![CDATA[plaintext]]></category>
		<category><![CDATA[throughput]]></category>
		<category><![CDATA[vert.x]]></category>
		<category><![CDATA[vertx]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=1872</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In this article we write two identical HTTP servers using CoralReactor and Vert.x and compare their throughput and latency for different numbers of simultaneous connections. <span id="more-1872"></span></p>
<h3 class="coral">Test Details</h3>
<style>
.li_details { margin: 0 0 17px 0; }
</style>
<ul style="padding: 12px 40px">
<li class="li_details">The HTTP server will accept a GET request to <code>/plaintext</code> and respond with a simple plaintext response:
<pre style="margin: 15px 0 0 0;">
GET /plaintext HTTP/1.1
User-Agent: CoralReactor
Host: www.coralblocks.com
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive
</pre>
<pre style="margin: 15px 0 0 0;">
HTTP/1.1 200 OK
Content-type: text/plain
Server: CoralReactor
Date: Tue, 2 Feb 2016 14:25:28 America/Chicago
Content-Length: 13

Hello, world!
</pre>
</li>
<li class="li_details">The connections are <code>keep-alive</code> so multiple requests are made through the same connection.</li>
<li class="li_details">Both CoralReactor and Vert.x HTTP server will handle all request from all connections on the same thread pinned to an isolated CPU core.</li>
<li class="li_details">The client begins by opening <code>X</code> http connections to the server. Each connection will make <code>Y</code> requests to the server. A http connection only proceeds to send the next request when it receives the response for the previous request from the server. After a connection receives <code>Y</code> responses it stops sending requests.</li>
<li class="li_details">The clients handles all HTTP connection inside the same reactor thread pinned to an isolated CPU core different than the one being used by the server.</li>
<li class="li_details">The client measures the latency of each individual request up until the response is received from the server (<i>ping-pong</i>). In other words, the start time is right before the request is sent and the stop time is right after the response is received.</li>
<li class="li_details">The client measures the total time it takes to send and receive the response for all messages. That way, the throughput can be calculated in messages per second. (<b>Note:</b> This is not really a true throughput number because we wait for the response from the previous message to send the next one)</li>
<li class="li_details">The client first warms up with <code>Z</code> requests before it starts taking measurements.</li>
<li class="li_details">Both client and server were run on the same machine, connected through loopback/localhost (127.0.0.1).</li>
<li>Both the CoralReactor and Vert.x HTTP servers were run with the <code>-verbose:gc</code> flag to check for garbage creation.</li>
</ul>
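<p>To make the measurement method concrete, here is a minimal, self-contained sketch (the class name and the stubbed methods are hypothetical, not the actual benchmark client) of how the per-request latency and the requests-per-second figure are derived:</p>
<pre class="brush: java; title: ; notranslate">
public class ThroughputSketch {

	// hypothetical stand-ins for the real HTTP client calls
	static void sendRequest() { /* write the GET /plaintext request */ }
	static void waitForResponse() { /* block until the 200 OK response arrives */ }

	public static void main(String[] args) {

		int totalRequests = 1000; // the real test uses millions of requests
		long[] latencies = new long[totalRequests];

		long totalStart = System.nanoTime();
		for(int i = 0; i &lt; totalRequests; i++) {
			long start = System.nanoTime(); // right before the request is sent
			sendRequest();
			waitForResponse(); // ping-pong: next request only after the response
			latencies[i] = System.nanoTime() - start; // feeds the avg / percentile numbers
		}
		long totalNanos = System.nanoTime() - totalStart;

		double requestsPerSecond = totalRequests * 1_000_000_000.0 / totalNanos;
		System.out.println(&quot;Requests per second: &quot; + requestsPerSecond);
	}
}
</pre>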
<h3 class="coral">CoralReactor Http Server Code</h3>
<p>You can <a href="/index.php/coralreactor-http-server-for-vert-x-comparison/" target="_blank">click here</a> to check the CoralReactor code for this simple plaintext http server.<br />
<br/></p>
<h3 class="coral">Vert.x Http Server Code</h3>
<p>The Vert.x code used is exactly the same as the one used for the <a href="https://www.techempower.com/benchmarks/#section=data-r8&#038;hw=i7&#038;test=plaintext" target="_blank">TechEmpower benchmarks</a>. You can check the source on github by <a href="https://github.com/TechEmpower/FrameworkBenchmarks/blob/master/frameworks/Java/vertx/src/main/java/vertx/WebServer.java" target="_blank">clicking here</a>. The only difference is that we made the Vert.x server single threaded by doing:</p>
<pre class="brush: java; title: ; notranslate">
Vertx.vertx(new VertxOptions().setEventLoopPoolSize(1));
</pre>
<p>We were then able to pin the <code>vert.x-eventloop-thread-0</code> to an isolated CPU core.<br />
<br/></p>
<h3 class="coral">Http Client Code</h3>
<p>You can <a href="/index.php/coralreactor-vs-vert-x-comparison-http-client-code/" target="_blank">click here</a> to check the client code used to perform the benchmark. As stated above, all connections and requests were handled by the same reactor thread pinned to an isolated CPU core.<br />
<br/></p>
<h3 class="coral">Results</h3>
<p><b>Note:</b> <a href="/index.php/general-inquiries" target="_blank">Contact us</a> if you want to run the benchmarks on your own environment.<br/><br />
Number of connections: 1<br />
Number of requests per connection: 10 million<br />
Number of requests to warmup: 1 million</p>
<p><u><b>CoralReactor:</b></u></p>
<pre>
Requests per second: 184,584

Iterations: 9,000,000 | Avg Time: 5.37 micros | Min Time: 5.004 micros | Max Time: 67.513 micros | 75% = [avg: 5.253 micros, max: 5.356 micros] | 90% = [avg: 5.303 micros, max: 5.83 micros] | 99% = [avg: 5.355 micros, max: 5.969 micros] | 99.9% = [avg: 5.365 micros, max: 8.165 micros] | 99.99% = [avg: 5.368 micros, max: 10.601 micros] | 99.999% = [avg: 5.369 micros, max: 19.984 micros]

Zero garbage created by the server
</pre>
<p><u><b>Vert.x:</b></u></p>
<pre>
Requests per second: 91,717

Iterations: 9,000,000 | Avg Time: 10.857 micros | Min Time: 8.747 micros | Max Time: 9.758 millis | 75% = [avg: 10.745 micros, max: 10.851 micros] | 90% = [avg: 10.768 micros, max: 10.94 micros] | 99% = [avg: 10.82 micros, max: 12.116 micros] | 99.9% = [avg: 10.839 micros, max: 15.402 micros] | 99.99% = [avg: 10.845 micros, max: 19.014 micros] | 99.999% = [avg: 10.846 micros, max: 120.687 micros]

Server produces a lot of garbage (GC kicks in 46 times)
</pre>
<p><br/><br />
Number of connections: 10<br />
Number of requests per connection: 1 million<br />
Number of requests to warmup: 1 million</p>
<p><u><b>CoralReactor:</b></u></p>
<pre>
Requests per second: 278,086

Iterations: 9,000,000 | Avg Time: 34.934 micros | Min Time: 6.024 micros | Max Time: 791.866 micros | 75% = [avg: 31.959 micros, max: 35.144 micros] | 90% = [avg: 32.832 micros, max: 44.813 micros] | 99% = [avg: 34.603 micros, max: 66.406 micros] | 99.9% = [avg: 34.894 micros, max: 67.513 micros] | 99.99% = [avg: 34.925 micros, max: 81.037 micros] | 99.999% = [avg: 34.932 micros, max: 145.331 micros]

Zero garbage created by the server
</pre>
<p><u><b>Vert.x:</b></u></p>
<pre>
Requests per second: 134,771

Iterations: 9,000,000 | Avg Time: 69.822 micros | Min Time: 9.207 micros | Max Time: 23.137 millis | 75% = [avg: 61.641 micros, max: 66.104 micros] | 90% = [avg: 62.979 micros, max: 127.968 micros] | 99% = [avg: 69.072 micros, max: 131.862 micros] | 99.9% = [avg: 69.661 micros, max: 142.079 micros] | 99.99% = [avg: 69.73 micros, max: 184.135 micros] | 99.999% = [avg: 69.774 micros, max: 1.814 millis]

Server produces a lot of garbage (GC kicks in 46 times)
</pre>
<p><br/><br />
Number of connections: 100<br />
Number of requests per connection: 100,000<br />
Number of requests to warmup: 1 million</p>
<p><u><b>CoralReactor:</b></u></p>
<pre>
Requests per second: 271,462

Iterations: 9,000,000 | Avg Time: 365.63 micros | Min Time: 35.111 micros | Max Time: 1.351 millis | 75% = [avg: 302.943 micros, max: 526.011 micros] | 90% = [avg: 342.4 micros, max: 554.915 micros] | 99% = [avg: 362.965 micros, max: 595.942 micros] | 99.9% = [avg: 365.226 micros, max: 730.979 micros] | 99.99% = [avg: 365.584 micros, max: 789.276 micros] | 99.999% = [avg: 365.623 micros, max: 911.946 micros]

Zero garbage created by the server
</pre>
<p><u><b>Vert.x:</b></u></p>
<pre>
Requests per second: 145,614

Iterations: 9,000,000 | Avg Time: 660.654 micros | Min Time: 26.134 micros | Max Time: 24.131 millis | 75% = [avg: 623.152 micros, max: 679.096 micros] | 90% = [avg: 633.264 micros, max: 689.056 micros] | 99% = [avg: 652.877 micros, max: 1.327 millis] | 99.9% = [avg: 659.032 micros, max: 1.356 millis] | 99.99% = [avg: 659.922 micros, max: 3.156 millis] | 99.999% = [avg: 660.426 micros, max: 23.336 millis]

Server produces a lot of garbage (GC kicks in 47 times)
</pre>
<p><br/><br />
Number of connections: 1,000<br />
Number of requests per connection: 10,000<br />
Number of requests to warmup: 1 million</p>
<p><u><b>CoralReactor:</b></u></p>
<pre>
Requests per second: 209,370

Iterations: 9,000,000 | Avg Time: 4.24 millis | Min Time: 153.437 micros | Max Time: 161.154 millis | 75% = [avg: 3.382 millis, max: 6.224 millis] | 90% = [avg: 3.874 millis, max: 6.436 millis] | 99% = [avg: 4.18 millis, max: 9.477 millis] | 99.9% = [avg: 4.229 millis, max: 9.784 millis] | 99.99% = [avg: 4.234 millis, max: 10.352 millis] | 99.999% = [avg: 4.239 millis, max: 159.802 millis]

Zero garbage created by the server
</pre>
<p><u><b>Vert.x:</b></u></p>
<pre>
Requests per second: 115,511

Iterations: 9,000,000 | Avg Time: 8.465 millis | Min Time: 169.897 micros | Max Time: 33.606 millis | 75% = [avg: 8.148 millis, max: 8.526 millis] | 90% = [avg: 8.217 millis, max: 8.646 millis] | 99% = [avg: 8.377 millis, max: 16.817 millis] | 99.9% = [avg: 8.454 millis, max: 17.358 millis] | 99.99% = [avg: 8.464 millis, max: 24.52 millis] | 99.999% = [avg: 8.465 millis, max: 28.741 millis]

Server produces a lot of garbage (GC kicks in 47 times)
</pre>
<p><br/><br />
Number of connections: 10,000<br />
Number of requests per connection: 1,000<br />
Number of requests to warmup: 1 million</p>
<p><u><b>CoralReactor:</b></u></p>
<pre>
Requests per second: 150,386

Iterations: 9,000,000 | Avg Time: 60.671 millis | Min Time: 551.754 micros | Max Time: 509.849 millis | 75% = [avg: 52.618 millis, max: 60.087 millis] | 90% = [avg: 54.554 millis, max: 79.888 millis] | 99% = [avg: 59.577 millis, max: 118.592 millis] | 99.9% = [avg: 60.277 millis, max: 448.415 millis] | 99.99% = [avg: 60.628 millis, max: 452.036 millis] | 99.999% = [avg: 60.667 millis, max: 509.795 millis]

Zero garbage created by the server
</pre>
<p><u><b>Vert.x:</b></u></p>
<pre>
Requests per second: 111,005

Iterations: 9,000,000 | Avg Time: 89.484 millis | Min Time: 8.177 millis | Max Time: 512.795 millis | 75% = [avg: 83.511 millis, max: 93.813 millis] | 90% = [avg: 85.537 millis, max: 98.795 millis] | 99% = [avg: 88.308 millis, max: 160.282 millis] | 99.9% = [avg: 89.118 millis, max: 427.392 millis] | 99.99% = [avg: 89.443 millis, max: 489.883 millis] | 99.999% = [avg: 89.479 millis, max: 512.751 millis]

Server produces a lot of garbage (GC kicks in 47 times)
</pre>
<h3 class="coral">Conclusions</h3>
<p>For a ping-pong HTTP test, CoralReactor is approximately twice as fast as Vert.x without producing any garbage.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/coralreactor-vs-vert-x-performance-comparison/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Inter-thread communication within CoralReactor</title>
		<link>https://www.coralblocks.com/index.php/inter-thread-communication-within-coralreactor-reactor-callbacks/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=inter-thread-communication-within-coralreactor-reactor-callbacks</link>
		<comments>https://www.coralblocks.com/index.php/inter-thread-communication-within-coralreactor-reactor-callbacks/#comments</comments>
		<pubDate>Fri, 24 Apr 2015 01:47:43 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[callback]]></category>
		<category><![CDATA[coralqueue]]></category>
		<category><![CDATA[inter-thread]]></category>
		<category><![CDATA[multithread]]></category>
		<category><![CDATA[multithreaded]]></category>
		<category><![CDATA[multithreading]]></category>
		<category><![CDATA[thread]]></category>
		<category><![CDATA[threads]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=1328</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>CoralReactor was built on purpose, from the ground up, to be single-threaded. That means that no other thread besides the reactor thread should be executing any code or accessing any data belonging to servers and clients. This not only provides super-fast performance but also allows for much simpler code that does not have to worry about thread synchronization, lock contention, race conditions, deadlocks, thread starvation and many other pitfalls of multithreaded programming. However, there are common scenarios where other threads must interact with the reactor thread. In this article, we explore how this can be achieved while preserving the single-threaded principle, avoiding synchronization or locks, and generating zero garbage.<span id="more-1328"></span></p>
<p><br/></p>
<h3 class="coral">The Scenario</h3>
<p>Below we list the source code of a simple reactor client that increments a counter based on messages it receives from a server.</p>
<pre class="brush: java; highlight: [38,43]; title: ; notranslate">
package com.coralblocks.coralreactor.client.callback;

import static com.coralblocks.corallog.Log.*;

import java.nio.ByteBuffer;

import com.coralblocks.coralreactor.client.AbstractLineTcpClient;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.util.Configuration;
import com.coralblocks.coralreactor.util.MapConfiguration;

public class CounterClient extends AbstractLineTcpClient {

	private long total;
	
	public CounterClient(NioReactor nio, String host, int port, Configuration config) {
	    super(nio, host, port, config);
    }
	
	@Override
	protected void handleOpened() {
		total = 0;
	}
	
	@Override
	protected void handleMessage(ByteBuffer msg) {
		
		if (!msg.hasRemaining()) return;
		
		char c = (char) msg.get();
		
		if (c == '+') increment();
		if (c == '-') decrement();
	}
	
	public void increment() {
		total++;
		Info.log(&quot;Total was incremented:&quot;, total, &quot;thread=&quot;, Thread.currentThread().getName());
	}
	
	public void decrement() {
		total--;
		Info.log(&quot;Total was decremented:&quot;, total, &quot;thread=&quot;, Thread.currentThread().getName());
	}
	
	public static void main(String[] args) throws InterruptedException {
		
		NioReactor nio = NioReactor.create();
		
		MapConfiguration config = new MapConfiguration();
		
		final CounterClient client = new CounterClient(nio, &quot;localhost&quot;, 45151, config);
		client.open();
		
		nio.start();
	}
}
</pre>
<p>As you can see, when the client gets <code>'+'</code> from the server the counter is incremented. When it gets <code>'-'</code> the counter is decremented. You can easily simulate this server using <i>netcat</i>, as the screenshot below shows:</p>
<p><a href="http://www.coralblocks.com/wp-content/uploads/2015/04/Screen-Shot-2015-04-23-at-5.29.21-PM.png"><img src="http://www.coralblocks.com/wp-content/uploads/2015/04/Screen-Shot-2015-04-23-at-5.29.21-PM.png" alt="Screen Shot 2015-04-23 at 5.29.21 PM" width="772" height="403" class="alignnone size-full wp-image-1333" /></a></p>
<p>After typing some pluses and minuses on the server, you can see the following log messages in the client console:</p>
<pre class="brush: java; title: ; notranslate">
17:28:47.517089-INFO CounterClient-localhost:45151 Client opened! sequence=1 session=null
17:28:47.566073-INFO NioReactor Reactor started! type=OptimumNioReactor impl=KQueueSelectorImpl
17:28:47.567366-INFO CounterClient-localhost:45151 Connection established!
17:28:47.567400-INFO CounterClient-localhost:45151 Connection opened!
17:28:53.740893-INFO Total was incremented: 1 thread=NioReactor
17:28:55.044887-INFO Total was decremented: 0 thread=NioReactor
17:28:56.169573-INFO Total was incremented: 1 thread=NioReactor
17:28:56.946586-INFO Total was incremented: 2 thread=NioReactor
17:28:58.161256-INFO Total was decremented: 1 thread=NioReactor
17:28:59.229615-INFO Total was decremented: 0 thread=NioReactor
</pre>
<p>So far so good. Note that each log message also prints the name of the thread performing the increment/decrement on the counter, in this case the NioReactor thread.<br />
<br/></p>
<h3 class="coral">Incrementing from Another Thread</h3>
<p>Now, suppose you need another thread, besides the reactor thread, to increment and decrement the counter. A common scenario is when actions originate from a GUI running on a separate thread. For example, when a user clicks a button, the action must be communicated to the reactor thread. Below, we modify our <code>main</code> method to simulate another thread interacting with the counter by incrementing and decrementing it:</p>
<pre class="brush: java; highlight: [18,19]; title: ; notranslate">
	public static void main(String[] args) throws InterruptedException {
		
		NioReactor nio = NioReactor.create();
		
		MapConfiguration config = new MapConfiguration();
		
		final CounterClient client = new CounterClient(nio, &quot;localhost&quot;, 45151, config);
		client.open();
		
		nio.start();
		
		for(int i = 0; i &lt; 100; i++) {
			
			Thread.sleep(2000);
			
			final boolean check = (i % 3 == 0);
			
			if (check) client.decrement();
			else client.increment();
		}
	}
</pre>
<p>When we run this client and type some pluses and minuses on the server, we get the following log output:</p>
<pre class="brush: java; highlight: [6,8]; title: ; notranslate">
17:45:38.069983-INFO CounterClient-localhost:45151 Client opened! sequence=1 session=null
17:45:38.126212-INFO NioReactor Reactor started! type=OptimumNioReactor impl=KQueueSelectorImpl
17:45:38.127543-INFO CounterClient-localhost:45151 Connection established!
17:45:38.127578-INFO CounterClient-localhost:45151 Connection opened!
17:45:39.686658-INFO Total was incremented: 1 thread=NioReactor
17:45:40.127196-INFO Total was decremented: 0 thread=main
17:45:41.035442-INFO Total was decremented: -1 thread=NioReactor
17:45:42.128232-INFO Total was incremented: 0 thread=main
17:45:42.408007-INFO Total was decremented: -1 thread=NioReactor
</pre>
<p>Note that <strong>we have just broken the single-threaded principle</strong> of CoralReactor and we now have two threads (main and NioReactor) calling the same piece of code and accessing the same variable. From a multithreading point of view, one might be tempted to <i>fix</i> this problem using the <code>synchronized</code> keyword, as below:</p>
<pre class="brush: java; highlight: [1,6]; title: ; notranslate">
	public synchronized void increment() {
		total++;
		Info.log(&quot;Total was incremented:&quot;, total, &quot;thread=&quot;, Thread.currentThread().getName());
	}
	
	public synchronized void decrement() {
		total--;
		Info.log(&quot;Total was decremented:&quot;, total, &quot;thread=&quot;, Thread.currentThread().getName());
	}
</pre>
<p><font color="red"><strong>Please don&#8217;t.</strong></font> That will introduce lock-contention on the critical reactor thread and attest your departure from the single-threaded design principle. Fortunately CoralReactor can easily restore the single-threaded principle through the use of <i>callbacks</i>.</p>
<p><br/></p>
<h3 class="coral">Synchronous (blocking) Callbacks</h3>
<p>Instead of having the external thread call code that belongs to the reactor thread, you notify the reactor thread that some code execution is pending by passing it a <code>CallbackListener</code>. The reactor thread will then <i>call you back</i> by executing the provided callback listener. See the example below:</p>
<pre class="brush: java; highlight: [29,37]; title: ; notranslate">
public static class CounterCallback implements CallbackListener {

	private final CounterClient client;
	public volatile boolean check;

	public CounterCallback(CounterClient client) {
		this.client = client;
	}

	@Override
	public void onCallback(long now) {
		// this will be executed by the reactor thread
		if (check) client.decrement();
		else client.increment();
	}
}

public static void main(String[] args) throws InterruptedException {
     
    NioReactor nio = NioReactor.create();
     
    MapConfiguration config = new MapConfiguration();
     
    final CounterClient client = new CounterClient(nio, &quot;localhost&quot;, 45151, config);
    client.open();
     
    nio.start();

    CounterCallback callback = new CounterCallback(client); // re-using the same instance
     
    for(int i = 0; i &lt; 100; i++) {
         
        Thread.sleep(2000);
         
        callback.check = (i % 3 == 0);
         
        nio.call(callback); // synchronous call =&gt; will block until reactor executes the callback
    }
}
</pre>
<p>Now when we run our client we don&#8217;t see the <i>main thread</i> executing the client code anymore, just the reactor thread. <strong>We have successfully restored the single-threaded design principle.</strong> Also note that we are not generating any garbage because we are re-using the same instance of our <code>CounterCallback</code> object over and over again. That&#8217;s possible because our <code>nio.call</code> method is synchronous, in other words, it will block the main thread until the reactor thread is able to execute the callback. As we&#8217;ll see below, a garbage-free approach for an asynchronous (non-blocking) call is a bit trickier.</p>
<p>
<strong><font color="blue">NOTE:</font></strong> Multiple threads can call <code>nio.call</code> without a problem.
</p>
<p><br/></p>
<h3 class="coral">Asynchronous (non-blocking) Callbacks</h3>
<p>You can also push callback listeners to the reactor thread without waiting for their execution: the method <code>nio.callAsync</code> pushes the callback listener to the reactor thread and returns immediately to the main thread. Internally, the callback listener goes into a concurrent queue and is executed by the reactor thread as soon as possible, in other words, it is executed <i>asynchronously</i> by the reactor thread. See the example below:</p>
<pre class="brush: java; highlight: [16,20]; title: ; notranslate">
public static void main(String[] args) throws InterruptedException {
     
    NioReactor nio = NioReactor.create();
     
    MapConfiguration config = new MapConfiguration();
     
    final CounterClient client = new CounterClient(nio, &quot;localhost&quot;, 45151, config);
    client.open();
     
    nio.start();

    for(int i = 0; i &lt; 100; i++) {
         
        Thread.sleep(2000);

        CounterCallback callback = new CounterCallback(client); // garbage created here
         
        callback.check = (i % 3 == 0);
         
        nio.callAsync(callback); // asynchronous call =&gt; will return immediately
    }
}
</pre>
<p>Now, because we don&#8217;t block anymore, we must pass a different instance of our callback listener each time we call <code>nio.callAsync</code>. This will create garbage for the GC as these instances will later be discarded by the reactor thread after they get executed. To fix this garbage leak, we can use an internal garbage-free queue offered by CoralReactor.</p>
<p>
<strong><font color="blue">NOTE:</font></strong> Multiple threads can call <code>nio.callAsync</code> without a problem.
</p>
<p><br/></p>
<h3 class="coral">Asynchronous Callbacks without Garbage</h3>
<p>CoralReactor provides a lock-free, ultra-fast, and garbage-free internal queue for handling callbacks. You can create multiple underlying queues, each dedicated to a specific callback listener. This approach eliminates garbage generation and minimizes inter-thread communication latency. Below is an example:</p>
<pre class="brush: java; highlight: [17,25,29]; title: ; notranslate">
public static void main(String[] args) throws InterruptedException {
     
    NioReactor nio = NioReactor.create();
     
    MapConfiguration config = new MapConfiguration();
     
    final CounterClient client = new CounterClient(nio, &quot;localhost&quot;, 45151, config);
    client.open();

    Builder&lt;CounterCallback&gt; builder = new Builder&lt;CounterCallback&gt;() {
        @Override
        public CounterCallback newInstance() {
            return new CounterCallback(client);
        }
    };

    nio.initCallbackQueue(CounterCallback.class, builder); // init internal queue

    nio.start(); // only start reactor after initializing queues

    for(int i = 0; i &lt; 100; i++) {
         
        Thread.sleep(2000);

        CounterCallback callback = nio.nextCallback(CounterCallback.class); // get from queue
         
        callback.check = (i % 3 == 0);
         
        nio.flushCallbacks(CounterCallback.class); // flush to consumer
    }
}
</pre>
<p>Note that you must initialize the queue before the reactor is started. Then all you have to do is use your callback listener class to get a callback object and dispatch it to the reactor thread for execution. For more information about lock-free and garbage-free queues for inter-thread communication, you can check our open-source <a href="https://github.com/coralblocks/CoralQueue" target="_blank">CoralQueue</a> project.</p>
<p>
<strong><font color="red">NOTE:</font></strong> Multiple threads must not call <code>nio.nextCallback</code> and <code>nio.flushCallbacks</code>. To support multiple threads you should use a dynamic multiplexer instead of a queue. See example below:
</p>
<pre class="brush: java; highlight: [17,25,29]; title: ; notranslate">
public static void main(String[] args) throws InterruptedException {
     
    NioReactor nio = NioReactor.create();
     
    MapConfiguration config = new MapConfiguration();
     
    final CounterClient client = new CounterClient(nio, &quot;localhost&quot;, 45151, config);
    client.open();

    Builder&lt;CounterCallback&gt; builder = new Builder&lt;CounterCallback&gt;() {
        @Override
        public CounterCallback newInstance() {
            return new CounterCallback(client);
        }
    };

    nio.initCallbackDynamicMux(CounterCallback.class, builder); // init internal dynamic multiplexer

    nio.start(); // only start reactor after initializing queues

    for(int i = 0; i &lt; 100; i++) {
         
        Thread.sleep(2000);

        CounterCallback callback = nio.nextDynamicMuxCallback(CounterCallback.class); // get from the dynamic mux
         
        callback.check = (i % 3 == 0);
         
        nio.flushDynamicMuxCallbacks(CounterCallback.class); // flush to consumer
    }
}
</pre>
<p><br/></p>
<h3 class="coral">Conclusion</h3>
<p>CoralReactor was designed from the ground up to be strictly single-threaded. This means multiple threads <strong>must never</strong> share state with the reactor thread, as doing so would lead to unpredictable errors caused by race conditions. When inter-thread communication is necessary, callbacks must be used to ensure that only the reactor thread executes the relevant code. CoralReactor supports both synchronous (blocking) and asynchronous (non-blocking) callbacks from the reactor thread, without generating any garbage.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/inter-thread-communication-within-coralreactor-reactor-callbacks/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>SSL Support in CoralReactor through SSLSocketChannel</title>
		<link>https://www.coralblocks.com/index.php/ssl-support-in-coralreactor-through-sslsocketchannel/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ssl-support-in-coralreactor-through-sslsocketchannel</link>
		<comments>https://www.coralblocks.com/index.php/ssl-support-in-coralreactor-through-sslsocketchannel/#comments</comments>
		<pubDate>Thu, 10 Sep 2015 03:11:45 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[https]]></category>
		<category><![CDATA[ssl]]></category>
		<category><![CDATA[SSLEngine]]></category>
		<category><![CDATA[SSLSocketChannel]]></category>
		<category><![CDATA[wss]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=1616</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>CoralReactor implements its own non-blocking <code>SSLSocketChannel</code> so you can have out-of-the-box support for SSL. In this article we show how you can easily connect to an SSL server (<i>https</i>, <i>wss</i>, etc.) with a CoralReactor client. <span id="more-1616"></span><br />
<br/></p>
<h3 class="coral">A Simple HTTP Client</h3>
<p>For plain-text HTTP on port 80 it is extremely easy. For example, to connect to <i>www.google.com</i> and fetch the HTTP response headers you can write the simple client below:</p>
<pre class="brush: java; title: ; notranslate">
package com.coralblocks.coralreactor.client.ssl;

import static com.coralblocks.corallog.Log.*;

import java.net.URL;
import java.nio.ByteBuffer;

import com.coralblocks.coralbits.util.ByteBufferUtils;
import com.coralblocks.coralreactor.client.AbstractLineTcpClient;
import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;

public class SSLClient extends AbstractLineTcpClient {

	public SSLClient(NioReactor nio, String host, int port) {
		super(nio, host, port);
	}

	@Override
	protected void handleConnectionOpened() {
		send(&quot;GET / HTTP/1.0\n&quot;);
	}

	@Override
	protected void handleMessage(ByteBuffer msg) {
		// only print the HTTP response headers
		String s = ByteBufferUtils.parseString(msg);
		if (s.startsWith(&quot;&lt;&quot;)) {
			close();
		} else {
			System.out.println(s);
		}
	}

	public static void main(String[] args) throws Exception {
		
		URL url = new URL(&quot;http://www.google.com&quot;); // note we are using HTTP (port 80)
		String proto = url.getProtocol();
		String host = url.getHost();
		int port = url.getDefaultPort();

		NioReactor nio = NioReactor.create();
		
		Info.log(&quot;Connecting...&quot;, &quot;url=&quot;, url, &quot;host=&quot;, host, &quot;port=&quot;, port, &quot;proto=&quot;, proto);
		
		final Client client = new SSLClient(nio, host, port);
		client.open();

		nio.start();
	}
}
</pre>
<p>And the output:</p>
<pre>
22:13:40.350783-INFO Connecting... url=http://www.google.com host=www.google.com port=<font color="blue">80</font> proto=<font color="blue">http</font>
22:13:40.372449-INFO SSLClient-www.google.com:80 Client opened! sequence=1 session=null
22:13:40.390583-INFO NioReactor Reactor started! type=OptimumNioReactor impl=KQueueSelectorImpl
22:13:40.595306-INFO SSLClient-www.google.com:80 Connection established!
22:13:40.595489-INFO SSLClient-www.google.com:80 Connection opened!
<font color="blue">HTTP/1.0 200 OK</font>
Date: Thu, 10 Sep 2015 02:13:40 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&#038;answer=151657 for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: PREF=ID=1111111111111111:FF=0:TM=1441851220:LM=1441851220:V=1:S=XSWC8Y8PJthovQv9; expires=Thu, 31-Dec-2015 16:02:17 GMT; path=/; domain=.google.com
Set-Cookie: NID=71=uGm6eP_jn9OmofaZ4RX10EFlALI8NmfX9jnfSiLrNlWPngHQdR1q_pl2QifvtlJJBmPi6_Dmoacg8pP2A8TgifnQ7EIxQAOoR15DkRohQrBUKn1nFauUkoUwSwgIKgPv; expires=Fri, 11-Mar-2016 02:13:40 GMT; path=/; domain=.google.com; HttpOnly
Accept-Ranges: none
Vary: Accept-Encoding

22:13:40.839168-INFO SSLClient-www.google.com:80 Client was shutdown
22:13:40.839329-INFO SSLClient-www.google.com:80 Client closed!
</pre>
<p><br/></p>
<h3 class="coral">Switching to HTTPS</h3>
<p>CoralReactor will automatically connect to the server to download and install the required SSL certificates, so you don&#8217;t have to do anything special to make SSL work. All you have to do is turn it on with a config parameter (i.e. <i>useSSL</i>) and pass the appropriate SSL port (i.e. <i>443</i>) to your client:</p>
<pre class="brush: java; highlight: [3,10]; title: ; notranslate">
	public static void main(String[] args) throws Exception {
		
		URL url = new URL(&quot;https://www.google.com&quot;); // note we are using HTTPS now (port 443)
		String proto = url.getProtocol();
		String host = url.getHost();
		int port = url.getDefaultPort();

		NioReactor nio = NioReactor.create();
		MapConfiguration config = new MapConfiguration();
		config.add(&quot;useSSL&quot;, true); // tell CoralReactor that we want to use SSL
		
		Info.log(&quot;Connecting...&quot;, &quot;url=&quot;, url, &quot;host=&quot;, host, &quot;port=&quot;, port, &quot;proto=&quot;, proto);
		
		final Client client = new SSLClient(nio, host, port, config);
		client.open();

		nio.start();
	}
</pre>
<p>And the output:</p>
<pre>
22:51:00.100277-INFO Connecting... url=https://www.google.com host=www.google.com port=<font color="blue">443</font> proto=<font color="blue">https</font>
22:51:00.118394-INFO SSLClient-www.google.com:443 Client opened! sequence=1 session=null
22:51:00.880822-INFO NioReactor Reactor started! type=OptimumNioReactor impl=KQueueSelectorImpl
22:51:01.055298-INFO SSLClient-www.google.com:443 Connection established!
22:51:01.055608-INFO SSLClient-www.google.com:443 Connection opened!
<font color="blue">HTTP/1.0 200 OK</font>
Date: Thu, 10 Sep 2015 02:51:01 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&#038;answer=151657 for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: PREF=ID=1111111111111111:FF=0:TM=1441853461:LM=1441853461:V=1:S=W287StjqyNrsY-rC; expires=Thu, 31-Dec-2015 16:02:17 GMT; path=/; domain=.google.com
Set-Cookie: NID=71=Z-T634Zo9qGSn9GbdTlmX5KFeU6ZZrzVySqrtfJWuD_nwFbo8Qlsm3EzeRCgiybqBmnW7Mkmn0IdGTVgv6nMaUrX3YtvfsRzQH-FgmJAzGpVv1y9WV2DaLa3UNPgY1uY; expires=Fri, 11-Mar-2016 02:51:01 GMT; path=/; domain=.google.com; HttpOnly
Alternate-Protocol: 443:quic,p=1
Alt-Svc: quic=":443"; p="1"; ma=604800
Accept-Ranges: none
Vary: Accept-Encoding

22:51:01.283551-INFO SSLClient-www.google.com:443 Client was shutdown
22:51:01.283760-INFO SSLClient-www.google.com:443 Client closed!
</pre>
<p><br/></p>
<h3 class="coral">More Control <font size="4">(<i>only if you need to</i>)</font></h3>
<p>CoralReactor uses <code>javax.net.ssl.SSLEngine</code> under the hood, as specified <a href="http://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLEngine.html" target="_blank">here</a>, to implement its SSL support and encrypt the socket communication. If you want more control you can download and install the SSL certificates yourself (i.e. manually) and tell CoralReactor where to find them. This can be done with <i>openssl</i> and <i>keytool</i>.</p>
<p>For example, to download the certificate from Google you can do:</p>
<pre>
$ echo "" | openssl s_client -connect www.google.com:<font color="blue">443</font> -showcerts 2>/dev/null | openssl x509 -out <font color="blue">google.cer</font>
</pre>
<p>To add the <i>google.cer</i> you downloaded above to a keystore so it can be passed to a CoralReactor client you can do:</p>
<pre>
$ keytool -import -file <font color="blue">google.cer</font> -alias google -keystore <font color="blue">google.jks</font> -storepass "<font color="blue">abc123</font>" -keypass "<font color="blue">abc123</font>"
</pre>
<p>Now you can inform the CoralReactor client that you want to use the <i>google.jks</i> keystore. See below:</p>
<pre class="brush: java; highlight: [11,12,13]; title: ; notranslate">
	public static void main(String[] args) throws Exception {
		
		URL url = new URL(&quot;https://www.google.com&quot;); // note we are using HTTPS now
		String proto = url.getProtocol();
		String host = url.getHost();
		int port = url.getDefaultPort();

		NioReactor nio = NioReactor.create();
		MapConfiguration config = new MapConfiguration();
		config.add(&quot;useSSL&quot;, true); // tell CoralReactor that we want to use SSL
		config.add(&quot;sslKeyStoreFile&quot;, &quot;/path/to/google.jks&quot;); // path to keystore file
		config.add(&quot;sslKeyPassword&quot;, &quot;abc123&quot;); // key password
		config.add(&quot;sslKeyStorePassword&quot;, &quot;abc123&quot;); // keystore password
		
		Info.log(&quot;Connecting...&quot;, &quot;url=&quot;, url, &quot;host=&quot;, host, &quot;port=&quot;, port, &quot;proto=&quot;, proto);
		
		final Client client = new SSLClient(nio, host, port, config);
		client.open();

		nio.start();
	}
</pre>
<p><br/></p>
<h3 class="coral">Using Stunnel as a SSL Proxy</h3>
<p>For a <em>cleaner</em> alternative you can also use <i>stunnel</i> as an SSL proxy. For example, to connect to google.com:443, use the <i>stunnel.conf</i> file below:</p>
<pre>
[remote]
client = yes
accept = 8888
connect = www.google.com:443
</pre>
<p>Then run stunnel:</p>
<pre>
$ sudo stunnel stunnel.conf
</pre>
<p>You can easily test it with <i>netcat</i> using the command line below:</p>
<pre>
$ cat <(echo -e "GET / HTTP/1.0\n") | nc <font color="blue">localhost 8888</font> | head -n 9
<font color="blue">HTTP/1.0 200 OK</font>
Date: Thu, 10 Sep 2015 03:09:13 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&#038;answer=151657 for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
</pre>
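<p>Once stunnel is running, the TLS handshake happens inside the proxy, so a CoralReactor client can talk plain TCP to <i>localhost:8888</i>. Below is a minimal sketch that reuses the <code>SSLClient</code> class from the beginning of this article and simply points it at the local proxy (the only assumption is that stunnel is listening on port 8888 as configured above); note that no <i>useSSL</i> config is needed in this setup:</p>
<pre class="brush: java; title: ; notranslate">
	public static void main(String[] args) throws Exception {

		NioReactor nio = NioReactor.create();

		// connect to the local stunnel proxy instead of www.google.com:443;
		// stunnel performs the SSL/TLS handshake, so the client talks plain text
		final Client client = new SSLClient(nio, &quot;localhost&quot;, 8888);
		client.open();

		nio.start();
	}
</pre>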
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/ssl-support-in-coralreactor-through-sslsocketchannel/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Thread Concurrency vs Network Asynchronicity</title>
		<link>https://www.coralblocks.com/index.php/thread-concurrency-vs-network-asynchronicity/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=thread-concurrency-vs-network-asynchronicity</link>
		<comments>https://www.coralblocks.com/index.php/thread-concurrency-vs-network-asynchronicity/#comments</comments>
		<pubDate>Mon, 09 Feb 2015 19:09:24 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralQueue]]></category>
		<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[asynchonous messages]]></category>
		<category><![CDATA[coralqueue]]></category>
		<category><![CDATA[coralreactor]]></category>
		<category><![CDATA[disruptor]]></category>
		<category><![CDATA[distributed systems]]></category>
		<category><![CDATA[MQ]]></category>
		<category><![CDATA[multiplexor]]></category>
		<category><![CDATA[nio]]></category>
		<category><![CDATA[rabbitmq]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=1925</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In this article we study two different ways of handling client requests that involve a <i>blocking</i> operation: multithreaded programming through concurrent queues and asynchronous network calls through distributed systems. <span id="more-1925"></span></p>
<h3 class="coral">The Problem</h3>
<p>We have clients connected to an HTTP server (or any TCP server) sending requests that require a heavy computation, in other words, each request needs to execute some code that can take <strong>an arbitrary amount of time to complete</strong>. If we isolate this time-consuming code in a function, we can then call this function a <strong>blocking call</strong>. Simple examples would be a function that queries a database or a function that manipulates a large image file.</p>
<p><a target="_blank" href="http://www.coralblocks.com/wp-content/uploads/2017/08/Screen-Shot-2017-08-21-at-11.49.05-AM.png"><img src="http://www.coralblocks.com/wp-content/uploads/2017/08/Screen-Shot-2017-08-21-at-11.49.05-AM-300x157.png" alt="HLBC2" width="300" height="157" class="aligncenter size-medium wp-image-2166" /></a></p>
<p>In the old model where one connection would be handled by its own dedicated thread, there would be no problem. But in the new reactor model where a single thread will be handling thousands of connections, all it takes is a single connection executing a blocking call to impact and block all other connections. When you have a single-threaded system, the worst thing that can happen is blocking your critical thread. How do we solve this problem without reverting back to the old <i>one-thread-per-connection</i> model?</p>
<p><a target="_blank" href="http://www.coralblocks.com/wp-content/uploads/2017/08/Screen-Shot-2017-08-21-at-11.55.55-AM.png"><img src="http://www.coralblocks.com/wp-content/uploads/2017/08/Screen-Shot-2017-08-21-at-11.55.55-AM-1024x795.png" alt="oldModel" width="600" height="465" class="aligncenter size-large wp-image-2168" /></a></p>
<p><br/></p>
<h3 class="coral">Solution #1: Thread Concurrency</h3>
<p>The first solution is described in detail <a href="/index.php/2015/01/architecture-case-study-1-coralreactor-coralqueue/" target="_blank">in this article</a>. You basically use CoralQueue to distribute the requests&#8217; work (not the requests themselves) to a fixed number of threads that will execute them concurrently (i.e. in parallel). Let&#8217;s say you have 1000 simultaneous connections. Instead of having 1000 simultaneous threads (i.e. the impractical <em>one-thread-per-connection</em> model) you can analyze how many available CPU cores your machine has and choose a much smaller number of threads, let&#8217;s say 4. This architecture will give you the following advantages:</p>
<style>
.li_adv { margin: 0 0 17px 17px; }
</style>
<ul>
<li class="li_adv">The critical reactor thread handling the http server requests will never block because the work necessary for each request will be simply added to a queue, freeing the reactor thread to handle additional incoming http requests.</li>
<li class="li_adv">Even if a thread or two get a request that takes a long time to complete, the other threads can continue to drain the requests sitting on the queue.</li>
</ul>
<p>If you can guess in advance which requests will take a long time to execute, you can even partition the queue in lanes and have a fast-track lane for high-priority / fast requests, so they always find a free thread to execute.</p>
<p><a target="_blank" href="http://www.coralblocks.com/wp-content/uploads/2017/08/Screen-Shot-2017-08-21-at-12.09.04-PM.png"><img src="http://www.coralblocks.com/wp-content/uploads/2017/08/Screen-Shot-2017-08-21-at-12.09.04-PM-1024x769.png" alt="CoralQueue_model" width="600" height="450" class="aligncenter size-large wp-image-2170" /></a></p>
<p><br/></p>
<h3 class="coral">Solution #2: Distributed Systems</h3>
<p>Instead of doing everything on a single machine, with limited CPU cores, you can use a distributed system architecture and take advantage of asynchronous network calls. That simplifies the http server handling the requests, which now does not need any additional threads or concurrent queues. It can do everything on a single, non-blocking reactor thread. It works like this:</p>
<ul>
<li class="li_adv">Instead of doing the heavy computation on the http server itself, you can move this task to another machine (i.e. node).</li>
<li class="li_adv">Instead of distributing work across threads using CoralQueue, you can simply make an asynchronous network call and pass the work to another node responsible for the heavy computation task.</li>
<li class="li_adv">The http server will <strong>asynchronously wait</strong> for the response from the heavy computation node. The response can take as long as necessary to arrive through the network because the <strong>http server will never block</strong>.</li>
<li class="li_adv">The http server can use only one thread to handle incoming http connection from external clients and outgoing tcp connections to the internal nodes doing the heavy computation work.</li>
<li class="li_adv">And the beauty of it is that you can scale by simply adding/removing nodes as necessary. Dynamic load balancing becomes trivial.</li>
<li class="li_adv">Failover is not that hard either: If one node fails, the clients waiting on that node can re-send their work to another node.</li>
</ul>
<p>Now you might ask: How do we implement the architecture for this new node responsible for the heavy computation work? Aren&#8217;t we just transferring the problem from one machine to another? Yes, but with one important difference: now you can add and remove nodes dynamically, as needed. Before, you were stuck with the number of available CPU cores in your single machine. It is also important to note that the http server does not care or need to know how the nodes will choose to implement the heavy computation task. All it needs to do is send the asynchronous requests. As far as the http server is concerned, the heavy computation node can use the best or the worst architecture to do its job. The server will make a request and wait asynchronously for the answer.</p>
<p><a target="_blank" href="http://www.coralblocks.com/wp-content/uploads/2016/02/Screen-Shot-2017-08-21-at-12.26.54-PM.png"><img src="http://www.coralblocks.com/wp-content/uploads/2016/02/Screen-Shot-2017-08-21-at-12.26.54-PM-1024x679.png" alt="Screen Shot 2017-08-21 at 12.26.54 PM" width="600" height="397" class="aligncenter size-large wp-image-2174" /></a></p>
<p><br/></p>
<h3 class="coral">An Example</h3>
<p>Let&#8217;s say we have an http server that receives requests from clients for stock prices. The way it knows the price of a stock is by making an http request to GoogleFinance to discover the price. If making a request to Google is a blocking call (and it is, because how can you know in advance how long it is going to take to get a response?) we can use Solution #1. Requests will be distributed across threads that will process them in parallel, blocking if necessary to wait for Google to respond with a price. But wait a minute, <strong>why can&#8217;t we just treat Google as a separate node in our distributed system and make an asynchronous call to its http servers?</strong> That&#8217;s Solution #2 and the code below shows how it can be implemented:</p>
<pre class="brush: java; title: ; notranslate">
/*
 * Copyright (c) CoralBlocks LLC 2017
 */
package com.coralblocks.coralreactor.client.bench.google;

import java.nio.ByteBuffer;
import java.util.Iterator;

import com.coralblocks.coralbits.ds.IdentityMap;
import com.coralblocks.coralbits.ds.PooledLinkedList;
import com.coralblocks.coralbits.util.ByteBufferUtils;
import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.server.Server;
import com.coralblocks.coralreactor.server.http.HttpServer;
import com.coralblocks.coralreactor.util.Configuration;
import com.coralblocks.coralreactor.util.MapConfiguration;

public class AsyncHttpServer extends HttpServer implements GoogleFinanceListener {
	
	public class AsyncHttpAttachment extends HttpAttachment {
		// store the symbol requested by each client so we can re-send during failover...
		StringBuilder symbol = new StringBuilder(32);
		
		@Override
		public void reset(long clientId, Client client) {
			super.reset(clientId, client);
			symbol.setLength(0); // start with a fresh empty one...
		}
	}
	
	// number of http clients used to connect to google
	private final int connectionsToGoogle; 
	
	// the clients used to connect to google
	private final GoogleFinanceClient[] googleClients; 
	
	// a list of clients waiting for responses from google (for each google http connection)
	private final IdentityMap&lt;GoogleFinanceClient, PooledLinkedList&lt;Client&gt;&gt; pendingRequests; 

	private final StringBuilder symbol = new StringBuilder(32);
	private final StringBuilder price = new StringBuilder(32);

	public AsyncHttpServer(NioReactor nio, int port, Configuration config) {
	    super(nio, port, config);
	    this.connectionsToGoogle = config.getInt(&quot;connectionsToGoogle&quot;);
	    this.googleClients  = new GoogleFinanceClient[connectionsToGoogle];
	    this.pendingRequests = new IdentityMap&lt;GoogleFinanceClient, PooledLinkedList&lt;Client&gt;&gt;(connectionsToGoogle);
	    
	    MapConfiguration googleFinanceConfig = new MapConfiguration();
	    googleFinanceConfig.add(&quot;readBufferSize&quot;, 512 * 1024); // the html page returned is big...
	    
	    for(int i = 0; i &lt; googleClients.length; i++) {
	    	googleClients[i] = new GoogleFinanceClient(nio, &quot;www.google.com&quot;, 80, googleFinanceConfig);
	    	googleClients[i].addListener(this);
	    	googleClients[i].open();
	    	pendingRequests.put(googleClients[i], new PooledLinkedList&lt;Client&gt;());
	    }
    }
	
	@Override
	protected Attachment createAttachment() {
		return new AsyncHttpAttachment(); // let's use our attachment
	}
	
	private CharSequence parseSymbolFromClientRequest(ByteBuffer request) {
		// for simplicity we assume that the symbol is the request
		// Ex: GET /GOOG HTTP/1.1 =&gt; the symbol is GOOG
		
		int pos = ByteBufferUtils.positionOf(request, '/');
		
		if (pos == -1) return null;
		
		request.position(pos + 1);
		
		pos = ByteBufferUtils.positionOf(request, ' ');
		
		if (pos == -1) return null;
		
		request.limit(pos);
		
		symbol.setLength(0);
		ByteBufferUtils.parseString(request, symbol); // read from ByteBuffer to StringBuilder
		
		return symbol;
	}
	
	private GoogleFinanceClient chooseGoogleClient(long clientId) {
		// try as much as you can to get a google client...
		// that's because some connections might be dead
		for(int i = 0; i &lt; connectionsToGoogle; i++) {
			int index = (int) ((clientId + i) % connectionsToGoogle);
			GoogleFinanceClient googleClient = googleClients[index];
			if (googleClient.isConnectionOpen()) return googleClient;
		}
		return null;
	}
	
	@Override
	protected void handleMessage(Client client, ByteBuffer msg) {
		
		AsyncHttpAttachment a = (AsyncHttpAttachment) getAttachment(client);
		
		ByteBuffer request = a.getRequest();
		
		CharSequence symbol = parseSymbolFromClientRequest(request);
		
		if (symbol == null) {
			System.err.println(&quot;Bad request from client: &quot; + client);
			return;
		}
		
		a.symbol.setLength(0);
		a.symbol.append(symbol);

		sendToGoogle(client, symbol);
	}
	
	private void sendToGoogle(Client client, CharSequence symbol) {
		
		long clientId = getClientId(client);
		
		// distribute requests across our Google http clients...
		GoogleFinanceClient googleClient = chooseGoogleClient(clientId);

		if (googleClient == null) {
			System.err.println(&quot;It looks like all google clients are dead! Dropping request from client: &quot; + client);
			return;
		}
		
		// send the request to google (it fully supports http pipelining)
		googleClient.sendPriceRequest(symbol);
		
		// add this client to the line of clients waiting for a response from the google http client
		pendingRequests.get(googleClient).add(client);
	}
	
	@Override // from GoogleFinanceListener interface
    public void onSymbolPrice(GoogleFinanceClient googleClient, CharSequence symbol, ByteBuffer priceBuffer) {
		
		// Got a response from google, respond to the client waiting for the price...
		
		PooledLinkedList&lt;Client&gt; clients = pendingRequests.get(googleClient);
		Client client = clients.removeFirst();
		
		price.setLength(0);
		ByteBufferUtils.parseString(priceBuffer, price);
		
		CharSequence response = getHttpResponse(price);
		client.send(response);
    }

	@Override // from GoogleFinanceListener interface
    public void onConnectionOpened(GoogleFinanceClient client) {
		// NOOP
    }

	@Override // from GoogleFinanceListener interface
    public void onConnectionTerminated(GoogleFinanceClient googleClient) {
		
		// Our connection to google was broken...
		// failover all clients waiting on this google connection by re-sending them to another google connection

		PooledLinkedList&lt;Client&gt; clients = pendingRequests.get(googleClient);
		Iterator&lt;Client&gt; iter = clients.iterator();
		while(iter.hasNext()) {
			Client c = iter.next();
			AsyncHttpAttachment a = (AsyncHttpAttachment) getAttachment(c);
			if (a.symbol.length() &gt; 0) {
				sendToGoogle(c, a.symbol); // re-send
			}
		}
		clients.clear();
    }
	
	public static void main(String[] args) {
		
		int connectionsToGoogle = Integer.parseInt(args[0]);
		int port = Integer.parseInt(args[1]);
		
		NioReactor nio = NioReactor.create();
		MapConfiguration config = new MapConfiguration();
		config.add(&quot;connectionsToGoogle&quot;, connectionsToGoogle);
		Server server = new AsyncHttpServer(nio, port, config);
		server.open();
		nio.start();
	}
}

</pre>
<p><br/><br />
The advantages of the code above are:</p>
<ul>
<li class="li_adv">It is small and simple.</li>
<li class="li_adv">It only uses one thread, the critical reactor thread, for all network activity.</li>
<li class="li_adv">There is no multithreading programming, there is no blocking and there is no concurrent queues.</li>
<li class="li_adv">It distributes the load across a set of connections to GoogleFinance (load balance).</li>
<li class="li_adv">If one connection to GoogleFinance fails, it re-sends the pending requests on that connection to other connections (failover).</li>
<li class="li_adv">You can scale the front-end to support a larger number of simultaneous clients and decrease latency by launching more http servers pinned to other cpu cores.</li>
<li class="li_adv">You can scale the back-end to increase throughput by adding more connections to GoogleFinance (i.e. <code>connectionsToGoogle</code> above).</li>
</ul>
<p><br/></p>
<h3 class="coral">Asynchronous Messages</h3>
<p>If you start to enjoy the idea of distributed systems, the next step is to dive into the world of <strong>true distributed systems based on asynchronous messages</strong>. Instead of making asynchronous network requests to a single node, messages are sent to the distributed system so any node can take action if necessary. And because asynchronous messages are usually implemented through a reliable UDP protocol, you are able to build a truly distributed system that provides: parallelism (nodes can truly run in parallel); tight integration (all nodes see the same messages in the same order); decoupling (nodes can evolve independently); failover/redundancy (when a node fails, another one can be running and building state to take over immediately); scalability/load balancing (just add more nodes); elasticity (nodes can lag during activity peaks without affecting the system as a whole); and resiliency (nodes can fail / stop working without taking the whole system down). For more information about how asynchronous messaging middlewares work you can check <a href="/index.php/2015/04/state-of-the-art-distributed-systems-with-coralmq/" target="_blank">CoralSequencer</a>.</p>
<p><a target="_blank" href="http://www.coralblocks.com/wp-content/uploads/2016/02/Screen-Shot-2017-08-21-at-12.38.13-PM.png"><img src="http://www.coralblocks.com/wp-content/uploads/2016/02/Screen-Shot-2017-08-21-at-12.38.13-PM-1024x561.png" alt="Screen Shot 2017-08-21 at 12.38.13 PM" width="600" height="328" class="aligncenter size-large wp-image-2176" /></a></p>
<p><br/></p>
<h3 class="coral">Conclusion</h3>
<p>Every system will eventually have to perform some kind of action that requires an arbitrary amount of time to complete. In the past, pure multithreading applications became very popular, but the <em>one-thread-per-request model</em> does not scale. By using concurrent queues you can build a multithreaded system without all the multithreading complexity and, best of all, it can easily scale to thousands of simultaneous connections. But there is also an alternative solution: distributed systems, where instead of using an in-memory concurrent queue to distribute work across threads you use the network to distribute work across nodes, making asynchronous network calls to those nodes. The next architectural step is to use an asynchronous messaging middleware (MQ) instead of network requests to design distributed systems that are not only easy to scale but are also loosely coupled, providing parallelism, tight integration, failover, redundancy, load balancing, elasticity and resiliency.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/thread-concurrency-vs-network-asynchronicity/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Architecture Case Study #1: CoralReactor + CoralQueue</title>
		<link>https://www.coralblocks.com/index.php/architecture-case-study-1-coralreactor-coralqueue/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=architecture-case-study-1-coralreactor-coralqueue</link>
		<comments>https://www.coralblocks.com/index.php/architecture-case-study-1-coralreactor-coralqueue/#comments</comments>
		<pubDate>Fri, 23 Jan 2015 00:14:20 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[Architecture]]></category>
		<category><![CDATA[Consulting]]></category>
		<category><![CDATA[CoralQueue]]></category>
		<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[coralqueue]]></category>
		<category><![CDATA[coralreactor]]></category>
		<category><![CDATA[disruptor]]></category>
		<category><![CDATA[gc]]></category>
		<category><![CDATA[low-latency]]></category>
		<category><![CDATA[netty]]></category>
		<category><![CDATA[nio]]></category>
		<category><![CDATA[queue]]></category>
		<category><![CDATA[selector]]></category>
		<category><![CDATA[throughput]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=780</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>You need a high-throughput application capable of handling thousands of client connections simultaneously, but some client requests might take a long time to process for whatever reason. How can that be done in an efficient way without impacting other connected clients and without leaving the application unresponsive to new client connections? <span id="more-780"></span></p>
<h3 class="coral">Solution</h3>
<p>To handle thousands of connections an application must use non-blocking sockets over a single <a href="http://docs.oracle.com/javase/7/docs/api/java/nio/channels/Selector.html" target="_blank">selector</a>, which means <font color="#26619b"><b>the same thread will handle thousands of connections simultaneously</b></font>. Problem is, if one of these connections lags for whatever reason, all other ones and the application as a whole must not be affected. In the past this problem was solved with the infamous <em>one-thread-per-client</em> approach which does not scale and leads to all kinds of multithreading pitfalls like race conditions, visibility issues and deadlocks. By using <font color="#26619b"><b>one thread for the selector and a fixed number of threads for the heavy-duty work</b></font>, a system can solve this problem by distributing client work (and not client requests) among the heavy-duty threads without affecting the overall performance of the application. But how does this communication between the selector thread and the heavy-duty threads happen? Through CoralQueue demultiplexers and multiplexers.<br />
<br/></p>
<h3 class="coral">Diagram</h3>
<p><a href="http://www.coralblocks.com/wp-content/uploads/2015/02/arch1.jpg"><img src="http://www.coralblocks.com/wp-content/uploads/2015/02/arch1.jpg" alt="arch1" width="1024" height="768" class="alignnone size-full wp-image-872" /></a></p>
<h3 class="coral">Flow</h3>
<style>
.li_flow { margin: 0 0 4px 0; }
</style>
<ul style="padding: 0 40px">
<li class="li_flow">CoralReactor running on single thread pinned to an isolated cpu core with CoralThreads.</li>
<li class="li_flow">CoralReactor opens one or more servers listening on a local port. All servers are running on the same reactor thread.</li>
<li class="li_flow">A server can receive one or thousands of connections from many clients across the globe.</li>
<li class="li_flow">Each client sends requests with some work to be performed.</li>
<li class="li_flow">The server does not perform this work. Instead it passes a message describing the work to a heavy-duty thread using a CoralQueue demultiplexer.</li>
<li class="li_flow">The CoralQueue demux distributes the messages among the heavy-duty threads.</li>
<li class="li_flow">The heavy-duty threads are also pinned to an isolated cpu core with CoralThreads.</li>
<li class="li_flow">A heavy-duty thread executes the work and sends back a message with the results to the server using a CoralQueue multiplexer.</li>
<li class="li_flow">The server picks up the message from the CoralQueue mux and reports back the results to the client.</li>
</ul>
<h3 class="coral">FAQ</h3>
<style>
.li_faq { margin: 0 0 17px 0; }
</style>
<ol style="padding: 12px 40px">
<li class="li_faq"><font color="#26619b">Won&#8217;t you have to create garbage when passing messages back and forth among threads?</font><br />
<b>A:</b> No. CoralQueue is an ultra-low-latency, lock-free data structure for inter-thread communication that does not produce any garbage.
</li>
<li class="li_faq"><font color="#26619b">What happens if the queue gets full?</font><br />
<b>A:</b> A full queue will cause the reactor thread to block waiting for space. This creates latencies. To avoid a full queue you can start by increasing the number of heavy-duty threads and/or increasing the size of the queue.
</li>
<li class="li_faq"><font color="#26619b">I did number 2 above but I am still getting a full queue. Now what?</font><br />
<b>A:</b> CoralQueue has a built-in feature to write messages to disk asynchronously when it hits a full queue, so it does not have to block waiting for space. The heavy-duty threads can then get the messages from the queue file when they don&#8217;t find them in memory. You can use this approach to avoid disturbing the reactor thread, but at this point it is probably a good idea to also make whatever work your heavy-duty threads are performing more efficient.
</li>
<li class="li_faq"><font color="#26619b">How many connections can the application handle?</font><br />
<b>A:</b> CoralReactor can easily handle 10k+ connections concurrently in a single thread. If your machine has additional cores, you can also add more reactor threads to increase this number even more.
</li>
<li><font color="#26619b">How many heavy-duty threads should I have?</font><br />
<b>A:</b> That depends on the number of available CPU cores your machine has. CPU cores are a scarce resource, so you should allocate them across your applications wisely. Creating more threads than the number of available cores won&#8217;t bring any benefit and will actually degrade the performance of the system due to context switches. Ideally you should have a fixed number of heavy-duty threads, each pinned to its own isolated core, so they are never interrupted.
</li>
</ol>
<h3 class="coral">Variations</h3>
<p>Instead of using one CoralQueue demultiplexer to randomly distribute messages across all heavy-duty threads, you can introduce the concept of <em>lanes</em>, with each lane having a <em>heaviness</em> number attached to it. For example, all heavy tasks go to lane 1, not-so-heavy tasks go to lane 2 and light tasks go to lane 3. The application then decides to which lane a message should be dispatched. If a lane will be processed by a single heavy-duty thread, it can use a regular <em>one-producer-to-one-consumer</em> CoralQueue queue. If a lane will be served by two or more heavy-duty threads, it can use a CoralQueue demultiplexer. To report the results back to the server, all heavy-duty threads can continue to use a CoralQueue multiplexer, as sketched below.<br />
<br/></p>
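<p>As a rough illustration of this variation, here is a minimal sketch (not a complete server) of how the reactor thread could pick a lane and dispatch to it, reusing the same calls that appear in the full example below. The <code>heaviness(...)</code> method, the number of lanes and the <code>lanes</code> array are assumptions made purely for illustration; a lane served by a single heavy-duty thread could use a one-producer-to-one-consumer queue instead of a demux.</p>
<pre class="brush: java; title: ; notranslate">
// Minimal sketch of the lane idea, meant to live inside a server such as the
// QueuedTcpServer below. heaviness() and the lane setup are hypothetical and
// shown only for illustration.
private final Demux&lt;WorkerRequestMessage&gt;[] lanes; // one demux per lane, built in the constructor (omitted)

private void dispatchToLane(Client client, ByteBuffer msg) {
	
	int lane = heaviness(msg); // hypothetical: the application decides how heavy this request is
	
	Demux&lt;WorkerRequestMessage&gt; demux = lanes[lane];
	
	WorkerRequestMessage req;
	
	while((req = demux.nextToDispatch()) == null); // busy spin, as in the full example below
	
	req.clientId = getClientId(client);
	req.readFrom(msg);
	
	demux.flush();
}
</pre>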
<h3 class="coral">Code Example</h3>
<p>Below you can see a simple server illustrating the architecture described above. To keep it simple, it receives a string (the request) and returns the string prepended by its length (the response). It supports many clients and distributes the work among worker threads using a demux. It then uses a mux to collect the results from the worker threads and responds to the appropriate client. In a more realistic scenario, the worker threads would be doing some heavier work, like accessing a database. You can easily test this server by connecting through a telnet client.</p>
<pre class="brush: java; title: ; notranslate">
package com.coralblocks.coralreactor.client.bench.queued;

import java.nio.ByteBuffer;

import com.coralblocks.coralbits.util.Builder;
import com.coralblocks.coralbits.util.ByteBufferUtils;
import com.coralblocks.coralqueue.demux.AtomicDemux;
import com.coralblocks.coralqueue.demux.Demux;
import com.coralblocks.coralqueue.mux.AtomicMux;
import com.coralblocks.coralqueue.mux.Mux;
import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.server.AbstractLineTcpServer;
import com.coralblocks.coralreactor.server.Server;
import com.coralblocks.coralreactor.util.Configuration;
import com.coralblocks.coralreactor.util.MapConfiguration;

public class QueuedTcpServer extends AbstractLineTcpServer {
	
	static class WorkerRequestMessage {
		
		long clientId;
		ByteBuffer buffer;
		
		WorkerRequestMessage(int maxRequestLength) {
			this.clientId = -1;
			this.buffer = ByteBuffer.allocateDirect(maxRequestLength);
		}
		
		void readFrom(ByteBuffer src) {
			buffer.clear();
			buffer.put(src);
			buffer.flip();
		}
	}
	
	static class WorkerResponseMessage {
		
		long clientId;
		ByteBuffer buffer;
		
		WorkerResponseMessage(int maxResponseLength) {
			this.clientId = -1;
			this.buffer = ByteBuffer.allocateDirect(maxResponseLength);
		}
	}
	
	private final int numberOfWorkerThreads;
	private final Demux&lt;WorkerRequestMessage&gt; demux;
	private final Mux&lt;WorkerResponseMessage&gt; mux;
	private final WorkerThread[] workerThreads;

	public QueuedTcpServer(NioReactor nio, int port, Configuration config) {
	    super(nio, port, config);
	    this.numberOfWorkerThreads = config.getInt(&quot;numberOfWorkerThreads&quot;);
	    final int maxRequestLength = config.getInt(&quot;maxRequestLength&quot;, 256);
	    final int maxResponseLength = config.getInt(&quot;maxResponseLength&quot;, 256);
	    
	    Builder&lt;WorkerRequestMessage&gt; requestBuilder = new Builder&lt;WorkerRequestMessage&gt;() {
			@Override
            public WorkerRequestMessage newInstance() {
	            return new WorkerRequestMessage(maxRequestLength);
            }
	    };
	    
	    this.demux = new AtomicDemux&lt;WorkerRequestMessage&gt;(1024, requestBuilder, numberOfWorkerThreads);
	    
	    Builder&lt;WorkerResponseMessage&gt; responseBuilder = new Builder&lt;WorkerResponseMessage&gt;() {
	    	@Override
            public WorkerResponseMessage newInstance() {
	            return new WorkerResponseMessage(maxResponseLength);
            }
	    };
	    
	    this.mux = new AtomicMux&lt;WorkerResponseMessage&gt;(1024, responseBuilder, numberOfWorkerThreads);
	    
	    this.workerThreads = new WorkerThread[numberOfWorkerThreads];
    }
	
	@Override
	public void open() {
		
		for(int i = 0; i &lt; numberOfWorkerThreads; i++) {
			if (workerThreads[i] != null) {
				try {
					// make sure it is dead!
					workerThreads[i].stopMe();
					workerThreads[i].join();
				} catch(Exception e) {
					throw new RuntimeException(e);
				}
			}
		}
		
		mux.clear();
		demux.clear();
			
		for(int i = 0; i &lt; numberOfWorkerThreads; i++) {
			workerThreads[i] = new WorkerThread(i);
			workerThreads[i].start();
		}
		
		nio.addCallback(this); // we want to constantly receive callbacks from 
							   // reactor thread on handleCallback() to drain responses from mux
		
		super.open();
	}
	
	@Override
	public void close() {
		
		for(int i = 0; i &lt; numberOfWorkerThreads; i++) {
			if (workerThreads[i] != null) {
				workerThreads[i].stopMe();
			}
		}
		
		nio.removeCallback(this);
		
		super.close();
	}
	
	@Override
	protected void handleMessage(Client client, ByteBuffer msg) {
		
		if (ByteBufferUtils.equals(msg, &quot;bye&quot;) || ByteBufferUtils.equals(msg, &quot;exit&quot;)) {
			client.close();
			return;
		}
		
		// on a new message, dispatch to the demux so worker threads can process it:
		
		WorkerRequestMessage req;
		
		while((req = demux.nextToDispatch()) == null); // busy spin...
		
		req.clientId = getClientId(client);
		req.readFrom(msg);
		
		demux.flush();
	}
	
	class WorkerThread extends Thread {
		
		private final int index;
		private volatile boolean running = true;
		
		public WorkerThread(int index) {
			super(&quot;WorkerThread-&quot; + index);
			this.index = index;
		}
		
		public void stopMe() {
			running = false;
		}
		
		@Override
        public void run() {
            
			while(running) {
			
    			// read from demux and process:
    			
    			long avail = demux.availableToPoll(index);
    			
    			if (avail &gt; 0) {
    				
    				for(int i = 0; i &lt; avail; i++) {
    					
    					// get the request:
    					WorkerRequestMessage req = demux.poll(index);
    					
    					// do something heavy with the request, like accessing database or big data...
    					// for our example we just prepend the message length
    					
    					long clientId = req.clientId;
    					int msgLen = req.buffer.remaining();
    					
    					// get a response object from mux:
    
    					WorkerResponseMessage res = null;
    					
    					while((res = mux.nextToDispatch(index)) == null); // busy spin
    
    					// notice below that we are just copying data from request to response:
    					res.clientId = clientId; // copy clientId
    					res.buffer.clear();
    					ByteBufferUtils.appendInt(res.buffer, msgLen);
    					res.buffer.put((byte) ':');
    					res.buffer.put((byte) ' ');
    					res.buffer.put(req.buffer); // copy buffer contents
    					res.buffer.flip(); // don't  forget
    				}
    				
					mux.flush(index);
    				demux.donePolling(index);
    				nio.wakeup(); // don't forget so handleCallback is called
    			}
			}
        }
	}
	
	@Override
	protected void handleCallback(long nowInMillis) {
		
		// this is the reactor thread calling us back to check whether the mux has pending results:
		
		long avail = mux.availableToPoll();
		
		if (avail &gt; 0) {
			
			for(long i = 0; i &lt; avail; i++) {
				
				WorkerResponseMessage res = mux.poll();
				
				Client client = getClient(res.clientId);
				
				if (client != null) { // client might have disconnected...
					client.send(res.buffer);
				}
			}
			
			mux.donePolling();
		}
	}
	
	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();
		MapConfiguration config = new MapConfiguration();
		config.add(&quot;numberOfWorkerThreads&quot;, 4);
		Server server = new QueuedTcpServer(nio, 45451, config);
		server.open();
		nio.start();
		
	}
	
}
</pre>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/architecture-case-study-1-coralreactor-coralqueue/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Handling Socket Lagging during Write Operations</title>
		<link>https://www.coralblocks.com/index.php/handling-socket-lagging-during-write-operations/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=handling-socket-lagging-during-write-operations</link>
		<comments>https://www.coralblocks.com/index.php/handling-socket-lagging-during-write-operations/#comments</comments>
		<pubDate>Fri, 14 Aug 2015 03:05:55 +0000</pubDate>
		<dc:creator><![CDATA[cb]]></dc:creator>
				<category><![CDATA[CoralReactor]]></category>
		<category><![CDATA[coralreactor]]></category>
		<category><![CDATA[lag]]></category>
		<category><![CDATA[lagging]]></category>
		<category><![CDATA[nio]]></category>
		<category><![CDATA[op_write]]></category>
		<category><![CDATA[reactor]]></category>
		<category><![CDATA[selector]]></category>
		<category><![CDATA[write]]></category>

		<guid isPermaLink="false">http://www.coralblocks.com/index.php/?p=1530</guid>
		<description><![CDATA[ [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>A common problem when working with non-blocking sockets is that a client may lag when the send rate is too high, in other words, the client will push out messages faster than the network card can send and/or the other side&#8217;s network card can receive them. That will cause the underlying write socket buffer at the OS/Kernel level to fill up. In this article we explain how <a href="/index.php/category/coralreactor" target="_blank">CoralReactor</a> handles this complex scenario in a simple way so that you don&#8217;t have to worry about it. <span id="more-1530"></span></p>
<h3 class="coral">Socket Lagging</h3>
<p>A socket might lag due to a combination of one or more of the reasons below:</p>
<style>
.li_space { margin: 0 0 6px 0; }
</style>
<ol>
<li class="li_space">The underlying socket send buffer (native SO_SNDBUF option) is too small. You can actually change the size of this buffer from Java using the method <code>setSendBufferSize</code> of <code>Socket</code>. See the <a href="/index.php/2015/04/coralreactor-faq/" target="_blank">FAQ</a> on how to change this size straight through CoralReactor. By default, CoralReactor tries to set it to 12Mb.</li>
<li class="li_space">The receiving party is reading and processing the messages too slowly.</li>
<li class="li_space">The sending party is writing the messages too fast.</li>
<li class="li_space">The network is slow.</li>
</ol>
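<p>As a quick reference for the first item above, here is a minimal plain-Java sketch (not CoralReactor code) of resizing the native send buffer through <code>java.net.Socket</code>. The host and port are placeholders, and the operating system may silently cap the requested size, so it is worth reading the value back:</p>
<pre class="brush: java; title: ; notranslate">
import java.net.Socket;

public class SendBufferSizeExample {

	public static void main(String[] args) throws Exception {
		
		// placeholder host/port: point this at any server you have running
		Socket socket = new Socket(&quot;localhost&quot;, 45451);
		
		socket.setSendBufferSize(12 * 1024 * 1024); // ask for 12Mb, the size CoralReactor tries to use by default
		
		// the OS may grant less than requested, so check what you actually got:
		System.out.println(&quot;SO_SNDBUF: &quot; + socket.getSendBufferSize());
		
		socket.close();
	}
}
</pre>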
<p>In this article we focus on the third reason above, which is the most common.</p>
<h3 class="coral">Application Write Buffer</h3>
<p>CoralReactor maintains a write buffer at the application level: when CoralReactor writes a message, it actually writes it to an internal <code>java.nio.ByteBuffer</code> named <code>writeBuffer</code>. Later, when you call <code>flush()</code>, CoralReactor tries to transfer bytes from this internal buffer to the underlying socket buffer. Having this intermediary write buffer allows the application to check the available space before it actually tries to write the message. Note that the same check is not possible with the underlying socket buffer, in other words, <font color="#26619b"><b>it is impossible for the application to know how much space is available in the underlying socket buffer without actually trying to write to it</b></font>.</p>
<h3 class="coral">Sending Messages</h3>
<p>CoralReactor provides two methods to send messages:</p>
<ul>
<li class="li_space"><code>write()</code>: writes a full message to the application writeBuffer and do not flush. Note that before this writing happens, the available space in the writeBuffer is checked to see if the message can be fully written. If there is no space available (i.e. the writeBuffer is nearly full) and automatic <code>flush()</code> is issued by CoralReactor to free up space. When you are done writing the messages, you should call <code>flush()</code> to send them.</li>
<li class="li_space"><code>send()</code>: calls <code>write()</code> and then immediately calls <code>flush()</code> only if the message was successfully written to the <code>writeBuffer</code>, in other words, if write() returned true. Read below for more details on how you can handle this return value.</li>
</ul>
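<p>Below is a minimal sketch of the two calls, written as if inside a client that extends one of the CoralReactor client base classes and sends plain byte arrays, like the full example later in this article. The boolean return values are ignored here for brevity; the next sections show how to handle them:</p>
<pre class="brush: java; title: ; notranslate">
// Minimal sketch, assuming a client class that extends one of the CoralReactor
// client base classes (e.g. AbstractLineTcpClient, as in the full example below).
private void sendBatch(byte[] msg1, byte[] msg2) {
	
	// write() only copies the messages to the application writeBuffer...
	write(msg1);
	write(msg2);
	
	// ...so a single flush() pushes both of them towards the underlying socket buffer:
	flush();
}

private void sendOne(byte[] msg) {
	
	// send() is a write() followed by an immediate flush() when the write succeeds:
	send(msg);
}
</pre>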
<h3 class="coral">Avoiding Partial Writes</h3>
<p>CoralReactor&#8217;s <code>write(...)</code> and <code>send(...)</code> methods return a boolean to indicate whether the message was written to the application&#8217;s internal writeBuffer. If the underlying socket buffer gets full, eventually the internal writeBuffer will also get full. As explained before, CoralReactor is able to check how much space is available in the writeBuffer. Therefore, for simplicity&#8217;s sake, CoralReactor does not write partial messages to its internal writeBuffer, in other words, <font color="#26619b"><b>it either writes the whole message and returns true or doesn&#8217;t write anything at all</b></font> and returns false. If a write or send operation returns false, the application can save the message and retry at a later time, when the method <code>handleFreedWriteBufferSpace(int freeSpace)</code> is triggered (see below).</p>
<h3 class="coral">Handling a Full Write Buffer</h3>
<p>When the client is writing too fast there is a high chance that the application writeBuffer will get full. When that happens, <code>write()</code> or <code>send()</code> will return false, indicating that the message was not written. Under the hood, CoralReactor will keep trying to flush the writeBuffer to free up space, through the OP_WRITE selector operation. When this operation succeeds, it will trigger the method <code>handleFreedWriteBufferSpace(int freeSpace)</code>. Therefore, a client can choose to queue messages and wait until this method is triggered to retry sending them. Other clients might choose to save the message to disk, drop the message or even disconnect. Another possibility is to increase the size of the writeBuffer and/or the underlying socket buffer. However, because these sizes cannot be infinite, there will always be the possibility that a client writes fast enough to fill the buffers.</p>
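<p>Below is a minimal sketch of the queue-and-retry approach, again written as if inside a CoralReactor client. The <code>pending</code> deque and the <code>trySend(...)</code> helper are illustrative and not part of the library:</p>
<pre class="brush: java; title: ; notranslate">
// Minimal sketch, assuming a client class that extends one of the CoralReactor client
// base classes; the pending deque and trySend() are illustrative, not part of the library.
private final java.util.ArrayDeque&lt;byte[]&gt; pending = new java.util.ArrayDeque&lt;byte[]&gt;();

private void trySend(byte[] msg) {
	
	// preserve ordering: if something is already queued, or the write fails, queue this message too
	if (!pending.isEmpty() || !send(msg)) {
		pending.addLast(msg);
	}
}

@Override
protected void handleFreedWriteBufferSpace(int freeSpace) {
	
	// space was freed in the writeBuffer: retry the queued messages in order
	boolean wroteSomething = false;
	
	while(!pending.isEmpty() &amp;&amp; write(pending.peekFirst())) {
		pending.pollFirst();
		wroteSomething = true;
	}
	
	if (wroteSomething) flush();
}
</pre>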
<h3 class="coral">Turning off disconnectOnFullWriteBuffer</h3>
<p>By default the client config option <code>disconnectOnFullWriteBuffer</code> is turned on. That means that if you ever call write() or send() and they return false, an alert message will be logged and the client will be disconnected. If you choose to handle socket lagging yourself as explained in this article, you should set this option to false in your client config.</p>
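<p>In code this is just one extra configuration entry, the same one the full example below sets in its <code>main</code> method:</p>
<pre class="brush: java; title: ; notranslate">
MapConfiguration config = new MapConfiguration();
config.add(&quot;disconnectOnFullWriteBuffer&quot;, false); // we will handle socket lagging ourselves
// ...then pass this config to your client constructor, as shown in the example below
</pre>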
<h3 class="coral">Source Code Example</h3>
<p>Below we have a simple throughput test that deliberately overflows the writeBuffer and uses <code>handleFreedWriteBufferSpace</code> to continue to push messages as space becomes available:</p>
<pre class="brush: java; highlight: [50,82]; title: ; notranslate">
package com.coralblocks.coralreactor.client.bench.throughput;

import com.coralblocks.coralbits.util.SystemUtils;
import com.coralblocks.coralreactor.client.AbstractLineTcpClient;
import com.coralblocks.coralreactor.client.Client;
import com.coralblocks.coralreactor.nio.NioReactor;
import com.coralblocks.coralreactor.util.Configuration;
import com.coralblocks.coralreactor.util.MapConfiguration;

public class ThroughputTcpBatchedClient extends AbstractLineTcpClient {
	
	// java -DmessagesToSend=5000000 -server -verbose:gc -Xbootclasspath/p:/home/coralblocks/workspace/CoralReactor-boot-jdk7/target/coralreactor-boot-jdk7.jar -cp target/coralreactor-all.jar:lib/jna-3.5.1.jar -DnioReactorProcToBind=3 com.coralblocks.coralreactor.client.bench.throughput.ThroughputTcpBatchedClient
	
	private final int messagesToSend;
	private final int messageSize;
	private final byte[] msg;
	
	private int msgCount;
	private long start;

	public ThroughputTcpBatchedClient(NioReactor nio, String host, int port, Configuration config) {
	    super(nio, host, port, config);
	    this.messagesToSend = config.getInt(&quot;messagesToSend&quot;);
	    this.messageSize = config.getInt(&quot;messageSize&quot;);
	    this.msg = new byte[messageSize];
		for(int i = 0; i &lt; msg.length; i++) {
			msg[i] = (byte) ('0' + (i % 10)); // fill the message with some data...
		}
    }
	
	@Override
	protected void handleConnectionOpened() {
		msgCount = 0;
		start = System.nanoTime();
		sendMessages();
	}
	
	@Override
	protected void handleFreedWriteBufferSpace(int freeSpace) {
		if (msgCount == messagesToSend &amp;&amp; freeSpace == writeBuffer.capacity()) {
			printResults();
			disconnect();
			return;
		}
		
		if (msgCount &lt; messagesToSend) sendMessages();
	}
	
	private void sendMessages() {
		while(write(msg)) { // &lt;=== this will overflow the writeBuffer
			if (++msgCount == messagesToSend) {
				flush();
				if (!isLagging()) {
					// we only want to finish the test when everything was flushed to the underlying socket buffer, in other words,
					// do not finish here if lagging (finish instead on handleFreedWriteBufferSpace)
					printResults();
					disconnect();
				}
				break;
			}
		}
	}
	
	private void printResults() {
		long totalTime = System.nanoTime() - start;
		long latency = totalTime / msgCount;
		long ops = msgCount * 1000000000L / totalTime;
		System.out.println(&quot;Done sending messages! messagesSent=&quot; 
			+ msgCount + &quot; avgLatencyPerMsg=&quot; + latency + &quot; nanos throughput=&quot; + ops + &quot; msgs/sec&quot;);
	}
	
	public static void main(String[] args) {
		
		NioReactor nio = NioReactor.create();
		
		int messagesToSend = SystemUtils.getInt(&quot;messagesToSend&quot;, 5000000);
		int messageSize = SystemUtils.getInt(&quot;messageSize&quot;, 256);
		
		MapConfiguration config = new MapConfiguration();
		config.add(&quot;messagesToSend&quot;, messagesToSend);
		config.add(&quot;messageSize&quot;, messageSize);
		config.add(&quot;disconnectOnFullWriteBuffer&quot;, false);
		
		Client client = new ThroughputTcpBatchedClient(nio, &quot;localhost&quot;, 45451, config);
		client.open();
		nio.start();
	}
}
</pre>
<h3 class="coral">Conclusion</h3>
<p>CoralReactor handles all the complexity of socket lagging for write operations under the hood for you. All you have to do is implement the method <code>handleFreedWriteBufferSpace(int)</code> to continue sending messages as space becomes available. The <code>write()</code> and <code>send()</code> methods return false to indicate that the write operation is lagging and the message could not be sent. For simplicity, CoralReactor does not write partial messages to the writeBuffer, allowing the application to queue the messages and resend them later when <code>handleFreedWriteBufferSpace</code> is triggered.</p>
]]></content:encoded>
			<wfw:commentRss>https://www.coralblocks.com/index.php/handling-socket-lagging-during-write-operations/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
