CoralFIX

CoralFIX is a full-fledged, ultra-low-latency, garbage-free FIX engine with a very intuitive API. Its FIX parser delivers a complete and ready-to-use FixMessage object in 480 nanos on average. Moreover, through its tight integration with CoralReactor, you can write FIX clients and servers that perform under 4.8 micros per FIX message (i.e. one-way loopback total time from client to server, including parser and network code). Other features include: support for all FIX versions; no need to worry about the FIX session-level protocol, which is managed transparently by the base class (sequences, heartbeats, retransmission, logon, etc.); async store for message persistence and retransmission; async audit logging; async sequence persistence; FIX timers for session management; and more.
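To make the parsing numbers above concrete, below is a minimal, library-agnostic sketch of the work a FIX parser must do before it can hand you a message object such as FixMessage: scan SOH-delimited tag=value pairs out of a byte stream. It does not use the CoralFIX API, and the BodyLength (9) and CheckSum (10) values in the sample message are placeholders rather than a valid message.

import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class FixWireExample {

    // A raw FIX message is a flat sequence of tag=value pairs separated by SOH (0x01).
    // '|' is used here for readability and swapped for SOH before parsing.
    // BodyLength (9) and CheckSum (10) carry placeholder values in this sample.
    private static final String RAW =
            "8=FIX.4.4|9=72|35=A|49=CLIENT|56=SERVER|34=1|52=20240101-00:00:00.000|98=0|108=30|10=000|";

    public static void main(String[] args) {
        byte[] bytes = RAW.replace('|', '\u0001').getBytes(StandardCharsets.US_ASCII);

        // Minimal tag=value scan: roughly the work any FIX parser performs before
        // it can expose the fields through a message object.
        Map<Integer, String> fields = new LinkedHashMap<>();
        int start = 0;
        for (int i = 0; i < bytes.length; i++) {
            if (bytes[i] == 0x01) { // SOH field delimiter
                String field = new String(bytes, start, i - start, StandardCharsets.US_ASCII);
                int eq = field.indexOf('=');
                fields.put(Integer.parseInt(field.substring(0, eq)), field.substring(eq + 1));
                start = i + 1;
            }
        }
        fields.forEach((tag, value) -> System.out.println(tag + " = " + value));
    }
}

A production engine naturally does much more than this toy loop (checksum and body-length validation, repeating groups, and doing it all without generating garbage), which is what the 480-nanosecond figure above refers to.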

FIX clients and servers with CoralFIX and CoralReactor

CoralFIX is fully integrated with CoralReactor so you can code your own ultra-low-latency, ultra-low-variance (i.e. no GC overhead) FIX network clients and servers. In this article we introduce the FixApplicationClient and the FixApplicationServer, which you can use to code a FIX connection while they take care of all the low-level FIX session details for you (logon, heartbeats, sequence reset, etc.). Continue reading
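As a quick reference, the snippet below lists the session-level (admin) message types that base classes like FixApplicationClient and FixApplicationServer handle on your behalf, so that your application code only ever deals with business messages. This is plain FIX protocol information, not CoralFIX API code.

public class FixSessionAdminMessages {

    // The FIX session layer that FixApplicationClient/FixApplicationServer manage
    // transparently is built out of these admin MsgTypes (tag 35). Application
    // code only ever deals with business messages such as NewOrderSingle (35=D)
    // or ExecutionReport (35=8).
    public static void main(String[] args) {
        String[][] admin = {
                { "A", "Logon          (sequence negotiation, heartbeat interval)" },
                { "0", "Heartbeat      (keep-alive, sent every HeartBtInt seconds)" },
                { "1", "TestRequest    (forces a Heartbeat from the counterparty)" },
                { "2", "ResendRequest  (asks for retransmission of a sequence range)" },
                { "3", "Reject         (session-level reject of a malformed message)" },
                { "4", "SequenceReset  (gap fill / reset of sequence numbers)" },
                { "5", "Logout         (orderly session termination)" }
        };
        for (String[] m : admin) {
            System.out.printf("35=%s -> %s%n", m[0], m[1]);
        }
    }
}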

Getting Started with CoralFIX

This article shows the basics of the CoralFIX API: adding and retrieving values from a FixMessage, creating new FIX tags and working with repeating groups. You can also get started quickly by using our SimpleFixApplicationServer and SimpleFixApplicationClient as described here. Continue reading
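For context on repeating groups, here is a library-agnostic illustration of what one looks like on the wire (the CoralFIX calls for building it are covered in the article itself): a counter tag announces how many times the group repeats, and every repetition must start with the same delimiter tag.

public class RepeatingGroupExample {

    // Library-agnostic illustration of a FIX repeating group: a counter tag
    // (268 = NoMDEntries) announces how many times the group repeats, and every
    // repetition must start with the same delimiter tag (269 = MDEntryType).
    public static void main(String[] args) {
        final char SOH = '\u0001';
        StringBuilder body = new StringBuilder();
        body.append("35=W").append(SOH)        // MarketDataSnapshotFullRefresh
            .append("55=EURUSD").append(SOH)   // Symbol
            .append("268=2").append(SOH)       // NoMDEntries: the group repeats twice
            .append("269=0").append(SOH)       // entry 1: bid (delimiter tag opens the entry)
            .append("270=1.0850").append(SOH)  // MDEntryPx
            .append("271=1000000").append(SOH) // MDEntrySize
            .append("269=1").append(SOH)       // entry 2: offer
            .append("270=1.0852").append(SOH)
            .append("271=1500000").append(SOH);
        System.out.println(body.toString().replace(SOH, '|')); // '|' shown instead of SOH
    }
}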

CoralFIX Parser Performance Numbers

In this article we present the performance numbers of CoralFIX when it comes to parsing messages. We measure the time it takes to generate a ByteBuffer from a FixMessage and the time it takes to generate a FixMessage from a ByteBuffer. For that we use three FIX messages: a simple one without repeating groups, one with a repeating group and lastly one with repeating groups inside repeating groups. Continue reading
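The benchmark source accompanies the article; the sketch below only illustrates the measurement methodology in generic Java: warm up the hot path first, time each iteration with System.nanoTime() and average only the measured iterations. The parse() call is a placeholder for the FixMessage-to-ByteBuffer (or ByteBuffer-to-FixMessage) work being timed, not a CoralFIX method.

public class NanoBench {

    // Generic measurement harness in the spirit of the benchmark above: warm up
    // first so the JIT compiles the hot path, time each iteration with
    // System.nanoTime() and average only the measured iterations.
    public static void main(String[] args) {
        final int warmup = 1_000_000;
        final int measured = 1_000_000;

        long total = 0;
        for (int i = 0; i < warmup + measured; i++) {
            long start = System.nanoTime();
            parse();                           // placeholder workload (see note above)
            long elapsed = System.nanoTime() - start;
            if (i >= warmup) total += elapsed; // discard warm-up iterations
        }
        System.out.printf("average: %d nanos%n", total / measured);
    }

    private static void parse() {
        // stand-in for the encode/decode work being measured
    }
}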

Ultra low-latency with CoralFIX and CoralReactor

Please note that this benchmark sends real ExecutionReport FIX messages. It simulates the very real trading scenario where your order is filled and your strategy needs to react as soon as possible. Furthermore, our clients run our benchmarks in their own environments. We provide all the source code of our benchmarks so clients can modify and adapt them in order to reach their own conclusions. They also run their own, very specific benchmarks and tests, with similar results. This benchmark not only measures the FIX parser performance but also, and most importantly, the network I/O performance. If you want to check our tick-to-trade latencies you can click here.

To test the performance of CoralFIX + CoralReactor we have developed a simple test with a FIX server and a FIX client. The client connects to the server, the standard FIX handshake through LOGON messages happens and the server proceeds to send 2 million FIX messages to the client. So we are measuring one-way latency from FIX server to FIX client over loopback. For a throughput benchmark instead, check this article. Note that the server only sends the next message to the client when it gets a message back from it, so we can correctly measure the time it takes for each message in isolation to travel from the server to the client. We then compute the average latency, which includes the full CoralFIX parsing time (i.e. encoding and decoding of the FIX message) plus the full CoralReactor TCP network I/O time. Below we present the latency results and the source code. Continue reading
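The actual benchmark source is presented in the article; the sketch below is only a bare java.nio illustration of the ping-pong methodology under the same ground rules: a message is sent only after the previous one has come back, TCP_NODELAY is enabled on both sides, and the average is taken over all iterations. For brevity it times the full round trip and lets the client initiate, whereas the article's benchmark isolates the server-to-client leg and uses real FIX messages.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class PingPongLatency {

    static final int MESSAGES = 100_000;
    static final int SIZE = 128; // stand-in for an encoded FIX message

    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        Thread echo = new Thread(() -> {
            try (SocketChannel ch = server.accept()) {
                ch.setOption(java.net.StandardSocketOptions.TCP_NODELAY, true);
                ByteBuffer buf = ByteBuffer.allocateDirect(SIZE);
                for (int i = 0; i < MESSAGES; i++) {
                    buf.clear();
                    while (buf.hasRemaining()) ch.read(buf);   // receive one "message"
                    buf.flip();
                    while (buf.hasRemaining()) ch.write(buf);  // echo it straight back
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
        });
        echo.start();

        try (SocketChannel ch = SocketChannel.open(new InetSocketAddress("127.0.0.1", port))) {
            ch.setOption(java.net.StandardSocketOptions.TCP_NODELAY, true);
            ByteBuffer buf = ByteBuffer.allocateDirect(SIZE);
            long total = 0;
            for (int i = 0; i < MESSAGES; i++) {
                buf.clear();
                long start = System.nanoTime();
                while (buf.hasRemaining()) ch.write(buf);      // send
                buf.clear();
                while (buf.hasRemaining()) ch.read(buf);       // wait for the echo
                total += System.nanoTime() - start;            // full round trip
            }
            System.out.printf("average round trip: %d nanos%n", total / MESSAGES);
        }
        echo.join();
    }
}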

Blazing Fast Throughput with CoralFIX + CoralReactor

To test the performance of CoralFIX + CoralReactor we have developed a simple test with a FIX server and a FIX client. The client connects to the server, the standard FIX handshake through LOGON messages takes place and the client proceeds to send (i.e. push) 5 million FIX messages to the server as fast as it can. The server then receives and processes all the messages, calculating the throughput. So we are measuring the one-way throughput over loopback. For a latency benchmark instead, check this article. Below we present the throughput results and the source code. Continue reading
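For reference, the arithmetic behind the throughput number is simply the message count divided by the time between the first and the last message received, as the toy calculation below shows. The timestamps are placeholders, not measured results.

public class ThroughputMath {

    // The receiving side records the time of the first and last message and
    // divides the message count by the elapsed time. All numbers here are
    // placeholders for illustration only.
    public static void main(String[] args) {
        long messages = 5_000_000L;          // messages pushed by the client
        long firstNanoTime = 0L;             // System.nanoTime() at first message (placeholder)
        long lastNanoTime = 2_500_000_000L;  // System.nanoTime() at last message (placeholder)

        double elapsedSeconds = (lastNanoTime - firstNanoTime) / 1_000_000_000.0;
        double msgsPerSecond = messages / elapsedSeconds;
        System.out.printf("throughput: %,.0f messages per second%n", msgsPerSecond);
    }
}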