
Monday, December 16, 2013

Introduction to Akka Persistence

Akka Persistence is a new module in Akka 2.3. At the time of writing this post, it is available as a milestone release (2.3-M2). Akka Persistence adds actor state persistence and at-least-once message delivery semantics to Akka. It is inspired by, and the successor of, the eventsourced project. The two share many high-level concepts but differ completely at the API and implementation level.

To persist an actor's state, only changes to that state are written to a journal, never the current state directly. These changes are appended as immutable facts to a journal; nothing is ever mutated, which allows for very high transaction rates and efficient replication. Actor state can be recovered by replaying stored changes and projecting them again. This allows state recovery not only after an actor has been restarted by a supervisor but also after JVM or node crashes, for example. State changes are defined in terms of the messages an actor receives (or generates).

Persistence of messages also forms the basis for supporting at-least-once message delivery semantics. This requires retries to counter transport losses, which means keeping state at the sending end and having an acknowledgement mechanism at the receiving end (see Message Delivery Guarantees in Akka). Akka Persistence supports this for point-to-point communications. Reliable point-to-point communications are an important part of highly scalable applications (see also Pat Helland's position paper Life beyond Distributed Transactions).

The following gives a high-level overview of the current features in Akka Persistence. Links to more detailed documentation are included.

Processors

Processors are persistent actors. They internally communicate with a journal to persist messages they receive or generate. They may also request message replay from a journal to recover internal state in failure cases. Processors may persist messages either
  • before an actor's behavior is executed (command sourcing) or
  • while an actor's behavior is executed (event sourcing)
Command sourcing is comparable to using a write-ahead log. Messages are persisted before it is known whether they can be successfully processed or not. In failure cases, they can be (logically) removed from the journal so that they won't be replayed during the next recovery. During recovery, command sourced processors show the same behavior as during normal operation. They can achieve high throughput by dynamically increasing the size of write batches under high load.
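
A minimal command sourced processor might look like this (a sketch based on the Processor and Persistent API of the 2.3 milestones; details may differ between milestone releases):

import akka.actor.{ActorSystem, Props}
import akka.persistence.{Persistent, Processor}

class CounterProcessor extends Processor {
  var counter = 0 // rebuilt from the journal during recovery

  def receive = {
    case Persistent(payload, sequenceNr) =>
      // the message is already journaled when this code runs
      counter += 1
    case "state" =>
      sender ! counter // not journaled
  }
}

object CommandSourcingExample extends App {
  val system = ActorSystem("example")
  val processor = system.actorOf(Props[CounterProcessor], "counter")
  processor ! Persistent("increment") // journaled, replayed on recovery
}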

Event sourced processors do not persist commands. Instead, they allow application code to derive events from a command and persist these events atomically. After being persisted, the events are applied to current state. During recovery, events are replayed and only the state-changing behavior of an event sourced processor is executed again. Other side effects that were executed during normal operation are not performed again.
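
An event sourced counterpart could look as follows (a sketch assuming the EventsourcedProcessor API; method names shifted slightly between milestones):

import akka.persistence.EventsourcedProcessor

case class CounterChanged(delta: Int)

class CounterEsProcessor extends EventsourcedProcessor {
  var counter = 0

  def updateState(evt: CounterChanged): Unit =
    counter += evt.delta

  // normal operation: derive an event from the command and persist it;
  // the handler runs (and side effects execute) after a successful write
  def receiveCommand: Receive = {
    case "increment" => persist(CounterChanged(1))(updateState)
  }

  // recovery: only the state-changing behavior is executed again
  def receiveRecover: Receive = {
    case evt: CounterChanged => updateState(evt)
  }
}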

Processors automatically recover themselves. Akka Persistence guarantees that new messages sent to a processor never interleave with replayed messages. New messages are internally buffered until recovery completes, hence, an application may send messages to a processor immediately after creating it.

Snapshots

The recovery time of a processor increases with the number of messages that have been written by that processor. To reduce recovery time, applications may take snapshots of processor state which can be used as starting points for message replay. Usage of snapshots is optional and only needed for optimization.
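
A sketch of snapshot usage, based on the saveSnapshot method and SnapshotOffer message of the 2.3 milestones:

import akka.persistence.{Persistent, Processor, SnapshotOffer}

class SnapshottingProcessor extends Processor {
  var state: List[Any] = Nil

  def receive = {
    case Persistent(payload, _) =>
      state = payload :: state
    case "snap" =>
      saveSnapshot(state) // later recoveries replay only newer messages
    case SnapshotOffer(metadata, snapshot) =>
      // offered at the beginning of recovery, before any replayed messages
      state = snapshot.asInstanceOf[List[Any]]
  }
}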

Channels

Channels are actors that provide at-least-once message delivery semantics between a sending processor and a receiver that acknowledges the receipt of messages at the application level. They also ensure that successfully acknowledged messages are not delivered again to receivers during processor recovery (i.e. replay of messages). Applications that want reliable message delivery without an application-defined sending processor should use persistent channels. A persistent channel is like a normal channel that additionally persists messages before sending them to a receiver.
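
A rough sketch of channel usage; the exact shapes of Deliver and the confirmable message type changed across the 2.3 milestones, so treat these names as approximate:

import akka.actor.{Actor, ActorRef, Props}
import akka.persistence._

class SendingProcessor(destination: ActorRef) extends Processor {
  val channel = context.actorOf(Channel.props(), "channel")

  def receive = {
    case p: Persistent =>
      // the channel drops messages that were already confirmed
      channel ! Deliver(p, destination.path)
  }
}

class Destination extends Actor {
  def receive = {
    case p: ConfirmablePersistent =>
      p.confirm() // application-level acknowledgement
  }
}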

Journals

Journals (and snapshot stores) are pluggable in Akka Persistence. The default journal plugin is backed by LevelDB which writes messages to the local filesystem. A replicated journal is planned but not yet part of the distribution. Replicated journals allow stateful actors to be migrated in a cluster, for example. For testing purposes, a remotely shared LevelDB journal can be used instead of a replicated journal to experiment with stateful actor migration. Application code doesn't need to change when switching to a replicated journal later.

Updates:
  • Views have been added after 2.3-M2.
  • A replicated journal backed by Apache Cassandra is available here.  
  • A complete list of community-contributed plugins is maintained here.

Wednesday, March 20, 2013

Eventsourced for Akka - A high-level technical overview

Eventsourced is an Akka extension that adds scalable actor state persistence and at-least-once message delivery guarantees to Akka. With Eventsourced, stateful actors
  • persist received messages by appending them to a log (journal)
  • project received messages to derive current state
  • usually hold current state in memory (memory image)
  • recover current (or past) state by replaying received messages (during normal application start or after crashes)
  • never persist current state directly (except optional state snapshots for recovery time optimization)
In other words, Eventsourced implements a write-ahead log (WAL) that is used to keep track of messages an actor receives and to recover its state by replaying logged messages. Appending messages to a log instead of persisting actor state directly allows for actor state persistence at very high transaction rates and supports efficient replication. In contrast to other WAL-based systems, Eventsourced usually keeps the whole message history in the log and makes usage of state snapshots optional.

Logged messages represent intended changes to an actor's state. Logging changes instead of updating current state is one of the core concepts of event sourcing. Eventsourced can be used to implement event sourcing concepts but it is not limited to that. More details about Eventsourced and its relation to event sourcing can be found here.

Eventsourced can also be used to make message exchanges between actors reliable so that they can be resumed after crashes, for example. For that purpose, channels with at-least-once message delivery guarantees are provided. Channels also prevent output messages sent by persistent actors from being delivered redundantly during replays, which is relevant for message exchanges between these actors and other services.

Building blocks

The core building blocks provided by Eventsourced are processors, channels and journals. These are managed by an Akka extension, the EventsourcingExtension.

Processor

A processor is a stateful actor that logs (persists) messages it receives. A stateful actor is turned into a processor by modifying it with the stackable Eventsourced trait during construction. A processor can be used like any other actor.

Messages wrapped inside Message are logged by a processor; unwrapped messages are not logged. Logging behavior is implemented by the Eventsourced trait, so a processor's receive method doesn't need to care about it. Acknowledging a successful write to a sender can be done by sending a reply. A processor can also hot-swap its behavior while keeping its logging functionality.

Processors are registered at an EventsourcingExtension. This extension provides methods to recover processor state by replaying logged messages. Processors can be registered and recovered at any time during an application run.
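
Putting these pieces together, roughly following the project's first-steps guide (journal construction and package names vary between versions):

import java.io.File

import akka.actor._
import org.eligosource.eventsourced.core._
import org.eligosource.eventsourced.journal.leveldb.LeveldbJournalProps

class MyProcessor extends Actor {
  var counter = 0

  def receive = {
    case msg: Message =>
      // msg has already been written to the journal at this point
      counter += 1
      sender ! ("processed message #%d" format counter)
  }
}

object FirstSteps extends App {
  implicit val system = ActorSystem("example")

  val journal = Journal(LeveldbJournalProps(new File("target/journal")))
  val extension = EventsourcingExtension(system, journal)

  // modify the actor with the stackable Eventsourced trait during construction
  val processor = extension.processorOf(
    Props(new MyProcessor with Eventsourced { val id = 1 }))

  extension.recover() // replay logged messages

  processor ! Message("foo") // logged
  processor ! "bar"          // not logged
}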

Eventsourced doesn't impose any restrictions on how processors maintain state. A processor can use vars, mutable data structures or STM references, for example.

Channel

Channels are used by processors for sending messages to other actors (channel destinations) and receiving replies from them. Channels
  • require their destinations to confirm the receipt of messages, providing at-least-once delivery guarantees (explicit ack-retry protocol). Receipt confirmations are written to a log.
  • prevent redundant delivery of messages to destinations during processor recovery (replay of messages). Replayed messages with matching receipt confirmations are dropped by the corresponding channels.
A channel itself is an actor that decorates a destination with the aforementioned functionality. Processors usually create channels as child actors for decorating destination actor references.
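
A sketch of a processor sending messages through a channel, and of a destination confirming receipts (names follow the project's user guide):

import akka.actor._
import org.eligosource.eventsourced.core._

class MyProcessor(channel: ActorRef) extends Actor {
  def receive = {
    case msg: Message =>
      // replayed messages with matching confirmations are dropped by the channel
      channel ! msg.copy(event = "processed: %s" format msg.event)
  }
}

class Destination extends Actor {
  def receive = {
    case msg: Message =>
      // ... process msg.event, then confirm the receipt so that the
      // message is not delivered again during the next replay
      msg.confirm()
  }
}

// channel creation and registration (sketch):
// val destination = system.actorOf(Props[Destination])
// val channel = extension.channelOf(DefaultChannelProps(1, destination))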

A processor may also send messages directly to another actor without using a channel. In this case that actor will receive messages redundantly during processor recovery.

Eventsourced provides three different channel types (more are planned).
  • Default channel
    • Does not store received messages.
    • Re-delivers unconfirmed messages only during recovery of the sending processor.
    • Order of messages as sent by a processor is not preserved in failure cases.
  • Reliable channel
    • Stores received messages.
    • Re-delivers unconfirmed messages based on a configurable re-delivery policy.
    • Order of messages as sent by a processor is preserved, even in failure cases.
    • Often used to deal with unreliable remote destinations.
  • Reliable request-reply channel
    • Same as reliable channel but additionally guarantees at-least-once delivery of replies.
    • Order of replies not guaranteed to correspond to the order of sent request messages.
Eventsourced channels are not meant to replace any existing messaging system but can be used, for example, to reliably connect processors to such a system, if needed. More generally, they are useful to integrate processors with other services, as described in another blog post.

Journal

A journal is an actor that is used by processors and channels to log messages and receipt confirmations. The quality of service (availability, scalability, ...) provided by a journal depends on the used storage technology. The Journals section in the user guide gives an overview of existing journal implementations and their development status.

Thursday, January 31, 2013

Event sourcing and external service integration

A frequently asked question when building event sourced applications is how to interact with external services. This topic is covered to some extent by Martin Fowler's Event Sourcing article in the sections External Queries and External Updates. In this blog post I'll show how to approach external service integration with the Eventsourced library for Akka. If you are new to this library, an overview is given in the user guide sections Overview and First steps.
The example application presented here was inspired by Fowler's LMAX article where he describes how event sourcing differs from an alternative transaction processing approach:

Imagine you are making an order for jelly beans by credit card. A simple retailing system would take your order information, use a credit card validation service to check your credit card number, and then confirm your order - all within a single operation. The thread processing your order would block while waiting for the credit card to be checked, but that block wouldn't be very long for the user, and the server can always run another thread on the processor while it's waiting.
In the LMAX architecture, you would split this operation into two. The first operation would capture the order information and finish by outputting an event (credit card validation requested) to the credit card company. The Business Logic Processor would then carry on processing events for other customers until it received a credit-card-validated event in its input event stream. On processing that event it would carry out the confirmation tasks for that order.
Although Fowler mentions the LMAX architecture, we don't use the Disruptor here for implementation. Its role is taken by an Akka dispatcher in the following example. Nevertheless, the described architecture and message flow remain the same:

The two components in the high-level architecture are:

  • OrderProcessor. An event sourced actor that maintains received orders and their validation state in memory. The OrderProcessor writes any received event message to an event log (journal) so that its in-memory state can be recovered by replaying these events, e.g. after a crash or during normal application start. This actor corresponds to the Business Logic Processor in Fowler's example.
  • CreditCardValidator. A plain remote, stateless actor that validates credit card information of submitted orders on receiving CreditCardValidationRequested events. Depending on the validation outcome it replies with CreditCardValidated or CreditCardValidationFailed event messages to the OrderProcessor.
The example application must meet the following requirements and conditions:

  • The OrderProcessor and the CreditCardValidator must communicate remotely so that they can be deployed separately. The CreditCardValidator is an external service from the OrderProcessor's perspective.
  • The example application must be able to recover from JVM crashes and remote communication errors. OrderProcessor state must be recoverable from logged event messages and running credit card validations must be automatically resumed after crashes. To overcome temporary network problems and remote actor downtimes, remote communication must be retried. Long-lasting errors must be escalated.
  • Event message replay during recovery must not redundantly emit validation requests to the CreditCardValidator and validation responses must be recorded in the event log (to solve the external queries problem). This will recover processor state in a deterministic way, making repeated recoveries independent from otherwise potentially different validation responses over time for the same validation request (a credit card may expire, for example).
  • Message processing must be idempotent. This requirement is a consequence of the at-least-once message delivery guarantee supported by Eventsourced.
The full example application code that meets these requirements is part of the Eventsourced project and can be executed with sbt.
The CreditCardValidator can be started with:
> project eventsourced-examples
> run-main org.eligosource.eventsourced.example.CreditCardValidator

The application that runs the OrderProcessor and sends OrderSubmitted events can be started with
> project eventsourced-examples
> run-main org.eligosource.eventsourced.example.OrderProcessor

The example application defines an oversimplified domain class Order together with a set of domain events. Their shapes are roughly the following (a reconstruction for illustration; see the project sources for the actual definitions):
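
// reconstructed for illustration; the actual definitions differ in detail
sealed trait Validation
case object Pending extends Validation
case object Success extends Validation
case object Failure extends Validation

case class Order(id: Int = -1, details: String,
                 creditCardNumber: String,
                 creditCardValidation: Validation = Pending)

// domain events
case class OrderSubmitted(order: Order)
case class OrderStored(order: Order)
case class CreditCardValidationRequested(order: Order)
case class CreditCardValidated(orderId: Int)
case class CreditCardValidationFailed(orderId: Int)
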
Whenever the OrderProcessor receives a domain event it appends that event to the event log (journal) before processing it. To add event logging behavior to an actor it must be modified with the stackable Eventsourced trait during construction.
Eventsourced actors only write messages of type Message to the event log (together with the contained event). Messages of other types can be received by an Eventsourced actor as well but aren't logged. The Receiver trait allows the OrderProcessor's receive method to pattern-match against received events directly (instead of Message). It is not required for implementing an event sourced actor but can help to keep implementations simple.
On receiving an OrderSubmitted event, the OrderProcessor extracts the contained order object from the event, updates the order with an order id and stores it in the orders map. The orders map represents the current state of the OrderProcessor (which can be recovered by replaying logged event messages).
After updating the orders map, the OrderProcessor replies to the sender of an OrderSubmitted event with an OrderStored event. This event is a business-level acknowledgement that the received OrderSubmitted event has been successfully written to the event log. Finally, the OrderProcessor emits a CreditCardValidationRequested event message to the CreditCardValidator via a reliable request-reply channel (see below). The emitted message is derived from the current event message, which can be accessed via the message method of the Receiver trait. Alternatively, the OrderProcessor could also have used an emitter for sending the event (see also channel usage hints).
A reliable request-reply channel is a pattern on top of a reliable channel with the following properties: it

  • persists request Messages for failure recovery and preserves message order.
  • extracts requests from received Messages before sending them to a destination.
  • wraps replies from a destination into a Message before sending them back to the request sender.
  • sends a special DestinationNotResponding reply to the request sender if the destination doesn't reply within a configurable timeout.
  • sends a special DestinationFailure reply to the request sender if the destination responds with Status.Failure.
  • guarantees at-least-once delivery of requests to the destination.
  • guarantees at-least-once delivery of replies to the request sender.
  • requires a positive receipt confirmation for a reply to mark a request-reply interaction as successfully completed.
  • redelivers requests, and subsequently replies, on missing or negative receipt confirmations.
  • sends a DeliveryStopped event to the actor system's event stream if the maximum number of delivery attempts has been reached (according to the channel's redelivery policy).
A reliable request-reply channel offers all the properties we need to reliably communicate with the remote CreditCardValidator. The channel is created as child actor of the OrderProcessor when the OrderProcessor receives a SetCreditCardValidator message.
The channel is created with the channelOf method of the actor system's EventsourcingExtension and configured with a ReliableRequestReplyChannelProps object. Configuration data are the channel destination (validator), a redelivery policy and a destination reply timeout. When sending validation requests via the created validationRequestChannel, the OrderProcessor must be prepared for receiving CreditCardValidated, CreditCardValidationFailed, DestinationNotResponding or DestinationFailure replies. These replies are sent to the OrderProcessor inside a Message and are therefore written to the event log. Consequently, OrderProcessor recoveries in the future will replay past reply messages instead of obtaining them again from the validator which ensures deterministic state recovery. Furthermore, the validationRequestChannel will ignore validation requests it receives during a replay, except those whose corresponding replies have not been positively confirmed yet. The following snippet shows how replies are processed by the OrderProcessor.
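
A sketch of that setup; the props builder calls below are assumptions inferred from this description, not verbatim API:

import scala.concurrent.duration._

// inside the OrderProcessor, on receiving SetCreditCardValidator(validator);
// the builder method names are assumptions, not verbatim API
val validationRequestChannel = extension.channelOf(
  ReliableRequestReplyChannelProps(1, validator)
    .withRedeliveryMax(3)
    .withRedeliveryDelay(10.seconds)
    .withReplyTimeout(5.seconds))(context)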

  • A CreditCardValidated reply updates the creditCardValidation status of the corresponding order to Success and stores the updated order in the orders map. Further actions, such as notifying others that an order has been accepted, are omitted here but are part of the full example code. Then, the receipt of the reply is positively confirmed (confirm(true)) so that the channel doesn't redeliver the corresponding validation request.
  • A CreditCardValidationFailed reply updates the creditCardValidation status of the corresponding order to Failure and stores the updated order in the orders map. Again, further actions are omitted here and the receipt of the reply is positively confirmed.
Because the validationRequestChannel delivers messages at-least-once, we need to detect duplicates to make reply processing idempotent. Here, we simply require that the order object to be updated has a Pending creditCardValidation status before changing state (and notifying others). If the order's status is not Pending, the order has already been updated by a previous reply and the current reply is a duplicate. In this case, the methods onValidationSuccess and onValidationFailure don't have any effect (orders.get(orderId).filter(_.creditCardValidation == Pending) is None). The receipt of the duplicate is still positively confirmed. More general guidelines on how to detect duplicates are outlined here.
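
A simplified reconstruction of the reply handling and duplicate detection described above:

// sketch; assumes the orders map and the Receiver trait's confirm method
def onValidationSuccess(orderId: Int): Unit =
  orders.get(orderId).filter(_.creditCardValidation == Pending).foreach { order =>
    orders = orders + (orderId -> order.copy(creditCardValidation = Success))
    // further actions, such as notifying others, omitted
  }

def onValidationFailure(orderId: Int): Unit =
  orders.get(orderId).filter(_.creditCardValidation == Pending).foreach { order =>
    orders = orders + (orderId -> order.copy(creditCardValidation = Failure))
  }

// handled in the OrderProcessor's receive:
def handleValidationReply: Receive = {
  case CreditCardValidated(orderId) =>
    onValidationSuccess(orderId)
    confirm(true) // positive confirmation, no redelivery of the request
  case CreditCardValidationFailed(orderId) =>
    onValidationFailure(orderId)
    confirm(true) // duplicates are positively confirmed as well
}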

  • A DestinationNotResponding reply is always confirmed negatively (confirm(false)) so that the channel redelivers the validation request to the CreditCardValidator. This may help to overcome temporary network problems, for example, but doesn't handle the case where the maximum number of redeliveries has been reached (see below).
  • A DestinationFailure reply will be negatively confirmed by default unless it has been delivered more than twice. This may help to overcome temporary CreditCardValidator failures, i.e. cases where a Status.Failure is returned by the validator.
Should the CreditCardValidator be unavailable for a longer time and the validationRequestChannel reach the maximum number of redeliveries, the channel will stop message delivery and publish a DeliveryStopped event to the actor system's event stream. The channel still continues to accept new event messages and persists them, so that the OrderProcessor can continue receiving OrderSubmitted events, but the interaction with the CreditCardValidator is suspended. It is now up to the application to re-activate message delivery.
Subscribing to DeliveryStopped events allows an application to escalate a persistent network problem or CreditCardValidator outage by alerting a system administrator or switching to another credit card validation service, for example. In our case, a simple re-activation of the validationRequestChannel is scheduled.
The OrderProcessor subscribes itself to the actor system's event stream. On receiving a DeliveryStopped event it schedules a re-activation of the validationRequestChannel by sending it a Deliver message.
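
A sketch of that re-activation; DeliveryStopped and Deliver are the message names mentioned above, the scheduling details are illustrative:

import scala.concurrent.duration._

// in the OrderProcessor's constructor:
context.system.eventStream.subscribe(self, classOf[DeliveryStopped])

// handled in the OrderProcessor's receive:
def handleDeliveryStopped: Receive = {
  case stopped: DeliveryStopped =>
    // re-activate the channel after a delay instead of giving up
    context.system.scheduler.scheduleOnce(
      5.seconds, validationRequestChannel, Deliver)(context.dispatcher)
}
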
This finally meets all the requirements stated above but there's a lot more to say about external service integration. Examples are external updates or usage of channels that don't preserve message order for optimizing concurrency and throughput. I also didn't cover processor-specific, non-blocking recovery as implemented by the example application. This is enough food for another blog post.

Monday, February 28, 2011

Akka Producer Actors: New Features and Best Practices

In a previous post I wrote about new features and best practices for Akka consumer actors. In this post, I'll cover Akka producer actors. For the following examples to compile and run, you'll need the current Akka 1.1-SNAPSHOT.

Again, I assume that you already have a basic familiarity with Akka, Apache Camel and the akka-camel integration module. If you are new to it, you may want to read the Akka and Camel chapter (free pdf) of the Camel in Action book or the Introduction section of the official akka-camel documentation first.

Basic usage


Akka producer actors can send messages to any Camel endpoint, provided that the corresponding Camel component is on the classpath. This allows Akka actors to interact with external systems or other components over a large number of protocols and APIs.

Let's start with a simple producer actor that sends all messages it receives to an external HTTP service and returns the response to the initial sender. For sending messages over HTTP we can use the Camel jetty component which features an asynchronous HTTP client.
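
The producer itself can be as small as this (a sketch; the package names assume Akka 1.1, and the target URL is illustrative):

import akka.actor.Actor
import akka.camel.Producer

class HttpProducer extends Actor with Producer {
  def endpointUri = "jetty:http://localhost:8877/camel/test"
}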


Concrete producer actors inherit a default implementation of Actor.receive from the Producer trait. For simple use cases, only an endpoint URI must be defined. Producer actors also require a started CamelContextManager to work properly. A CamelContextManager is started when an application starts a CamelService, e.g. via CamelServiceManager.startCamelService, or when starting the CamelContextManager directly, roughly like this (a sketch assuming the Akka 1.1 API):
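
import akka.camel.CamelContextManager

// init creates a default CamelContext, start starts it
CamelContextManager.init
CamelContextManager.start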


The latter approach is recommended when an application uses only producer actors but no consumer actors. This slightly reduces the overhead when starting actors. After starting the producer actor, clients can interact with the HTTP service via the actor API.
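
For example (a sketch, assuming Akka 1.1 naming):

import akka.actor.Actor.actorOf
import akka.camel.Message

val httpProducer = actorOf[HttpProducer].start

// send a message and wait for the response (a Message or a Failure)
val response = httpProducer !! Message("test")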

Here, !! is used for sending the message and waiting for a response. Alternatively, one can also use ! together with an implicit sender reference.


In this case the sender will receive an asynchronous reply from the producer actor. Beforehand, the producer actor itself receives an asynchronous reply from the jetty endpoint. The asynchronous jetty endpoint doesn't block a thread while waiting for a response, and neither does the producer actor. This is important from a scalability perspective, especially for longer-running request-response cycles.

By default, a producer actor initiates an in-out message exchange with its Camel endpoint i.e. it expects a response from it. If a producer actor wants to initiate an in-only message exchange then it must override the oneway method to return true. The following example shows a producer actor that initiates an in-only message exchange with a JMS endpoint.
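
A sketch of such a producer (queue name as used in the text below):

import akka.actor.Actor
import akka.camel.Producer

class JmsProducer extends Actor with Producer {
  def endpointUri = "jms:queue:test"
  override def oneway = true // initiate in-only message exchanges
}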


This actor adds any message it receives to the test JMS queue. By default, producer actors that are configured with oneway = true don't reply. This behavior is defined in the Producer.onReceiveAfterProduce method, which is implemented roughly as follows (a reconstruction; the actual source may differ in detail):
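
// reconstruction of the default behavior, not verbatim source:
// reply with the exchange result for in-out exchanges only
protected def onReceiveAfterProduce: Receive = {
  case msg => if (!oneway) self.reply(msg)
}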


The onReceiveAfterProduce method has the same signature as Actor.receive and is called with the result of the message exchange with the endpoint (note that in-only message exchanges with Camel endpoints have a result as well). The result type for successful message exchanges is Message; for failed message exchanges it is Failure (see below).

Concrete producer actors can override this method. For example, the following producer actor overrides onReceiveAfterProduce to reply with a constant "done" message.
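
A sketch of such an override:

import akka.actor.Actor
import akka.camel.Producer

class JmsReplyingProducer extends Actor with Producer {
  def endpointUri = "jms:queue:test"
  override def oneway = true

  override protected def onReceiveAfterProduce: Receive = {
    case _ => self.reply("done") // ignore the exchange result
  }
}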


The result of the message exchange with the JMS endpoint is ignored (case _).

Failures


Message exchanges with a Camel endpoint can fail. In this case, onReceiveAfterProduce is called with a Failure message containing the cause of the failure (a Throwable). Let's extend the HttpProducer usage example to deal with failure responses.
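
For example (a sketch):

import akka.camel.{Failure, Message}

(httpProducer !! Message("test")) match {
  case Some(m: Message) => println("response: %s" format m.bodyAs[String])
  case Some(f: Failure) => println("failure: %s" format f.cause.getMessage)
  case None             => println("timeout")
}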


In addition to a failure cause, a Failure message can also contain endpoint-specific headers with failure details such as the HTTP response code, for example. When using ! instead of !!, together with an implicit sender reference (as shown in the previous section), that sender will then receive the Failure message asynchronously. The JmsReplyingProducer example can also be extended to return more meaningful responses: a "done" message only on success and an error message on failure.
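
A sketch of that extension:

override protected def onReceiveAfterProduce: Receive = {
  case m: Message => self.reply("done")
  case f: Failure => self.reply("error: %s" format f.cause.getMessage)
}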


Failed message exchanges never cause the producer actor to throw an exception during execution of receive. Should Producer implementations want to throw an exception on failure (for whatever reason) they can do so in onReceiveAfterProduce.
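
For example (sketch):

override protected def onReceiveAfterProduce: Receive = {
  case msg: Message => self.reply(msg)
  case f: Failure   => throw f.cause
}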


In this case failure handling should be done in combination with a supervisor (see below).

Let's look at another example. What if we want the producer to throw an exception on failure (instead of returning a Failure message) but to respond with a normal Message on success? In this case, we need to use self.senderFuture inside onReceiveAfterProduce and complete it with an exception.
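
A sketch of that approach; treat the exact completion call as an assumption about the Akka 1.1 future API:

override protected def onReceiveAfterProduce: Receive = {
  case msg: Message =>
    self.reply(msg) // normal response on success
  case f: Failure =>
    // complete the sender's future with an exception instead of replying
    self.senderFuture.foreach(_.completeWithException(f.cause))
}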



Forwarding results


Another option to deal with message exchange results inside onReceiveAfterProduce is to forward them to another actor. Forwarding a message also forwards the initial sender reference. This allows the receiving actor to reply to the initial sender.
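
For example (a sketch):

import akka.actor.{Actor, ActorRef}
import akka.camel.Producer

class ForwardingProducer(target: ActorRef) extends Actor with Producer {
  def endpointUri = "jetty:http://localhost:8877/camel/test"

  override protected def onReceiveAfterProduce: Receive = {
    // forward the result together with the initial sender reference
    case msg => target forward msg
  }
}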


With producer actors that forward message exchange results to other actors (incl. other producer actors) one can build actor-based message processing pipelines that integrate external systems. In combination with consumer actors, this could be extended towards a scalable and distributed enterprise service bus (ESB) based on Akka actors ... but this is a topic for another blog post.

Correlation identifiers


The Producer trait also supports correlation identifiers. This allows clients to correlate request messages with asynchronous response messages. A correlation identifier is a message header that can be set by clients. The following example uses the correlation identifier (or message exchange identifier) 123.
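
For example (a sketch; Message.MessageExchangeId is the header name constant as far as I recall):

import akka.camel.Message

// attach a correlation identifier to the request
httpProducer ! Message("test", Map(Message.MessageExchangeId -> "123"))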


An asynchronous response (Message or Failure) from httpProducer will contain that correlation identifier as well.

Fault-tolerance


A failed message exchange by default does not cause a producer actor to throw an exception. However, concrete producer actors may decide to throw an exception inside onReceiveAfterProduce, for example, or there can be a system-level Camel problem that causes a runtime exception. An application that wants to handle these exceptions should supervise its producer actors.

The following example shows how to implement a producer actor that replies to the initial sender with a Failure message when it is restarted or stopped by a supervisor.
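
A sketch (the Failure payloads and the postStop details are assumptions):

import akka.actor.{Actor, ActorKilledException}
import akka.camel.{Failure, Producer}

class SupervisedHttpProducer extends Actor with Producer {
  def endpointUri = "jetty:http://localhost:8877/camel/test"

  // invoked when a supervisor restarts this producer
  override protected def preRestartProducer(reason: Throwable) {
    self.reply_?(Failure(reason))
  }

  // invoked when this producer is stopped
  override def postStop() {
    self.reply_?(Failure(new ActorKilledException("producer stopped")))
  }
}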


To handle restart callbacks, producer actors must override the preRestartProducer method instead of preRestart. The preRestart method is implemented by the Producer trait and does additional resource de-allocation work after calling preRestartProducer. More information about replies within preRestart and postStop can be found in my previous blog post about consumer actors.

Thursday, February 17, 2011

Akka Consumer Actors: New Features and Best Practices

In this blog post I want to give some guidance on how to implement consumer actors with the akka-camel module. Besides basic usage scenarios, I will also explain how to make consumer actors fault-tolerant, redeliver messages on failure, deal with bounded mailboxes, etc. The code examples shown below require the current Akka 1.1-SNAPSHOT to compile and run.

In the following, I assume that you already have a basic familiarity with Akka, Apache Camel and the akka-camel integration module. If you are new to it, you may want to read the Akka and Camel chapter (free pdf) of the Camel in Action book or the Introduction section of the official akka-camel documentation first.

Basic usage

Akka consumer actors can receive messages from any Camel endpoint, provided that the corresponding Camel component is on the classpath. This allows clients to interact with Akka actors over a large number of protocols and APIs.

Camel endpoints either initiate in-only (one-way) message exchanges with consumer actors or in-out (two-way) message exchanges. Replies from consumer actors are mandatory for in-out message exchanges but optional for in-only message exchanges. For replying to a Camel endpoint, the consumer actor uses the very same interface as for replying to any other sender (e.g. to another actor). Examples are self.reply or self.reply_?.

Let's start by defining a simple consumer actor that accepts messages via tcp on port 6200 and replies to the tcp client (tcp support is given by Camel's mina component).
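
A sketch (package names assume Akka 1.1):

import akka.actor.Actor
import akka.camel.{Consumer, Message}

class TcpConsumer extends Actor with Consumer {
  def endpointUri = "mina:tcp://localhost:6200?textline=true"

  def receive = {
    case msg: Message => self.reply("received %s" format msg.bodyAs[String])
  }
}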


For consumer actors to work, applications need to start a CamelService which is managed by the CamelServiceManager.
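
For example:

import akka.camel.CamelServiceManager._

startCamelService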


When starting a consumer actor, the endpoint defined for that actor will be activated asynchronously by the CamelService. If your application wants to wait for consumer endpoints to be activated, it can do so with the awaitEndpointActivation method (which is especially useful for testing).
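
For example, using the TcpConsumer defined above (a sketch; mandatoryService and mandatoryTemplate are the accessor names as far as I recall):

import akka.actor.Actor.actorOf
import akka.camel.CamelContextManager
import akka.camel.CamelServiceManager._

startCamelService

// wait for the endpoint to be activated before sending a test message
mandatoryService.awaitEndpointActivation(1) {
  actorOf[TcpConsumer].start
}

val template = CamelContextManager.mandatoryTemplate
println(template.requestBody("mina:tcp://localhost:6200?textline=true", "hello"))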


For sending a test message to the consumer actor, the above code uses a Camel ProducerTemplate which can be obtained from the CamelContextManager.

If Camel endpoints, such as the file endpoint, create in-only message exchanges then consumer actors need not reply, by default. The message exchange is completed once the input message has been added to the consumer actor's mailbox.
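
For example (a sketch):

import akka.actor.Actor
import akka.camel.{Consumer, Message}

class FileConsumer extends Actor with Consumer {
  def endpointUri = "file:data/input?delete=true"

  def receive = {
    case msg: Message =>
      // in-only exchange: no reply needed, receipt is auto-acknowledged
      println("file content: %s" format msg.bodyAs[String])
  }
}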


When placing a file into the data/input directory, the Camel file endpoint will pick up that file and send its content as message to the consumer actor. Once the message is in the actor's mailbox, the file endpoint will delete the corresponding file (see delete=true in the endpoint URI).

If you want to let the consumer actor decide when the file should be deleted, then you'll need to turn auto-acknowledgements off as shown in the following example (autoack = false). In this case the consumer actor must reply with a special Ack message when message processing is done. This asynchronous reply finally causes the file endpoint to delete the consumed file.
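
A sketch:

import akka.actor.Actor
import akka.camel.{Ack, Consumer, Message}

class AckingFileConsumer extends Actor with Consumer {
  def endpointUri = "file:data/input?delete=true"
  override def autoack = false // acknowledge explicitly

  def receive = {
    case msg: Message =>
      // process the file content, then acknowledge so that
      // the file endpoint deletes the consumed file
      self.reply(Ack)
  }
}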


Turning auto-acknowledgements on and off is only relevant for in-only message exchanges because, for in-out message exchanges, consumer actors need to reply in any case with an (application-specific) message. Consumer actors may also reply with a Failure message to indicate a processing failure. Failure replies can be made for both in-only and in-out message exchanges. A Failure reply can be made inside the receive method, but there are better ways, as shown in the next sections.

Fault-tolerance and message redelivery

Message processing inside receive may throw exceptions which usually requires a failure response to Camel (i.e. to the consumer endpoint). This is done with a Failure message that contains the failure reason (an instance of Throwable). Instead of catching and handling the exception inside receive, consumer actors should be part of supervisor hierarchies and send failure responses from within restart callback methods. Here's an example of a fault-tolerant file consumer.
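
A sketch (supervisor wiring omitted; the Failure constructor details are assumptions):

import akka.actor.Actor
import akka.camel.{Ack, Consumer, Failure, Message}

class SupervisedFileConsumer extends Actor with Consumer {
  def endpointUri = "file:data/input?delete=true"
  override def autoack = false

  def receive = {
    case msg: Message =>
      process(msg.bodyAs[String]) // may throw an exception
      self.reply(Ack)             // acknowledge on success
  }

  // PERMANENT lifecycle: called when a supervisor restarts the consumer
  override def preRestart(reason: Throwable) {
    self.reply_?(Failure(reason)) // trigger redelivery
  }

  // TEMPORARY lifecycle: called when a supervisor shuts the consumer down
  override def postStop() {
    self.reply_?(Ack) // complete the exchange, the file gets deleted
  }

  def process(content: String): Unit = { /* ... */ }
}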


The above file consumer overrides the preRestart and postStop callback methods to send reply messages to Camel. A reply within preRestart and postStop is possible after receive has thrown an exception (new feature since Akka 1.1). When receive returns normally it is expected that any necessary reply has already been done within receive.
  • If the lifecycle of the SupervisedFileConsumer is configured to be PERMANENT, a supervisor will restart the consumer upon failure with a call to preRestart. Within preRestart a Failure reply is sent, which causes the file endpoint to redeliver the content of the consumed file so that the consumer actor can try to process it again. Should the processing succeed in a second attempt, an Ack is sent within receive. A reply within preRestart must be a safe reply via self.reply_? because an unsafe self.reply will throw an exception when the consumer is restarted without having failed. This can be the case in the context of all-for-one restart strategies.
  • If the lifecycle of the SupervisedFileConsumer is configured to be TEMPORARY, a supervisor will shut down the consumer upon failure with a call to postStop. Within postStop an Ack is sent which causes the file endpoint to delete the file. One can, of course, choose to reply with a Failure here, so that files that couldn't be processed successfully are kept in the input directory. A reply within postStop must be a safe reply via self.reply_? because an unsafe self.reply will throw an exception when the consumer has been stopped by the application (and not by a supervisor) after successful execution of receive.

Another frequently discussed consumer actor example is a fault-tolerant JMS consumer. A JMS consumer actor should acknowledge a message receipt upon successful message processing and trigger a message redelivery on failure. This is exactly the same pattern as described for the SupervisedFileConsumer above. You just need to change the file endpoint URI to a jms or activemq endpoint URI and you're done (of course, you additionally need to configure the JMS connection with a redelivery policy and, optionally, use transacted queues. An explanation of how to do this would however exceed the scope of this blog post).

Simplifications and tradeoffs with blocking=true

In all the examples so far, the internally created Camel routes use the ! (bang) operator to send the input message to the consumer actor. This means that the Camel route does not block a thread waiting for a response; an asynchronous reply from the consumer actor causes the route to resume processing. That's also the reason why any exception thrown by receive isn't reported back to Camel directly but must be communicated explicitly with a Failure response.

If you want exceptions thrown by receive to be reported back to Camel directly (i.e. without sending Failure responses), you'll need to set blocking = true for the consumer actor. This causes the Camel route to send the input message with the !! (bangbang) operator and wait for a response. However, this blocks a thread until the consumer sends a response or throws an exception within receive. The advantage of this approach is strongly simplified error handling, but scalability will likely decrease.

Here's an example of a consumer actor that uses the simplified approach to error handling. Any exception thrown by receive will still cause the file endpoint to redeliver the message but a thread will be blocked by Camel during the execution of receive.
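
A sketch:

import akka.actor.Actor
import akka.camel.{Consumer, Message}

class BlockingFileConsumer extends Actor with Consumer {
  def endpointUri = "file:data/input?delete=true"
  override def blocking = true // the route waits for the result of receive

  def receive = {
    case msg: Message =>
      // an exception thrown here is reported to Camel directly
      process(msg.bodyAs[String])
  }

  def process(content: String): Unit = { /* ... */ }
}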


No supervisor is needed here. It depends on the non-functional requirements of your application whether to go for this simple but blocking approach or to use a more scalable, non-blocking approach in combination with a supervisor.

Bounded mailboxes and error handling with custom Camel routes

For consumer actors that require a significant amount of time for processing a single message, it can make sense to install a bounded mailbox. A bounded mailbox throws an exception if its capacity is reached and the Camel route tries to add additional messages to the mailbox. Here's an example of a file consumer actor that uses a bounded mailbox with a capacity of 5. Processing is artificially delayed by 1 second using a Thread.sleep.


When, let's say, 10 files are put into the data/input directory, they will be picked up by the file endpoint and added to the actor's mailbox. The capacity of the mailbox will be reached soon because the file endpoint can send messages much faster than the consumer actor can process them. Exceptions thrown by the mailbox are directly reported to the Camel route, which causes the file endpoint to redeliver messages until they can be added to the mailbox. The same applies to JMS and other endpoints that support redelivery.

When dealing with endpoints that do not support redelivery, one needs to customize the Camel route to the consumer actor with a special error handler that does the redelivery. This is shown for a consumer actor that consumes messages from a direct endpoint.
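
A sketch; the error-handler DSL calls are standard Camel, the exact signature of the onRouteDefinition hook is an assumption:

import akka.actor.Actor
import akka.camel.{Consumer, Message}
import org.apache.camel.model.RouteDefinition

class DirectConsumer extends Actor with Consumer {
  def endpointUri = "direct:example"

  // customize the route with an error handler that attempts
  // at most 3 redeliveries with a delay of 1000 ms
  override def onRouteDefinition = (rd: RouteDefinition) =>
    rd.onException(classOf[Exception])
      .maximumRedeliveries(3)
      .redeliveryDelay(1000)
      .end

  def receive = {
    case msg: Message => { /* ... */ }
  }
}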


Here we use onRouteDefinition to define how the Camel route should be customized during its creation. In this example, an error handler is defined that attempts a maximum of 3 redeliveries with a delay of 1000 ms. For details refer to the intercepting route construction section in the akka-camel documentation. When using a producer template to send messages to this endpoint, some of them will be added to the mailbox on the first attempt, others after a second attempt triggered by the error handler.


The examples presented in this post cover many of the consumer-actor-related questions and topics that have been asked and discussed on the akka-user mailing list. In another post I plan to cover best practices for implementing Akka producer actors.

Monday, August 30, 2010

Akka's grown-up hump

It's been quite some time since I last wrote about Akka's Camel integration. An initial version of the akka-camel module was released with Akka 0.7. Meanwhile Akka 0.10 is out with an akka-camel module containing numerous new features and enhancements. Some of them will be briefly described in this blog post.

Java API

The akka-camel module now offers a Java API in addition to the Scala API. Both APIs are fully covered in the online documentation.

Support for typed consumer actors

Methods of typed actors can be published at Camel endpoints by annotating them with @consume. The annotation value defines the endpoint URI. Here's an example of a typed consumer actor in Java.
import org.apache.camel.Body;
import org.apache.camel.Header;
import se.scalablesolutions.akka.actor.TypedActor;
import se.scalablesolutions.akka.camel.consume;

public interface MyTypedConsumer {
    @consume("file:data/foo")
    public void foo(String body);

    @consume("jetty:http://localhost:8877/camel/bar")
    public String bar(@Body String body, @Header("Content-Type") String contentType);
}

public class MyTypedConsumerImpl extends TypedActor implements MyTypedConsumer {
    public void foo(String body) {
        System.out.println(String.format("Received message: %s", body));
    }

    public String bar(String body, String contentType) {
        return String.format("body=%s Content-Type header=%s", body, contentType);
    }
}
When creating an instance of the typed actor with
import se.scalablesolutions.akka.actor.TypedActor;

// Create typed actor and activate endpoints
MyTypedConsumer consumer = TypedActor.newInstance(
    MyTypedConsumer.class, MyTypedConsumerImpl.class);
the actor's foo method can be invoked by dropping a file into the data/foo directory. The file content is passed via the body parameter. The bar method can be invoked by POSTing a message to http://localhost:8877/camel/bar. The HTTP message body is passed via the body parameter and the Content-Type header via the contentType parameter. For parameter binding, Camel's parameter binding annotations are used.

Endpoint lifecycle

Consumer actor endpoints are activated when the actor is started and de-activated when the actor is stopped. This is the case for both typed and untyped actors. An actor can either be stopped explicitly by an application or by a supervisor.

Fault tolerance

When a consumer actor isn't stopped but restarted by a supervisor, the actor's endpoint remains active. Communication partners can continue to exchange messages with the endpoint during the restart phase but message processing will occur only after restart completes. For in-out message exchanges, response times may therefore increase. Communication partners that initiate in-only message exchanges with the endpoint won't see any difference.

Producer actors

Actors that want to produce messages to endpoints either need to mix in the Producer trait (Scala API) or extend the abstract UntypedProducerActor class (Java API). Although the Producer trait was already available in the initial version of akka-camel, many enhancements have been made since then. Most of them are internal enhancements such as performance improvements and support for asynchronous routing. Also, extensions to the API have been made to support
  • pre-processing of messages before they are sent to an endpoint and
  • post-processing of messages after they have been received as response from an endpoint.
For example, instead of replying to the original sender (the default behavior), a producer actor can do custom post-processing, e.g. by forwarding the response to another actor (together with the initial sender reference):
import se.scalablesolutions.akka.actor.{Actor, ActorRef}
import se.scalablesolutions.akka.camel.Producer

class MyProducer(target: ActorRef) extends Actor with Producer {
  def endpointUri = "http://example.org/some/external/service"

  override protected def receiveAfterProduce = {
    // do not reply to the initial sender but
    // forward the result to a target actor
    case msg => target forward msg
  }
}
Forwarding results to other actors makes it easier to create actor-based message processing pipelines that make use of external services. Examples are given in the akka-camel documentation.

Typed actors need to use Camel's ProducerTemplate directly to produce messages to Camel endpoints. A managed instance of a ProducerTemplate can be obtained via CamelContextManager.template.

Asynchronous routing

Since Akka 0.10, Camel's asynchronous routing engine is fully supported: in-out and in-only message exchanges between endpoints and actors are designed to be asynchronous. This is the case for both consumer and producer actors.

This is especially important for actors that participate in long-running request-reply interactions with external services. Threads are no longer blocked for the full duration of an in-out message exchange and are available for doing other work. There's also an asynchronous routing example described in the online documentation.

Routes to actors

Typed and untyped actors can also be accessed from Camel routes directly, using Akka's TypedActorComponent and ActorComponent, respectively. These are Camel components supporting typed-actor and actor endpoint URIs in route definitions. For example,
from("seda:test").to("actor:uuid:12345678");
routes a message from a SEDA queue to an untyped actor with uuid 12345678. The actor endpoint looks up the actor in Akka's actor registry.

The TypedActorComponent is an extension of Camel's bean component where method invocations follow the semantics of the actor model. Here is an example route from a direct endpoint to the foo method of a typed actor.
from("direct:test").to("typed-actor:sample?method=foo");
The typed actor is registered under the name sample in the Camel registry. For more details on how to add typed actors to the Camel registry, follow this link.

CamelService

A running CamelService is a prerequisite for endpoint activation when starting consumer actors. When starting Akka in Kernel mode or using the Akka Initializer in a web application, a CamelService is started automatically. In all other cases a CamelService must be started by the application itself. This can be done either programmatically with
import se.scalablesolutions.akka.camel.CamelServiceManager._

startCamelService
or declaratively in a Spring XML configuration file.
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:akka="http://www.akkasource.org/schema/akka"
       xmlns:camel="http://camel.apache.org/schema/spring"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
         http://www.akkasource.org/schema/akka
         http://scalablesolutions.se/akka/akka-0.10.xsd
         http://camel.apache.org/schema/spring
         http://camel.apache.org/schema/spring/camel-spring.xsd">

  <!-- A custom CamelContext (SpringCamelContext) -->
  <camel:camelContext id="camelContext">
    <!-- ... -->
  </camel:camelContext>

  <!-- Create a CamelService using a custom CamelContext -->
  <akka:camel-service>
    <akka:camel-context ref="camelContext" />
  </akka:camel-service>

</beans>
Usage of the akka:camel-service element requires the akka-spring jar on the classpath. This example also shows how the Spring-managed CamelService is configured with a custom CamelContext.

A running CamelService can be stopped either by closing the application context or by calling the CamelServiceManager.stopCamelService method.

Outlook

The next Akka release will be Akka 1.0 (targeted for late fall) and akka-camel development will mainly focus on API stabilization. If you'd like to have some additional features in the next Akka release, want to give feedback or ask some questions, please contact the Akka community at the akka-user mailing list.

Saturday, April 3, 2010

Akka features for application integration

Akka is a platform for event-driven, scalable and fault-tolerant architectures on the JVM. It is mainly written in Scala. One of its core features is support for the actor model that provides a higher level of abstraction for writing concurrent and distributed systems.

Since version 0.7, Akka offers a new feature that lets actors send and receive messages over a great variety of protocols and APIs. In addition to the native Scala actor API, actors can now exchange messages with other systems over a large number of protocols and APIs such as HTTP, SOAP, TCP, FTP, SMTP or JMS, to mention a few. At the moment, approximately 80 protocols and APIs are supported. This new feature is provided by Akka's Camel module.

At the core of this new feature is Apache Camel, IMHO the most powerful and feature-rich integration framework currently available for the JVM. For an introduction to Apache Camel you may want to read this article. Camel comes with a large number of components that provide bindings to different protocols and APIs. Usage of Camel's integration components in Akka is essentially a one-liner. Here's an example.

import se.scalablesolutions.akka.actor.Actor
import se.scalablesolutions.akka.actor.Actor._
import se.scalablesolutions.akka.camel.{Message, Consumer}

class MyActor extends Actor with Consumer {
  def endpointUri =
    "mina:tcp://localhost:6200?textline=true"

  def receive = {
    case msg: Message => { /* ... */ }
    case _            => { /* ... */ }
  }
}
// start and expose actor via tcp
val myActor = actorOf[MyActor].start

The above example exposes an actor over a tcp endpoint on port 6200 via Apache Camel's Mina component. The endpointUri is an abstract method declared in the Consumer trait. After starting the actor, tcp clients can immediately send messages to and receive responses from that actor. If the message exchange should go over HTTP (via Camel's Jetty component), only the actor's endpointUri must be redefined.

class MyActor extends Actor with Consumer {
  def endpointUri =
    "jetty:http://localhost:8877/example"

  def receive = {
    case msg: Message => { /* ... */ }
    case _            => { /* ... */ }
  }
}

Actors can also trigger message exchanges with external systems, i.e. produce messages to Camel endpoints.

import se.scalablesolutions.akka.actor.Actor
import se.scalablesolutions.akka.camel.Producer

class MyActor extends Actor with Producer {
  def endpointUri = "jms:queue:example"
  protected def receive = produce
}

In the above example, any message sent to this actor will be added (produced) to the example JMS queue. Producer actors may choose from the same set of Camel components as Consumer actors do.

The number of Camel components is constantly increasing. Akka's Camel module can support these in a plug-and-play manner. Just add them to your application's classpath, define a component-specific endpoint URI and use it to exchange messages over the component-specific protocols or APIs. This is possible because Camel components bind protocol-specific message formats to a Camel-specific normalized message format. The normalized message format hides protocol-specific details from Akka and makes it therefore very easy to support a large number of protocols through a uniform Camel component interface. Akka's Camel module further converts mutable Camel messages into immutable representations which are used by Consumer and Producer actors for pattern matching, transformation, serialization or storage, for example.

Highly-scalable eHealth integration solutions with Akka

One goal I had in mind when implementing the Akka Camel module was to have a basis for building highly-scalable eHealth integration solutions. For eHealth information systems it is becoming increasingly important to support standard interfaces as specified by IHE. Financial support from governments strongly depends on eHealth standard compliance.

Building blocks for implementing standard-compliant eHealth applications are provided by the Open eHealth Integration Platform (IPF). IPF is a mature open source integration platform, based on Apache Camel. It provides, among others, extensive support for IHE actor interfaces. These interfaces are based on Apache Camel's component technology. Therefore, it's a one-liner to expose an Akka actor through an IHE compliant interface. The following example implements the server-side interface of the IHE XDS Registry Stored Query (XDS-ITI18) transaction.

class RSQService extends Actor with Consumer {
  def endpointUri = "xds-iti18:RSQService"

  def receive = {
    case msg: Message => { /* ... */ }
    case _            => { /* ... */ }
  }
}

In IHE, a message exchange between two participants is called a transaction. Here, the IPF xds-iti18 component is used to implement the server-side interface of the XDS ITI18 transaction. This allows any XDS-ITI18-compatible client to communicate with the actor over an IHE standard protocol using ebXML/SOAP/HTTP (as defined in the XDS specification). The implementor of the receive method, however, doesn't need to care about all the low-level protocol details (which are scary if you take a closer look). The body of the received message is a high-level object graph containing XDS-ITI18-specific transaction data.

A high-level programming model for implementing eHealth standards is only one of several reasons why I consider Akka as a powerful technical basis for building scalable and fault-tolerant eHealth information systems and integration solutions. Akka's support for NoSQL datastores could further be used for implementing scalabale persistence layers in eHealth applications. Over the next weeks I'm going to explore this field in more detail and will keep you updated with further blog posts on that topic.