tag:blogger.com,1999:blog-27083494539046915132024-03-19T04:37:26.679+01:00Martin Krasser's BlogMartin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.comBlogger21125tag:blogger.com,1999:blog-2708349453904691513.post-37974388850827103722015-01-06T16:05:00.000+01:002015-01-06T16:07:41.352+01:00Starting A New Blog on Github Pages<h2>
</h2>
I'll discontinue <a href="http://krasserm.blogspot.com/">this blog</a> and will post new articles to <a href="http://krasserm.github.io/">http://krasserm.github.io</a> from now on. You can subscribe to changes <a href="http://krasserm.github.io/atom.xml">here</a>.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com0tag:blogger.com,1999:blog-2708349453904691513.post-37364656848541502852013-12-16T14:07:00.000+01:002014-04-04T08:22:16.171+02:00Introduction to Akka Persistence<a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html">Akka Persistence</a> is a new module in Akka 2.3. At the time of writing this post, it is available as a milestone release (2.3-M2). Akka Persistence adds actor state persistence and at-least-once message delivery semantics to Akka. It is inspired by and the successor of the <a href="https://github.com/eligosource/eventsourced">eventsourced</a> project. They share many high-level concepts but differ completely at the API and implementation level.<br />
<br />
To persist an actor's state, only changes to that state are written to a journal, not the current state itself. These changes are appended to the journal as immutable facts; nothing is ever mutated, which allows for very high transaction rates and efficient replication. Actor state can be recovered by replaying the stored changes and projecting them again. This allows state recovery not only after an actor has been restarted by a supervisor but also after JVM or node crashes, for example. State changes are defined in terms of the messages an actor receives (or generates).<br />
<br />
Persistence of messages also forms the basis for supporting at-least-once message delivery semantics. This requires retries to counter transport losses, which means keeping state at the sending end and having an acknowledgement mechanism at the receiving end (see <a href="http://doc.akka.io/docs/akka/2.3.1/general/message-delivery-guarantees.html">Message Delivery Guarantees</a> in Akka). Akka Persistence supports that for point-to-point communications. Reliable point-to-point communications are an important part of highly scalable applications (see also Pat Helland's position paper <a href="http://www-db.cs.wisc.edu/cidr/cidr2007/papers/cidr07p15.pdf">Life beyond Distributed Transactions</a>).<br />
<br />
The following gives a high-level overview of the current features in Akka Persistence. Links to more detailed documentation are included.<br />
<br />
<h3>
Processors</h3>
<a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html#processors">Processors</a> are persistent actors. They internally communicate with a journal to persist messages they receive or generate. They may also request message replay from a journal to recover internal state in failure cases. Processors may either persist messages <br />
<ul>
<li>before an actor's behavior is executed (command sourcing) or</li>
<li>while an actor's behavior is executed (<a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html#event-sourcing">event sourcing</a>)</li>
</ul>
Command sourcing is comparable to using a write-ahead log. Messages are persisted before it is known whether they can be successfully processed or not. In failure cases, they can be (logically) removed from the journal so that they won't be replayed during the next recovery. During recovery, command sourced processors show the same behavior as during normal operation. They can achieve high throughput rates by dynamically increasing the size of <a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html#batch-writes">write batches</a> under high load.<br />
<br />
Event sourced processors do not persist commands. Instead, they allow application code to derive events from a command and persist these events atomically. Once persisted, the events are applied to the current state. During recovery, events are replayed and only the state-changing behavior of an event sourced processor is executed again. Other side effects that were executed during normal operation are not performed again.<br />
<br />
Processors automatically <a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html#recovery">recover</a> themselves. Akka Persistence guarantees that new messages sent to a processor never interleave with replayed messages. New messages are internally buffered until recovery completes; hence, an application may send messages to a processor immediately after creating it.<br />
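To make the replay idea concrete, here is a journal-and-replay sketch in plain Scala. It uses none of the Akka Persistence APIs; the `Added` event, the `project` function and the counter state are all invented for illustration:

```scala
// Conceptual sketch only: an append-only journal and replay-based recovery.
sealed trait Event
case class Added(n: Int) extends Event

var journal: Vector[Event] = Vector.empty   // changes are appended, never mutated

// State is derived by projecting events, never persisted directly.
def project(state: Int, event: Event): Int = event match {
  case Added(n) => state + n
}

def persist(event: Event): Unit = journal = journal :+ event

// Recovery rebuilds current state purely from the logged changes.
def recover(): Int = journal.foldLeft(0)(project)

persist(Added(2))
persist(Added(3))
val state = recover()   // state reconstructed from the journal alone
```

The key property is that `recover()` is deterministic: replaying the same journal always yields the same state, no matter how often the actor is restarted.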
<br />
<h3>
Snapshots </h3>
The recovery time of a processor increases with the number of messages that have been written by that processor. To reduce recovery time, applications may take <a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html#snapshots">snapshots</a> of processor state which can be used as starting points for message replay. Usage of snapshots is optional and only needed for optimization.<br />
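The effect of a snapshot on replay can be sketched like this (plain Scala; the sequence numbers, increments and `Snapshot` type are made up for illustration):

```scala
// Sketch of snapshot-based recovery; all names and values are invented.
case class Snapshot(state: Int, upToSeqNr: Long)

// (sequence number, logged increment) pairs written by a processor
val journal: Vector[(Long, Int)] = Vector((1L, 2), (2L, 3), (3L, 5), (4L, 7))

// Full recovery replays every logged message.
def recoverFull: Int = journal.map(_._2).sum

// With a snapshot, replay starts after the snapshot's sequence number, so
// recovery time no longer grows with the full message history.
val snapshot = Snapshot(state = 5, upToSeqNr = 2L)

def recoverFromSnapshot: Int =
  snapshot.state + journal.collect { case (seq, n) if seq > snapshot.upToSeqNr => n }.sum
```

Both recovery paths must yield the same state; the snapshot only shortens the replayed suffix of the journal, which is why it is purely an optimization.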
<br />
<h3>
Channels</h3>
<a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html#channels">Channels</a> are actors that provide at-least-once message delivery semantics between a sending processor and a receiver that acknowledges the receipt of messages at the application level. They also ensure that successfully acknowledged messages are not delivered again to receivers during processor recovery (i.e. replay of messages). Applications that want reliable message delivery without an application-defined sending processor should use persistent channels. A <a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html#persistent-channels">persistent channel</a> is like a normal channel that additionally persists messages before sending them to a receiver.<br />
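The ack-retry protocol behind at-least-once delivery can be sketched in a few lines of plain Scala (no Akka involved; `Msg`, `deliverOnce` and the loss flag are invented for illustration):

```scala
// Conceptual ack-retry loop: the sender keeps unacknowledged messages and
// re-sends them until acknowledged, so a receiver may see duplicates but
// never silently loses a message. All names here are invented.
case class Msg(id: Long, payload: String)

var unacknowledged: Map[Long, Msg] = Map.empty   // state kept at the sending end
var received: Vector[Msg] = Vector.empty         // what the receiver has seen

def send(msg: Msg): Unit = unacknowledged += (msg.id -> msg)

// One delivery round; if the ack is lost, the message stays unacknowledged.
def deliverOnce(ackLost: Boolean): Unit =
  unacknowledged.values.foreach { msg =>
    received = received :+ msg                 // delivery (possibly a duplicate)
    if (!ackLost) unacknowledged -= msg.id     // ack reaches the sender
  }

send(Msg(1, "order"))
deliverOnce(ackLost = true)   // ack lost: message remains unacknowledged
deliverOnce(ackLost = false)  // redelivery succeeds: duplicate at the receiver
```

The duplicate in `received` is exactly why at-least-once semantics require idempotent or deduplicating receivers, as discussed later in this document.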
<br />
<h3>
Journals</h3>
Journals (and snapshot stores) are <a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html#storage-plugins">pluggable</a> in Akka Persistence. The default journal plugin is backed by LevelDB which writes messages to the local filesystem. A replicated journal is planned but not yet part of the distribution. Replicated journals allow stateful actors to be migrated in a cluster, for example. For testing purposes, a remotely shared LevelDB journal can be used instead of a replicated journal to experiment with stateful actor migration. Application code doesn't need to change when switching to a replicated journal later.<br />
<br />
<b>Updates</b>:<br />
<ul>
<li><a href="http://doc.akka.io/docs/akka/2.3.1/scala/persistence.html#views">Views</a> have been added after 2.3-M2. </li>
<li>A replicated journal backed by Apache <a href="http://cassandra.apache.org/">Cassandra</a> is available <a href="https://github.com/krasserm/akka-persistence-cassandra">here</a>. </li>
<li>A complete list of community-contributed plugins is maintained <a href="http://akka.io/community/">here</a>. </li>
</ul>
Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com10tag:blogger.com,1999:blog-2708349453904691513.post-32150726158354194812013-03-20T13:21:00.000+01:002013-03-21T15:44:30.315+01:00Eventsourced for Akka - A high-level technical overview<a href="https://github.com/eligosource/eventsourced" target="_blank">Eventsourced</a> is an <a href="http://akka.io/" target="_blank">Akka</a> extension that adds scalable actor state persistence and at-least-once message delivery guarantees to Akka. With Eventsourced, stateful actors<br />
<ul>
<li>persist received messages by appending them to a log (journal)</li>
<li>project received messages to derive current state</li>
<li>usually hold current state in memory (memory image)</li>
<li>recover current (or past) state by replaying received messages (during normal application start or after crashes)</li>
<li>never persist current state directly (except optional state snapshots for recovery time optimization)</li>
</ul>
In other words, Eventsourced implements a write-ahead log (WAL) that is used to keep track of messages an actor receives and to recover its state by replaying logged messages. Appending messages to a log instead of persisting actor state directly allows for actor state persistence at very high transaction rates and supports efficient replication. In contrast to other WAL-based systems, Eventsourced usually keeps the whole message history in the log and makes usage of state snapshots optional.<br />
<br />
Logged messages represent intended changes to an actor's state. Logging changes instead of updating current state is one of the core concepts of <a href="http://martinfowler.com/eaaDev/EventSourcing.html" target="_blank">event sourcing</a>. Eventsourced can be used to implement event sourcing concepts but it is not limited to that. More details about Eventsourced and its relation to event sourcing can be found <a href="https://github.com/eligosource/eventsourced/wiki/FAQ#wiki-event-sourcing-comparison" target="_blank">here</a>.<br />
<br />
Eventsourced can also be used to make message exchanges between actors reliable so that they can be resumed after crashes, for example. For that purpose, channels with at-least-once message delivery guarantees are provided. Channels also prevent output messages sent by persistent actors from being redundantly delivered during replays, which is relevant for message exchanges between these actors and other services.<br />
<br />
<h3>
Building blocks</h3>
The core building blocks provided by Eventsourced are processors, channels and journals. These are managed by an Akka extension, the <span style="font-family: "Courier New",Courier,monospace;"><a href="http://eligosource.github.com/eventsourced/api/snapshot/#org.eligosource.eventsourced.core.EventsourcingExtension" target="_blank">EventsourcingExtension</a></span>.<br />
<br />
<h4>
Processor</h4>
A processor is a stateful actor that logs (persists) messages it receives. A stateful actor is turned into a processor by modifying it with the stackable <span style="font-family: "Courier New",Courier,monospace;"><a href="http://eligosource.github.com/eventsourced/api/snapshot/#org.eligosource.eventsourced.core.Eventsourced" target="_blank">Eventsourced</a></span> trait during construction. A processor can be used like any other actor.<br />
<br />
Messages wrapped inside <span style="font-family: "Courier New",Courier,monospace;"><a href="http://eligosource.github.com/eventsourced/api/snapshot/#org.eligosource.eventsourced.core.Message" target="_blank">Message</a></span> are logged by a processor, unwrapped messages are not logged. Logging behavior is implemented by the <span style="font-family: "Courier New",Courier,monospace;">Eventsourced</span> trait, so a processor's <span style="font-family: "Courier New",Courier,monospace;">receive</span> method doesn't need to care about that. Acknowledging a successful write to a sender can be done by sending a reply. A processor can also hot-swap its behavior while still keeping its logging functionality.<br />
<br />
Processors are registered at an <span style="font-family: "Courier New",Courier,monospace;">EventsourcingExtension</span>. This extension provides methods to recover processor state by replaying logged messages. Processors can be registered and recovered at any time during an application run.<br />
<br />
Eventsourced doesn't impose any restrictions on how processors maintain state. A processor can use vars, mutable data structures or STM references, for example.<br />
<br />
<h4>
Channel</h4>
<a href="https://github.com/eligosource/eventsourced#channels" target="_blank">Channels</a> are used by processors for sending messages to other actors (channel destinations) and receiving replies from them. Channels<br />
<ul>
<li>require their destinations to confirm the receipt of messages for providing at-least-once delivery guarantees (explicit ack-retry protocol). Receipt confirmations are written to a log.</li>
<li>prevent redundant delivery of messages to destinations during processor recovery (replay of messages). Replayed messages with matching receipt confirmations are dropped by the corresponding channels.</li>
</ul>
A channel itself is an actor that decorates a destination with the aforementioned functionality. Processors usually create channels as child actors for decorating destination actor references.<br />
<br />
A processor may also send messages directly to another actor without using a channel. In this case that actor will redundantly receive messages during processor recovery.<br />
<br />
Eventsourced provides three different channel types (more are planned).<br />
<ul>
<li>Default channel</li>
<ul>
<li>Does not store received messages.</li>
<li>Re-delivers unconfirmed messages only during recovery of the sending processor.</li>
<li>Order of messages as sent by a processor is not preserved in failure cases.</li>
</ul>
</ul>
<ul>
<li>Reliable channel</li>
<ul>
<li>Stores received messages.</li>
<li>Re-delivers unconfirmed messages based on a configurable re-delivery policy.</li>
<li>Order of messages as sent by a processor is preserved, even in failure cases. </li>
<li>Often used to deal with unreliable remote destinations.</li>
</ul>
</ul>
<ul>
<li>Reliable request-reply channel</li>
<ul>
<li>Same as reliable channel but additionally guarantees at-least-once delivery of replies.</li>
<li>Order of replies not guaranteed to correspond to the order of sent request messages.</li>
</ul>
</ul>
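The confirmation-based filtering that all channel types perform during replay can be sketched as follows (plain Scala; `Logged` and the confirmation set are invented, the real channels work against journal entries):

```scala
// Sketch of how a channel suppresses redundant deliveries during replay.
case class Logged(seqNr: Long, event: String)

// Messages a processor has logged, and the receipt confirmations that were
// written to the log when the destination acknowledged them.
val log: Vector[Logged] = Vector(Logged(1, "a"), Logged(2, "b"), Logged(3, "c"))
val confirmed: Set[Long] = Set(1L, 2L)

// During recovery, all messages are replayed to the processor, but the channel
// only forwards those whose receipt was never confirmed by the destination.
def redeliverOnReplay: Vector[Logged] =
  log.filterNot(m => confirmed.contains(m.seqNr))
```

Only the unconfirmed message survives the filter, so destinations are not spammed with the full message history on every recovery.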
Eventsourced channels are not meant to replace any existing messaging system but can be used, for example, to reliably connect processors to such a system, if needed. More generally, they are useful to integrate processors with other services, as described in <a href="http://krasserm.blogspot.de/2013/01/event-sourcing-and-external-service.html" target="_blank">another blog post</a>.<br />
<br />
<h4>
Journal</h4>
A journal is an actor that is used by processors and channels to log messages and receipt confirmations. The quality of service (availability, scalability, ...) provided by a journal depends on the underlying storage technology. The <a href="https://github.com/eligosource/eventsourced#journals" target="_blank">Journals</a> section in the user guide gives an overview of existing journal implementations and their development status.<br />
<br />
<h4>
References</h4>
<ul>
<li><a href="https://github.com/eligosource/eventsourced#readme" target="_blank">Eventsourced user guide</a></li>
<li><a href="http://eligosource.github.com/eventsourced/api/snapshot/#org.eligosource.eventsourced.core.package" target="_blank">Eventsourced API docs</a></li>
</ul>
Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com2tag:blogger.com,1999:blog-2708349453904691513.post-6273744532216549112013-01-31T18:22:00.000+01:002013-04-25T09:32:09.488+02:00Event sourcing and external service integrationA frequently asked question when building event sourced applications is how to interact with external services. This topic is covered to some extent by Martin Fowler's <a href="http://martinfowler.com/eaaDev/EventSourcing.html">Event Sourcing</a> article in the sections <a href="http://martinfowler.com/eaaDev/EventSourcing.html#ExternalQueries">External Queries</a> and <a href="http://martinfowler.com/eaaDev/EventSourcing.html#ExternalUpdates">External Updates</a>. In this blog post I'll show how to approach external service integration with the <a href="https://github.com/eligosource/eventsourced">Eventsourced</a> library for <a href="http://akka.io/">Akka</a>. If you are new to this library, an overview is given in the user guide sections <a href="https://github.com/eligosource/eventsourced#overview">Overview</a> and <a href="https://github.com/eligosource/eventsourced#first-steps">First steps</a>.
<br />
The example application presented here was inspired by Fowler's <a href="http://martinfowler.com/articles/lmax.html">LMAX article</a> where he describes how event sourcing differs from an alternative transaction processing approach:
<br />
<br />
<blockquote>
<i>Imagine you are making an order for jelly beans by credit card. A simple retailing system would take your order information, use a credit card validation service to check your credit card number, and then confirm your order - all within a single operation. The thread processing your order would block while waiting for the credit card to be checked, but that block wouldn't be very long for the user, and the server can always run another thread on the processor while it's waiting.
</i><br />
<i>In the LMAX architecture, you would split this operation into two. The first operation would capture the order information and finish by outputting an event (credit card validation requested) to the credit card company. The Business Logic Processor would then carry on processing events for other customers until it received a credit-card-validated event in its input event stream. On processing that event it would carry out the confirmation tasks for that order.</i><br />
</blockquote>
Although Fowler mentions the LMAX architecture, we don't use the <a href="http://lmax-exchange.github.com/disruptor/">Disruptor</a> here for implementation. Its role is taken by an Akka <a href="http://doc.akka.io/docs/akka/current/scala/dispatchers.html">dispatcher</a> in the following example. Nevertheless, the described architecture and message flow remain the same:
<br />
<img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhAAkD7x4f2R43Ko1Jn97gCLaBjCtwDC_PEC5tCcOxjMKcjd4ZHac-iB30rpMZaQVgpkIa20aLEoBXjx0kHtW175RZgqWfGn-vw_rJMcpZVb1rulpoTT7dc3KKP1yzHhzDQLVYSiwhVAMM/s320/architecture-1.0.png" />
<br />
The two components in the high-level architecture are:
<br />
<br />
<ul>
<li><code>OrderProcessor</code>. An event sourced actor that maintains received orders and their validation state in memory. The <code>OrderProcessor</code> writes any received event message to an event log (journal) so that its in-memory state can be recovered by replaying these events, e.g. after a crash or during normal application start. This actor corresponds to the <i>Business Logic Processor</i> in Fowler's example.</li>
<li><code>CreditCardValidator</code>. A plain remote, stateless actor that validates credit card information of submitted orders on receiving <code>CreditCardValidationRequested</code> events. Depending on the validation outcome it replies with <code>CreditCardValidated</code> or <code>CreditCardValidationFailed</code> event messages to the <code>OrderProcessor</code>.</li>
</ul>
The example application must meet the following requirements and conditions:
<br />
<br />
<ul>
<li>The <code>OrderProcessor</code> and the <code>CreditCardValidator</code> must communicate remotely so that they can be deployed separately. The <code>CreditCardValidator</code> is an external service from the <code>OrderProcessor</code>'s perspective.</li>
<li>The example application must be able to recover from JVM crashes and remote communication errors. <code>OrderProcessor</code> state must be recoverable from logged event messages and running credit card validations must be automatically resumed after crashes. To overcome temporary network problems and remote actor downtimes, remote communication must be re-tried. Long-lasting errors must be escalated.</li>
<li>Event message replay during recovery must not redundantly emit validation requests to the <code>CreditCardValidator</code> and validation responses must be recorded in the event log (to solve the <a href="http://martinfowler.com/eaaDev/EventSourcing.html#ExternalQueries">external queries</a> problem). This will recover processor state in a deterministic way, making repeated recoveries independent from otherwise potentially different validation responses over time for the same validation request (a credit card may expire, for example).</li>
<li>Message processing must be idempotent. This requirement is a consequence of the at-least-once message delivery guarantee supported by Eventsourced.</li>
</ul>
The <a href="https://github.com/eligosource/eventsourced/blob/blog-01b/es-examples/src/main/scala/org/eligosource/eventsourced/example/OrderExampleReliable.scala">full example application code</a> that meets these requirements is part of the Eventsourced project and can be executed with <a href="http://www.scala-sbt.org/">sbt</a>.
<br />
The <code>CreditCardValidator</code> can be started with:
<br />
<code>> project eventsourced-examples<br />
> run-main org.eligosource.eventsourced.example.CreditCardValidator</code>
<br />
The application that runs the <code>OrderProcessor</code> and sends <code>OrderSubmitted</code> events can be started with
<br />
<code>> project eventsourced-examples<br />
> run-main org.eligosource.eventsourced.example.OrderProcessor</code>
<br />
The example application defines an oversimplified domain class <code>Order</code>
<br />
<script src="https://gist.github.com/4682655.js?file=es-ext-01.scala"></script>
together with the domain events
<br />
<script src="https://gist.github.com/4682655.js?file=es-ext-02.scala"></script>
Whenever the <code>OrderProcessor</code> receives a domain event it appends that event to the event log (journal) before processing it. To add event logging behavior to an actor it must be modified with the stackable <a href="http://eligosource.github.com/eventsourced/api/snapshot/#org.eligosource.eventsourced.core.Eventsourced"><code>Eventsourced</code></a> trait during construction.
<br />
<script src="https://gist.github.com/4682655.js?file=es-ext-03.scala"></script>
<code>Eventsourced</code> actors only write messages of type <a href="http://eligosource.github.com/eventsourced/api/snapshot/#org.eligosource.eventsourced.core.Message"><code>Message</code></a> to the event log (together with the contained event). Messages of other types can be received by an <code>Eventsourced</code> actor as well but aren't logged. The <a href="http://eligosource.github.com/eventsourced/api/snapshot/#org.eligosource.eventsourced.core.Receiver"><code>Receiver</code></a> trait allows the <code>OrderProcessor</code>'s <code>receive</code> method to pattern-match against received events directly (instead of <code>Message</code>). It is not required for implementing an event sourced actor but can help to make implementations simpler.
<br />
On receiving an <code>OrderSubmitted</code> event, the <code>OrderProcessor</code> extracts the contained <code>order</code> object from the event, updates the order with an order id and stores it in the <code>orders</code> map. The <code>orders</code> map represents the current state of the <code>OrderProcessor</code> (which can be recovered by replaying logged event messages).
<br />
<script src="https://gist.github.com/4682655.js?file=es-ext-04.scala"></script>
After updating the <code>orders</code> map, the <code>OrderProcessor</code> replies to the sender of an <code>OrderSubmitted</code> event with an <code>OrderStored</code> event. This event is a business-level acknowledgement that the received <code>OrderSubmitted</code> event has been successfully written to the event log. Finally, the <code>OrderProcessor</code> emits a <code>CreditCardValidationRequested</code> event message to the <code>CreditCardValidator</code> via reliable request-reply channel (see below). The emitted message is derived from the current event message which can be accessed via the <code>message</code> method of the <code>Receiver</code> trait. Alternatively, the <code>OrderProcessor</code> could also have used an <a href="https://github.com/eligosource/eventsourced#emitter">emitter</a> for sending the event (see also <a href="https://github.com/eligosource/eventsourced#usage-hints">channel usage hints</a>).
<br />
A reliable request-reply channel is a pattern built on top of a reliable channel with the following properties: It
<br />
<br />
<ul>
<li>persists request <code>Message</code>s for failure recovery and preserves message order.</li>
<li>extracts requests from received <code>Message</code>s before sending them to a destination.</li>
<li>wraps replies from a destination into a <code>Message</code> before sending them back to the request sender.</li>
<li>sends a special <code>DestinationNotResponding</code> reply to the request sender if the destination doesn't reply within a configurable timeout.</li>
<li>sends a special <code>DestinationFailure</code> reply to the request sender if the destination responds with <code>Status.Failure</code>.</li>
<li>guarantees at-least-once delivery of requests to the destination.</li>
<li>guarantees at-least-once delivery of replies to the request sender.</li>
<li>requires a positive receipt confirmation for a reply to mark a request-reply interaction as successfully completed.</li>
<li>redelivers requests, and subsequently replies, on missing or negative receipt confirmations.</li>
<li>sends a <code>DeliveryStopped</code> event to the actor system's event stream if the maximum number of delivery attempts has been reached (according to the channel's redelivery policy).</li>
</ul>
A reliable request-reply channel offers all the properties we need to reliably communicate with the remote <code>CreditCardValidator</code>. The channel is created as child actor of the <code>OrderProcessor</code> when the <code>OrderProcessor</code> receives a <code>SetCreditCardValidator</code> message.
<br />
<script src="https://gist.github.com/4682655.js?file=es-ext-05.scala"></script>
The channel is created with the <code>channelOf</code> method of the actor system's <a href="http://eligosource.github.com/eventsourced/api/snapshot/#org.eligosource.eventsourced.core.EventsourcingExtension"><code>EventsourcingExtension</code></a> and configured with a <a href="http://eligosource.github.com/eventsourced/api/snapshot/#org.eligosource.eventsourced.patterns.reliable.requestreply.ReliableRequestReplyChannelProps"><code>ReliableRequestReplyChannelProps</code></a> object. Configuration data are the channel destination (<code>validator</code>), a redelivery policy and a destination reply timeout. When sending validation requests via the created <code>validationRequestChannel</code>, the <code>OrderProcessor</code> must be prepared for receiving <code>CreditCardValidated</code>, <code>CreditCardValidationFailed</code>, <code>DestinationNotResponding</code> or <code>DestinationFailure</code> replies. These replies are sent to the <code>OrderProcessor</code> inside a <code>Message</code> and are therefore written to the event log. Consequently, <code>OrderProcessor</code> recoveries in the future will replay past reply messages instead of obtaining them again from the validator which ensures deterministic state recovery. Furthermore, the <code>validationRequestChannel</code> will ignore validation requests it receives during a replay, except those whose corresponding replies have not been positively confirmed yet. The following snippet shows how replies are processed by the <code>OrderProcessor</code>.
<br />
<script src="https://gist.github.com/4682655.js?file=es-ext-06.scala"></script>
<br />
<ul>
<li>A <code>CreditCardValidated</code> reply updates the <code>creditCardValidation</code> status of the corresponding order to <code>Success</code> and stores the updated order in the <code>orders</code> map. Further actions, such as notifying others that an order has been accepted, are omitted here but are part of the full example code. Then, the receipt of the reply is positively confirmed (<code>confirm(true)</code>) so that the channel doesn't redeliver the corresponding validation request.</li>
<li>A <code>CreditCardValidationFailed</code> reply updates the <code>creditCardValidation</code> status of the corresponding order to <code>Failure</code> and stores the updated order in the <code>orders</code> map. Again, further actions are omitted here and the receipt of the reply is positively confirmed.</li>
</ul>
Because the <code>validationRequestChannel</code> delivers messages at-least-once, we need to detect duplicates in order to make reply processing idempotent. Here, we simply require that the order object to be updated must have a <code>Pending</code> <code>creditCardValidation</code> status before changing state (and notifying others). If the order's status is not <code>Pending</code>, the order has already been updated by a previous reply and the current reply is a duplicate. In this case, the methods <code>onValidationSuccess</code> and <code>onValidationFailure</code> don't have any effect (<code>orders.get(orderId).filter(_.creditCardValidation == Pending)</code> is <code>None</code>). The receipt of the duplicate is still positively confirmed. More general guidelines on how to detect duplicates are outlined <a href="https://github.com/eligosource/eventsourced#idempotency">here</a>.
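A minimal sketch of this duplicate-detection scheme in plain Scala (simplified stand-ins for the example's <code>Order</code> type and validation states, not the application's actual code):

```scala
// Illustrates status-based duplicate detection; names are simplified stand-ins.
sealed trait Validation
case object Pending extends Validation
case object Validated extends Validation

case class Order(id: Int, creditCardValidation: Validation)

var orders: Map[Int, Order] = Map(1 -> Order(1, Pending))

// Only orders still in Pending state are updated; a redelivered reply finds
// the order already validated and the update becomes a no-op, which makes
// reply processing idempotent.
def onValidationSuccess(orderId: Int): Unit =
  orders.get(orderId).filter(_.creditCardValidation == Pending).foreach { order =>
    orders += (orderId -> order.copy(creditCardValidation = Validated))
  }

onValidationSuccess(1) // first delivery: status changes
onValidationSuccess(1) // duplicate delivery: no effect
```

Note that the duplicate is still confirmed positively to the channel; detection only suppresses the state change and any follow-up notifications.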
<br />
<br />
<ul>
<li>A <code>DestinationNotResponding</code> reply is always confirmed negatively (<code>confirm(false)</code>) so that the channel will redeliver the validation request to the <code>CreditCardValidator</code>. This may help to overcome temporary network problems, for example, but doesn't handle the case where the maximum number of redeliveries has been reached (see below).</li>
<li>A <code>DestinationFailure</code> reply will be negatively confirmed by default unless it has been delivered more than twice. This may help to overcome temporary <code>CreditCardValidator</code> failures, i.e. cases where a <code>Status.Failure</code> is returned by the validator.</li>
</ul>
Should the <code>CreditCardValidator</code> be unavailable for a longer time and the <code>validationRequestChannel</code> reach the maximum number of redeliveries, it will stop message delivery and publish a <code>DeliveryStopped</code> event to the actor system's event stream. The channel still continues to accept new event messages and persists them so that the <code>OrderProcessor</code> can continue receiving <code>OrderSubmitted</code> events, but the interaction with the <code>CreditCardValidator</code> is suspended. It is now up to the application to re-activate message delivery.
<br />
Subscribing to <code>DeliveryStopped</code> events allows an application to escalate a long-lasting network problem or <code>CreditCardValidator</code> outage by alerting a system administrator or switching to another credit card validation service, for example. In our case, a simple re-activation of the <code>validationRequestChannel</code> is scheduled.
<br />
<script src="https://gist.github.com/4682655.js?file=es-ext-07.scala"></script>
The <code>OrderProcessor</code> subscribes itself to the actor system's event stream. On receiving a <code>DeliveryStopped</code> event it schedules a re-activation of the <code>validationRequestChannel</code> by sending it a <code>Deliver</code> message.
<br />
This finally meets all the requirements stated above but there's a lot more to say about external service integration. Examples are external updates or usage of channels that don't preserve message order for optimizing concurrency and throughput. I also didn't cover processor-specific, non-blocking recovery as implemented by the example application. This is enough food for another blog post.
<br />
Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com0tag:blogger.com,1999:blog-2708349453904691513.post-39074510574997948872012-02-23T10:04:00.006+01:002017-08-11T09:11:37.143+02:00Using JAXB for XML and JSON APIs in Scala Web ApplicationsIn the past, I have mentioned the implementation of RESTful XML and JSON APIs in Scala web applications using JAXB several times, without going into details. In this blog post I want to shed more light on this approach together with some links to more advanced examples. A JAXB-based approach to web APIs can be useful if you want to support both XML and JSON representations but only want to maintain a single binding definition for both representations. I should also say that I'm still investigating this approach, so see the following as rather experimental.<br />
<br />
First of all, <a href="http://jcp.org/en/jsr/detail?id=222">JAXB</a> is a Java standard for binding XML schemas to Java classes. It allows you to convert Java objects to XML documents, and vice versa, based on JAXB annotations on the corresponding Java classes. JAXB doesn't cover JSON but there are libraries that allow you to convert Java objects to JSON (and vice versa) based on the very same JAXB annotations that are used for defining XML bindings. One such library is <a href="http://jersey.java.net/nonav/documentation/latest/json.html">Jersey's JSON</a> library (<a href="http://repo1.maven.org/maven2/com/sun/jersey/jersey-json/">jersey-json</a>) which internally uses the <a href="http://jackson.codehaus.org/">Jackson</a> library.<br />
<br />
As you'll see in the following, JAXB can also be used together with immutable domain or resource models based on Scala case classes. There's no need to pollute them with getters and setters or Java collections from the <code>java.util</code> package. Necessary conversions from Scala collections or other type constructors (such as <code>Option</code>, for example) to Java types supported by JAXB can be defined externally to the annotated model (and reused). At the end of this blog post, I'll also show some examples of how to develop JAXB-based XML and JSON APIs with the <a href="https://github.com/playframework/Play20">Play Framework</a>.<br />
<br />
<h2>
Model</h2>
In the following, I'll use a model that consists of the single <code>case class Person(fullname: String, username: Option[String], age: Int)</code>. To define <code>Person</code> as a root element in the XML schema, the following JAXB annotations should be added.<br />
<br />
<script src="https://gist.github.com/1891525.js?file=jaxb-01.scala"></script><br />
<code>@XmlRootElement</code> makes <code>Person</code> a root element in the XML schema and <code>@XmlAccessorType(XmlAccessType.FIELD)</code> instructs JAXB to access fields directly instead of using getters and setters. But before we can use the <code>Person</code> class with JAXB a few additional things need to be done.<br />
<ul>
<li>A no-arg constructor or a static no-arg factory method must be provided, otherwise, JAXB doesn't know how to create <code>Person</code> instances. In our example we'll use a no-arg constructor.</li>
<br />
<li>A person's <code>fullname</code> should be mandatory in the corresponding XML schema. This can be achieved by placing an <code>@XmlElement(required=true)</code> annotation on the field corresponding to the <code>fullname</code> parameter.</li>
<br />
<li>A person's <code>username</code> should be an optional <code>String</code> in the corresponding XML schema i.e. the <code>username</code> element of the complex <code>Person</code> type should have an XML attribute <code>minOccurs="0"</code>. Furthermore, it should be avoided that <code>scala.Option</code> appears as complex type in the XML schema. This can be achieved by providing a type adapter from <code>Option[String]</code> to <code>String</code> via the JAXB <code>@XmlJavaTypeAdapter</code> annotation.</li>
</ul>
<br />
We can implement the above requirements by defining and annotating the <code>Person</code> class as follows:<br />
<br />
<script src="https://gist.github.com/1891525.js?file=jaxb-02.scala"></script><br />
Let's dissect the above code a bit:<br />
<ul>
<li>The no-arg constructor on the <code>Person</code> class is only needed by JAXB and should therefore be declared private so that it cannot be accessed elsewhere in the application code (unless you're using reflection like JAXB does).</li>
<br />
<li>Placing JAXB annotations on fields of a case class is a bit tricky. When writing a case class, usually only case class parameters are defined, not fields directly. The Scala compiler then generates the corresponding fields in the resulting .class file. Annotations that are placed on case class parameters are not copied to their corresponding fields, by default. To instruct the Scala compiler to copy these annotations, the Scala <code>@field</code> annotation must be used in addition. This is done in the custom annotation type definitions <code>xmlElement</code> and <code>xmlTypeAdapter</code>. They can be used in the same way as the underlying annotation types <code>XmlElement</code> and <code>XmlJavaTypeAdapter</code>, respectively. Placing the custom <code>@xmlElement</code> annotation on the <code>fullname</code> parameter will cause the Scala compiler to copy the underlying <code>@XmlElement</code> annotation (a JAXB annotation) to the generated <code>fullname</code> field, where it can finally be processed by JAXB.</li>
<br />
<li>To convert between <code>Option[String]</code> (on Scala side) and <code>String</code> (used by JAXB on XML schema side) we implement a JAXB type adapter (interface <code>XmlAdapter</code>). The above example defines a generic <code>OptionAdapter</code> (that can also be reused elsewhere) and a concrete <code>StringOptionAdapter</code> used for the optional <code>username</code> parameter. Please note that annotating the <code>username</code> parameter with <code>@xmlTypeAdapter(classOf[OptionAdapter[String]])</code> is not sufficient because JAXB will not be able to infer <code>String</code> as the target type (JAXB uses reflection) and will use <code>Object</code> instead (resulting in an XML <code>anyType</code> in the corresponding XML schema). Type adapters can also be used to convert between Scala and Java collection types. Since JAXB can only handle Java collection types you'll need to use type adapters should you want to use Scala collection types in your case classes (and you really should). You can find an example <a href="https://github.com/krasserm/eventsourcing-example/blob/play-blog/modules/service/src/main/scala/dev/example/eventsourcing/domain/Invoice.scala#L53">here</a>.</li>
</ul>
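The conversion logic of such an adapter can also be sketched independently of the JAXB API. The following is a plain-Scala sketch: a real adapter would additionally extend JAXB's <code>XmlAdapter</code> and be referenced from the <code>@xmlTypeAdapter</code> annotation; the names follow the description above but the code is otherwise hypothetical.

```scala
// Sketch of the Option adapter logic (plain Scala, no JAXB dependency).
// A real JAXB adapter extends javax.xml.bind.annotation.adapters.XmlAdapter
// and implements these two conversions as marshal/unmarshal.
class OptionAdapter[A >: Null](nones: A*) {
  // Scala side -> XML side: None is represented as null (element omitted)
  def marshal(v: Option[A]): A = v.getOrElse(null)
  // XML side -> Scala side: null (and other configured "none" values) becomes None
  def unmarshal(v: A): Option[A] = if (nones contains v) None else Some(v)
}

// Treats a missing element (null) and an empty string as "no username"
object StringOptionAdapter extends OptionAdapter[String](null, "")
```

For example, <code>StringOptionAdapter.unmarshal("")</code> yields <code>None</code>, while <code>StringOptionAdapter.unmarshal("jdoe")</code> yields <code>Some("jdoe")</code>.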
<br />
We're now ready to use the <code>Person</code> class to generate an XML schema and to convert <code>Person</code> objects to and from XML or JSON. Please note that the following code examples require JAXB version 2.2.4u2 or higher, otherwise the <code>OptionAdapter</code> won't work properly. The reason is <a href="http://java.net/jira/browse/JAXB-415">JAXB issue 415</a>. Either use JDK 7u4 or higher which already includes this version or install the required JAXB version manually. The following will write an XML schema, generated from the <code>Person</code> class, to stdout:<br />
<br />
<script src="https://gist.github.com/1891525.js?file=jaxb-03.scala"></script><br />
The result is:<br />
<br />
<script src="https://gist.github.com/1891525.js?file=jaxb-04.xsd"></script><br />
Marshalling a <code>Person</code> object to XML can be done with<br />
<br />
<script src="https://gist.github.com/1891525.js?file=jaxb-05.scala"></script><br />
which prints<br />
<br />
<script src="https://gist.github.com/1891525.js?file=jaxb-06.xml"></script><br />
Unmarshalling creates a <code>Person</code> object from XML.<br />
<br />
<script src="https://gist.github.com/1891525.js?file=jaxb-07.scala"></script><br />
We have implemented <code>StringOptionAdapter</code> in such a way that an empty <code><username/></code> element or <code><username></username></code> in <code>personXml1</code> would also yield <code>None</code> on the Scala side. Creating JSON from <code>Person</code> objects can be done with the <code>JSONJAXBContext</code> from Jersey's JSON library.<br />
<br />
<script src="https://gist.github.com/1891525.js?file=jaxb-08.scala"></script><br />
which prints the following to stdout:<br />
<br />
<script src="https://gist.github.com/1891525.js?file=jaxb-09.json"></script><br />
Unmarshalling can be done with the <code>context.createJSONUnmarshaller.unmarshalFromJSON</code> method. The <code>JSONConfiguration</code> object provides a number of configuration options that determine how JSON is rendered and parsed. Refer to the <a href="http://jersey.java.net/nonav/documentation/latest/json.html">official documentation</a> for details.<br />
<br />
<h2>
Play and JAXB</h2>
This section shows some examples how to develop JAXB-based XML and JSON APIs with the Play Framework 2.0. It is based on JAXB-specific body parsers and type class instances defined in trait <a href="https://github.com/krasserm/eventsourcing-example/blob/play-blog/app/support/JaxbSupport.scala#L18"><code>JaxbSupport</code></a> which is part of the <a href="https://github.com/krasserm/eventsourcing-example">event-sourcing example</a> application (Play-specific work is currently done on the <a href="https://github.com/krasserm/eventsourcing-example/tree/play">play</a> branch). You can reuse this trait in other applications as is, there are no dependencies to the rest of the project (<span style="font-weight: bold;">update:</span> except to <a href="https://github.com/krasserm/eventsourcing-example/blob/play-blog/modules/service/src/main/scala/dev/example/eventsourcing/web/package.scala#L20"><code>SysError</code></a>). To enable JAXB-based XML and JSON processing for a Play web application, add <code>JaxbSupport</code> to a controller object as follows:<br />
<br />
<script src="https://gist.github.com/1891558.js?file=play-01.scala"></script><br />
An implicit <code>JSONJAXBContext</code> must be in scope for both XML and JSON processing. For XML processing alone, it is sufficient to have an implicit <code>JAXBContext</code>.<br />
<br />
<h3>
XML and JSON Parsing</h3>
<code>JaxbSupport</code> provides Play-specific body parsers that convert an XML or JSON request body to instances of JAXB-annotated classes. The following action uses a JAXB body parser that expects an XML body and tries to convert it to a <code>Person</code> instance (using a JAXB unmarshaller). <br />
<br />
<script src="https://gist.github.com/1891558.js?file=play-02.scala"></script><br />
If the unmarshalling fails or the request <code>Content-Type</code> is other than <code>text/xml</code> or <code>application/xml</code>, a <code>400</code> status code (bad request) is returned. Converting a JSON body to a <code>Person</code> instance can be done with the <code>jaxb.parse.json</code> body parser.<br />
<br />
<script src="https://gist.github.com/1891558.js?file=play-03.scala"></script><br />
If the body parser should be chosen at runtime depending on the <code>Content-Type</code> header, use the dynamic <code>jaxb.parse</code> body parser. The following action is able to process both XML and JSON bodies and convert them to a <code>Person</code> instance.<br />
<br />
<script src="https://gist.github.com/1891558.js?file=play-04.scala"></script><br />
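The content-type dispatch performed by the dynamic parser can be sketched in plain Scala, independently of Play's body-parser API. The function and type names below are hypothetical; only the accepted MIME types and the <code>400</code> fallback follow the description above.

```scala
// Hypothetical sketch of Content-Type based parser dispatch:
// XML content types select the XML parser, JSON selects the JSON parser,
// anything else is rejected with a 400 (bad request) status code.
sealed trait BodyParserChoice
case object XmlParser extends BodyParserChoice
case object JsonParser extends BodyParserChoice

def chooseParser(contentType: Option[String]): Either[Int, BodyParserChoice] =
  // Strip a charset suffix such as ";charset=utf-8" before matching
  contentType.map(_.takeWhile(_ != ';').trim.toLowerCase) match {
    case Some("text/xml") | Some("application/xml") => Right(XmlParser)
    case Some("application/json")                   => Right(JsonParser)
    case _                                          => Left(400)
  }
```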
<code>JaxbSupport</code> also implements the following related body parsers:<br />
<ul>
<li><code>jaxb.parse.xml(maxLength: Int)</code> and <code>jaxb.parse.json(maxLength: Int)</code></li>
<br />
<li><code>jaxb.parse(maxLength: Int)</code></li>
<br />
<li><code>jaxb.parse.tolerantXml</code> and <code>jaxb.parse.tolerantJson</code></li>
<br />
<li><code>jaxb.parse.tolerantXml(maxLength: Int)</code> and <code>jaxb.parse.tolerantJson(maxLength: Int)</code></li>
</ul>
<br />
<h3>
XML and JSON Rendering</h3>
For rendering XML and JSON, <code>JaxbSupport</code> provides the wrapper classes <code>JaxbXml</code>, <code>JaxbJson</code> and <code>Jaxb</code>. The following action renders an XML response from a <code>Person</code> object (using a JAXB marshaller):<br />
<br />
<script src="https://gist.github.com/1891558.js?file=play-05.scala"></script><br />
whereas<br />
<br />
<script src="https://gist.github.com/1891558.js?file=play-06.scala"></script><br />
renders a JSON response from a <code>Person</code> object. If you want to do content negotiation based on the <code>Accept</code> request header, use the <code>Jaxb</code> wrapper.<br />
<br />
<script src="https://gist.github.com/1891558.js?file=play-07.scala"></script><br />
<code>Jaxb</code> requires an implicit <code>request</code> in scope for obtaining the <code>Accept</code> request header. If the <code>Accept</code> MIME type is <code>application/xml</code> or <code>text/xml</code>, an XML representation is returned; if it is <code>application/json</code>, a JSON representation is returned. Further <code>JaxbSupport</code> application examples can be found <a href="https://github.com/krasserm/eventsourcing-example/blob/play-blog/app/controllers/Application.scala#L16">here</a>.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com7tag:blogger.com,1999:blog-2708349453904691513.post-18341058712256677692011-02-28T07:53:00.009+01:002011-06-28T10:29:15.611+02:00Akka Producer Actors: New Features and Best PracticesIn a <a href="http://krasserm.blogspot.com/2011/02/akka-consumer-actors-new-features-and.html">previous post</a> I wrote about new features and best practices for Akka consumer actors. In this post, I'll cover Akka producer actors. For the following examples to compile and run, you'll need the current Akka 1.1-SNAPSHOT.<br /><br />Again, I assume that you already have a basic familiarity with <a href="http://akka.io/">Akka</a>, <a href="http://camel.apache.org/">Apache Camel</a> and the <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html">akka-camel</a> integration module. If you are new to it, you may want to read the <a href="http://www.manning.com/ibsen/appEsample.pdf">Akka and Camel</a> chapter (free pdf) of the <a href="http://www.manning.com/ibsen/">Camel in Action</a> book or the <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#introduction">Introduction</a> section of the official akka-camel documentation first.<br /><br /><h2>Basic usage</h2><br />Akka producer actors can send messages to any Camel endpoint, provided that the corresponding Camel component is on the classpath. 
This allows Akka actors to interact with external systems or other components over a large number of protocols and APIs.<br /><br />Let's start with a simple producer actor that sends all messages it receives to an external HTTP service and returns the response to the initial sender. For sending messages over HTTP we can use the Camel <a href="http://camel.apache.org/jetty.html">jetty</a> component which features an asynchronous HTTP client. <br /><br /><script src="https://gist.github.com/847006.js?file=HttpProducer.scala"></script><br />Concrete producer actors inherit a default implementation of <code>Actor.receive</code> from the <code>Producer</code> trait. For simple use cases, only an endpoint URI must be defined. Producer actors also require a started <code>CamelContextManager</code> to work properly. A <code>CamelContextManager</code> is started when an application starts a <code>CamelService</code>, e.g. via <code>CamelServiceManager.startCamelService</code>, or when starting the <code>CamelContextManager</code> directly via<br /><br /><script src="https://gist.github.com/847006.js?file=CamelContextManager.scala"></script><br />The latter approach is recommended when an application uses only producer actors but no consumer actors. This slightly reduces the overhead when starting actors. After starting the producer actor, clients can interact with the HTTP service via the actor API.<br /><br /><script src="https://gist.github.com/847006.js?file=HttpProducerClientBangBang.scala"></script><br />Here, <code>!!</code> is used for sending the message and waiting for a response. Alternatively, one can also use <code>!</code> together with an implicit sender reference. <br /><br /><script src="https://gist.github.com/847006.js?file=HttpProducerClientBang.scala"></script><br />In this case the <code>sender</code> will receive an asynchronous reply from the producer actor. 
Before that, the producer actor itself receives an asynchronous reply from the jetty endpoint. The asynchronous jetty endpoint doesn't block a thread waiting for a response and the producer actor doesn't do that either. This is important from a scalability perspective, especially for longer-running request-response cycles.<br /><br />By default, a producer actor initiates an in-out message exchange with its Camel endpoint, i.e. it expects a response from it. If a producer actor wants to initiate an in-only message exchange then it must override the <code>oneway</code> method to return <code>true</code>. The following example shows a producer actor that initiates an in-only message exchange with a <a href="http://camel.apache.org/jms.html">JMS</a> endpoint.<br /><br /><script src="https://gist.github.com/847006.js?file=JmsProducer.scala"></script><br />This actor adds any message it receives to the <code>test</code> JMS queue. By default, producer actors that are configured with <code>oneway = true</code> don't reply. This behavior is defined in the <code>Producer.receiveAfterProduce</code> method which is implemented as follows.<br /><br /><script src="https://gist.github.com/847006.js?file=ReceiveAfterProduceDefault.scala"></script><br />The <code>receiveAfterProduce</code> method has the same signature as <code>Actor.receive</code> and is called with the result of the message exchange with the endpoint (please note that in-only message exchanges with Camel endpoints have a result as well). The result type for successful message exchanges is <a href="https://github.com/jboner/akka-modules/blob/v1.0/akka-camel/src/main/scala/akka/camel/Message.scala#L20"><code>Message</code></a>; for failed message exchanges it is <a href="https://github.com/jboner/akka-modules/blob/v1.0/akka-camel/src/main/scala/akka/camel/Message.scala#L222"><code>Failure</code></a> (see below). <br /><br />Concrete producer actors can override this method. 
For example, the following producer actor overrides <code>onReceiveAfterProduce</code> to reply with a constant <code>"done"</code> message. <br /><br /><script src="https://gist.github.com/847006.js?file=JmsReplyingProducer.scala"></script><br />The result of the message exchange with the JMS endpoint is ignored (<code>case _</code>).<br /><br /><h2>Failures</h2><br />Message exchanges with a Camel endpoint can fail. In this case, <code>onReceiveAfterProduce</code> is called with a <code>Failure</code> message containing the cause of the failure (a <code>Throwable</code>). Let's extend the <code>HttpProducer</code> usage example to deal with failure responses.<br /><br /><script src="https://gist.github.com/847006.js?file=HttpProducerClientFailure.scala"></script><br />In addition to a failure cause, a <code>Failure</code> message can also contain endpoint-specific headers with failure details, such as the HTTP response code. When using <code>!</code> instead of <code>!!</code>, together with an implicit sender reference (as shown in the previous section), that sender will then receive the <code>Failure</code> message asynchronously. The <code>JmsReplyingProducer</code> example can also be extended to return more meaningful responses: a <code>"done"</code> message only on success and an error message on failure.<br /><br /><script src="https://gist.github.com/847006.js?file=JmsReplyingProducerFailure.scala"></script><br />Failed message exchanges never cause the producer actor to throw an exception during execution of <code>receive</code>. Should <code>Producer</code> implementations want to throw an exception on failure (for whatever reason) they can do so in <code>onReceiveAfterProduce</code>.<br /><br /><script src="https://gist.github.com/847006.js?file=SomeProducer.scala"></script><br />In this case, failure handling should be done in combination with a supervisor (see below). <br /><br />Let's look at another example. 
What if we want<br /><br /><script src="https://gist.github.com/847012.js?file=SomeProducerClient.scala"></script><br />to throw an exception on failure (instead of returning a <code>Failure</code> message) but to respond with a normal <code>Message</code> on success? In this case, we need to use <code>self.senderFuture</code> inside <code>onReceiveAfterProduce</code> and complete it with an exception. <br /><br /><script src="https://gist.github.com/847012.js?file=SomeProducerException.scala"></script><br /><br /><h2>Forwarding results</h2><br />Another option to deal with message exchange results inside <code>onReceiveAfterProduce</code> is to forward them to another actor. Forwarding a message also forwards the initial sender reference. This allows the receiving actor to reply to the initial sender. <br /><br /><script src="https://gist.github.com/847012.js?file=JmsForwardingProducer.scala"></script><br />With producer actors that forward message exchange results to other actors (incl. other producer actors) one can build actor-based message processing pipelines that integrate external systems. In combination with consumer actors, this could be extended towards a scalable and distributed enterprise service bus (ESB) based on Akka actors ... but this is a topic for another blog post.<br /><br /><h2>Correlation identifiers</h2><br />The Producer trait also supports correlation identifiers. This allows clients to correlate request messages with asynchronous response messages. A correlation identifier is a message header that can be set by clients. 
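The correlation mechanics themselves are independent of akka-camel. As a plain-Scala sketch, with hypothetical <code>Msg</code> and <code>Correlator</code> names and a <code>correlationId</code> header standing in for the real message exchange id header:

```scala
import scala.collection.mutable

// Hypothetical sketch of request/response correlation: the client stamps each
// request with an identifier and remembers it; the asynchronous response
// carries the same header and can be matched back to the original request.
case class Msg(body: Any, headers: Map[String, Any] = Map.empty)

class Correlator {
  private val pending = mutable.Map.empty[Any, Msg]

  // Stamp and register an outgoing request under the given correlation id.
  def send(id: Any, request: Msg): Msg = {
    val stamped = request.copy(headers = request.headers + ("correlationId" -> id))
    pending += id -> stamped
    stamped
  }

  // Match an incoming response to its original request, if one is pending.
  def onResponse(response: Msg): Option[Msg] =
    response.headers.get("correlationId").flatMap(pending.remove)
}
```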
The following example uses the correlation identifier (or message exchange identifier) <code>123</code>.<br /><br /><script src="https://gist.github.com/847012.js?file=CorrelationIdentifier.scala"></script><br />An asynchronous response (<code>Message</code> or <code>Failure</code>) from <code>httpProducer</code> will contain that correlation identifier as well.<br /><br /><h2>Fault-tolerance</h2><br />A failed message exchange by default does not cause a producer actor to throw an exception. However, concrete producer actors may decide to throw an exception inside <code>onReceiveAfterProduce</code>, for example, or there can be a system-level Camel problem that causes a runtime exception. An application that wants to handle these exceptions should supervise its producer actors. <br /><br />The following example shows how to implement a producer actor that replies to the initial sender with a <code>Failure</code> message when it is restarted or stopped by a supervisor.<br /><br /><script src="https://gist.github.com/847012.js?file=SupervisedProducerNew.scala"></script><br />To handle restart callbacks, producer actors must override the <code>preRestartProducer</code> method instead of <code>preRestart</code>. The <code>preRestart</code> method is implemented by the <code>Producer</code> trait and does additional resource de-allocation work after calling <code>preRestartProducer</code>. 
More information about replies within <code>preRestart</code> and <code>postStop</code> can be found in my <a href="http://krasserm.blogspot.com/2011/02/akka-consumer-actors-new-features-and.html">previous blog post</a> about consumer actors.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com7tag:blogger.com,1999:blog-2708349453904691513.post-31081505403788725672011-02-17T13:58:00.028+01:002011-07-06T20:54:30.121+02:00Akka Consumer Actors: New Features and Best PracticesIn this blog post I want to give some guidance on how to implement consumer actors with the <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html">akka-camel</a> module. Besides basic usage scenarios, I will also explain how to make consumer actors fault-tolerant, redeliver messages on failure, deal with bounded mailboxes, etc. The code examples shown below require the current Akka 1.1-SNAPSHOT to compile and run.<br /><br />In the following, I assume that you already have a basic familiarity with <a href="http://akka.io/">Akka</a>, <a href="http://camel.apache.org/">Apache Camel</a> and the akka-camel integration module. If you are new to it, you may want to read the <a href="http://www.manning.com/ibsen/appEsample.pdf">Akka and Camel</a> chapter (free pdf) of the <a href="http://www.manning.com/ibsen/">Camel in Action</a> book or the <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#introduction">Introduction</a> section of the official akka-camel documentation first.<p><h2>Basic usage</h2>Akka consumer actors can receive messages from any Camel endpoint, provided that the corresponding Camel component is on the classpath. This allows clients to interact with Akka actors over a large number of protocols and APIs.<br /><br />Camel endpoints either initiate in-only (one-way) message exchanges with consumer actors or in-out (two-way) message exchanges. 
Replies from consumer actors are mandatory for in-out message exchanges but optional for in-only message exchanges. For replying to a Camel endpoint, the consumer actor uses the very same interface as for replying to any other sender (e.g. to another actor). Examples are <code>self.reply</code> or <code>self.reply_?</code>.<br /><br />Let's start by defining a simple consumer actor that accepts messages via TCP on port 6200 and replies to the TCP client (TCP support is provided by Camel's <a href="http://camel.apache.org/mina.html">mina</a> component).<br /><br /><script src="https://gist.github.com/835076.js?file=InOutConsumer.scala"></script><br />For consumer actors to work, applications need to start a <code>CamelService</code>, which is managed by the <code>CamelServiceManager</code>.<br /><br /><script src="https://gist.github.com/835076.js?file=StartCamelService.scala"></script><br />When starting a consumer actor, the endpoint defined for that actor will be activated asynchronously by the <code>CamelService</code>. If your application wants to wait for consumer endpoints to be activated, you can do so with the <code>awaitEndpointActivation</code> method (which is especially useful for testing).<br /><br /><script src="https://gist.github.com/835076.js?file=InOutConsumerClient.scala"></script><br />For sending a test message to the consumer actor, the above code uses a Camel <code>ProducerTemplate</code>, which can be obtained from the <code>CamelContextManager</code>.<br /><br />If Camel endpoints, such as the <a href="http://camel.apache.org/file2.html">file</a> endpoint, create in-only message exchanges then consumer actors need not reply, by default. 
The message exchange is completed once the input message has been added to the consumer actor's mailbox.<br /><br /><script src="https://gist.github.com/835076.js?file=InOnlyConsumer.scala"></script><br />When placing a file into the <code>data/input</code> directory, the Camel file endpoint will pick up that file and send its content as a message to the consumer actor. Once the message is in the actor's mailbox, the file endpoint will delete the corresponding file (see <code>delete=true</code> in the endpoint URI).<br /><br />If you want to let the consumer actor decide when the file should be deleted, then you'll need to turn auto-acknowledgements off as shown in the following example (<code>autoack = false</code>). In this case the consumer actor must reply with a special <code>Ack</code> message when message processing is done. This asynchronous reply finally causes the file endpoint to delete the consumed file.<br /><br /><script src="https://gist.github.com/835076.js?file=InOnlyAckConsumer.scala"></script><br />Turning auto-acknowledgements on and off is only relevant for in-only message exchanges because, for in-out message exchanges, consumer actors need to reply in any case with an (application-specific) message. Consumer actors may also reply with a <code>Failure</code> message to indicate a processing failure. <code>Failure</code> replies can be made for both in-only and in-out message exchanges. A <code>Failure</code> reply can be done inside the <code>receive</code> method, but there are better ways, as shown in the next sections.<p><h2>Fault-tolerance and message redelivery</h2>Message processing inside <code>receive</code> may throw exceptions, which usually requires a failure response to Camel (i.e. to the consumer endpoint). This is done with a <code>Failure</code> message that contains the failure reason (an instance of <code>Throwable</code>). 
Instead of catching and handling the exception inside <code>receive</code>, consumer actors should be part of supervisor hierarchies and send failure responses from within restart callback methods. Here's an example of a fault-tolerant file consumer.<br /><br /><script src="https://gist.github.com/847012.js?file=SupervisedFileConsumer.scala"></script><br />The above file consumer overrides the <code>preRestart</code> and <code>postStop</code> callback methods to send reply messages to Camel. A reply within <code>preRestart</code> and <code>postStop</code> is possible after <code>receive</code> has thrown an exception (a new feature since Akka 1.1). When <code>receive</code> returns normally, it is expected that any necessary reply has already been done within <code>receive</code>.<br /><ul><li>If the lifecycle of the <code>SupervisedFileConsumer</code> is configured to be <code>PERMANENT</code>, a supervisor will restart the consumer upon failure with a call to <code>preRestart</code>. Within <code>preRestart</code> a <code>Failure</code> reply is sent, which causes the file endpoint to redeliver the content of the consumed file, and the consumer actor can try to process it again. Should the processing succeed in a second attempt, an <code>Ack</code> is sent within <code>receive</code>. A reply within <code>preRestart</code> must be a safe reply via <code>self.reply_?</code> because an unsafe <code>self.reply</code> will throw an exception when the consumer is restarted without having failed. This can be the case in the context of all-for-one restart strategies.</li><li>If the lifecycle of the <code>SupervisedFileConsumer</code> is configured to be <code>TEMPORARY</code>, a supervisor will shut down the consumer upon failure with a call to <code>postStop</code>. Within <code>postStop</code> an <code>Ack</code> is sent, which causes the file endpoint to delete the file. One can, of course, choose to reply with a <code>Failure</code> here, so that files that couldn't be processed successfully are kept in the input directory. 
A reply within <code>postStop</code> must be a safe reply via <code>self.reply_?</code> because an unsafe <code>self.reply</code> will throw an exception when the consumer has been stopped by the application (and not by a supervisor) after successful execution of <code>receive</code>.</li></ul><br />Another frequently discussed consumer actor example is a fault-tolerant JMS consumer. A JMS consumer actor should acknowledge a message receipt upon successful message processing and trigger a message redelivery on failure. This is exactly the same pattern as described for the <code>SupervisedFileConsumer</code> above. You just need to change the file endpoint URI to a <a href="http://camel.apache.org/jms.html">jms</a> or <a href="http://camel.apache.org/activemq.html">activemq</a> endpoint URI and you're done (of course, you additionally need to configure the JMS connection with a redelivery policy and, optionally, use transacted queues. An explanation of how to do this would, however, exceed the scope of this blog post).<p><h2>Simplifications and tradeoffs with <code>blocking=true</code></h2>In all the examples so far, the internally created Camel routes use the <code>!</code> (bang) operator to send the input message to the consumer actor. This means that the Camel route does not block a thread waiting for a response. Instead, an asynchronous reply from the consumer actor causes the Camel route to resume processing. That's also the reason why any exception thrown by <code>receive</code> isn't reported back to Camel directly but must be reported explicitly with a <code>Failure</code> response.<br /><br />If you want exceptions thrown by <code>receive</code> to be reported back to Camel directly (i.e. without sending <code>Failure</code> responses) then you'll need to set <code>blocking = true</code> for the consumer actor. This causes the Camel route to send the input message with the <code>!!</code> (bangbang) operator and to wait for a response. 
However, this will block a thread until the consumer sends a response or throws an exception within <code>receive</code>. The advantage of this approach is that error handling is greatly simplified, but scalability will likely decrease.<br /><br />Here's an example of a consumer actor that uses the simplified approach to error handling. Any exception thrown by <code>receive</code> will still cause the file endpoint to redeliver the message, but a thread will be blocked by Camel during the execution of <code>receive</code>.<br /><br /><script src="https://gist.github.com/835076.js?file=FileConsumer.scala"></script><br />No supervisor is needed here. It depends on the non-functional requirements of your application whether to go for this simple but blocking approach or to use a more scalable, non-blocking approach in combination with a supervisor.<p><h2>Bounded mailboxes and error handling with custom Camel routes</h2>For consumer actors that require a significant amount of time for processing a single message, it can make sense to install a bounded mailbox. A bounded mailbox throws an exception if its capacity is reached and the Camel route tries to add additional messages to the mailbox. Here's an example of a file consumer actor that uses a bounded mailbox with a capacity of 5. Processing is artificially delayed by 1 second using a <code>Thread.sleep</code>.<br /><br /><script src="https://gist.github.com/835076.js?file=BoundedMailboxFileConsumer.scala"></script><br />When, let's say, 10 files are put into the <code>data/input</code> directory, they will be picked up by the file endpoint and added to the actor's mailbox. The capacity of the mailbox will be reached soon because the file endpoint can send messages much faster than the consumer actor can process them. Exceptions thrown by the mailbox are directly reported to the Camel route, which causes the file endpoint to redeliver messages until they can be added to the mailbox. 
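For readers who can't load the embedded gist, such a bounded-mailbox file consumer can be sketched roughly as follows. This is only a sketch, assuming the Akka 1.1 package layout (`akka.actor`/`akka.camel`) used by the gists; the bounded mailbox itself is configured on the actor's dispatcher, which is elided here because that setup is version-specific.

```scala
import akka.actor.Actor
import akka.camel.{Ack, Consumer, Message}

// Sketch of a slow file consumer (assumes Akka 1.1 APIs as used in this post).
// The bounded mailbox (capacity 5) would be configured on the actor's
// dispatcher -- see the embedded gist for the exact, version-specific setup.
class BoundedMailboxFileConsumer extends Actor with Consumer {
  def endpointUri = "file:data/input"

  def receive = {
    case msg: Message =>
      Thread.sleep(1000) // artificial 1-second processing delay
      self.reply(Ack)    // acknowledge so the file endpoint completes the exchange
  }
}
```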
The same applies to JMS and other endpoints that support redelivery.<br /><br />When dealing with endpoints that do not support redelivery, one needs to customize the Camel route to the consumer actor with a special error handler that does the redelivery. This is shown for a consumer actor that consumes messages from a <a href="http://camel.apache.org/direct.html">direct</a> endpoint.<br /><br /><script src="https://gist.github.com/835076.js?file=BoundedMailboxDirectConsumer.scala"></script><br />Here we use <code>onRouteDefinition</code> to define how the Camel route should be customized during its creation. In this example, an error handler is defined that attempts a maximum of 3 redeliveries with a delay of 1000 ms. For details, refer to the <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#intercepting-route-construction">intercepting route construction</a> section in the akka-camel documentation. When using a producer template to send messages to this endpoint, some of them will be added to the mailbox on the first attempt, some of them after a second attempt triggered by the error handler.<br /><br /><script src="https://gist.github.com/835076.js?file=SendTenMessages.scala"></script><br />The examples presented in this post cover many of the consumer-actor-related questions and topics that have been asked and discussed on the <a href="http://groups.google.com/group/akka-user">akka-user</a> mailing list. In another post I plan to cover best practices for implementing Akka producer actors.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com1tag:blogger.com,1999:blog-2708349453904691513.post-62962942006857516362010-08-30T10:28:00.016+02:002011-06-28T10:48:59.894+02:00Akka's grown-up humpIt's been quite a while since I <a href="http://krasserm.blogspot.com/2010/04/akka-features-for-application.html">last wrote</a> about <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html">Akka's Camel integration</a>. 
An initial version of the akka-camel module was released with Akka 0.7. Meanwhile, Akka 0.10 is out with an akka-camel module containing numerous new features and enhancements. Some of them will be briefly described in this blog post.<br /><br /><span style="font-weight: bold;font-size:100%;" >Java API</span><br /><br />The akka-camel module now offers a Java API in addition to the Scala API. Both APIs are fully covered in the <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html">online documentation</a>.<br /><br /><span style="font-weight: bold;">Support for typed consumer actors</span><br /><br />Methods of <a href="http://akka.io/docs/akka/1.1/java/typed-actors.html">typed actors</a> can be published at Camel endpoints by annotating them with <span style="font-family:courier new;">@consume</span>. The annotation value defines the endpoint URI. Here's an example of a <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#typed-actors">typed consumer</a> actor in Java.<br /><pre class="brush:java">import org.apache.camel.Body;<br />import org.apache.camel.Header;<br />import se.scalablesolutions.akka.actor.TypedActor;<br />import se.scalablesolutions.akka.camel.consume;<br /><br />public interface MyTypedConsumer {<br /> @consume("file:data/foo")<br /> public void foo(String body);<br /><br /> @consume("jetty:http://localhost:8877/camel/bar")<br /> public String bar(@Body String body, @Header("Content-Type") String contentType);<br />}<br /><br />public class MyTypedConsumerImpl extends TypedActor implements MyTypedConsumer {<br /> public void foo(String body) {<br /> System.out.println(String.format("Received message: %s", body));<br /> }<br /><br /> public String bar(String body, String contentType) {<br /> return String.format("body=%s Content-Type header=%s", body, contentType);<br /> }<br />}<br /></pre>When creating an instance of the typed actor with<br /><pre class="brush:java">import se.scalablesolutions.akka.actor.TypedActor;<br
/><br />// Create typed actor and activate endpoints<br />MyTypedConsumer consumer = TypedActor.newInstance(<br /> MyTypedConsumer.class, MyTypedConsumerImpl.class);<br /></pre>then the actor's <span style="font-family:courier new;">foo</span> method can be invoked by dropping a file into the <span style="font-family:courier new;">data/foo</span> directory. The file content is passed via the <span style="font-family:courier new;">body</span> parameter. The <span style="font-family:courier new;">bar</span> method can be invoked by POSTing a message to <span style="font-family:courier new;">http://localhost:8877/camel/bar</span>. The HTTP message body is passed via the <span style="font-family:courier new;">body</span> parameter and the <span style="font-family:courier new;">Content-Type</span> header via the <span style="font-family:courier new;">contentType</span> parameter. For parameter binding, Camel's <a href="http://camel.apache.org/parameter-binding-annotations.html">parameter binding annotations</a> are used.<br /><br /><span style="font-weight: bold;">Endpoint lifecycle</span><br /><br />Consumer actor endpoints are <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#consumer-publishing">activated</a> when the actor is started and <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#consumer-un-publishing">de-activated</a> when the actor is stopped. This is the case for both typed and untyped actors. An actor can be stopped either explicitly by an application or by a supervisor.<br /><br /><span style="font-weight: bold;">Fault tolerance</span><br /><br />When a consumer actor isn't stopped but <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#fault-tolerance">restarted</a> by a supervisor, the actor's endpoint remains active. Communication partners can continue to exchange messages with the endpoint during the restart phase, but message processing will occur only after the restart completes. 
For in-out message exchanges, response times may therefore increase. Communication partners that initiate in-only message exchanges with the endpoint won't see any difference.<br /><br /><span style="font-weight: bold;">Producer actors</span><br /><br />Actors that want to <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#produce-messages">produce</a> messages to endpoints either need to mix in the <span style="font-family:courier new;">Producer</span> trait (Scala API) or extend the abstract <span style="font-family:courier new;">UntypedProducerActor</span> class (Java API). Although the <span style="font-family:courier new;">Producer</span> trait was already available in the initial version of akka-camel, many enhancements have been made since then. Most of them are internal enhancements such as performance improvements and support for asynchronous routing. Also, extensions to the API have been made to support<br /><ul><li>pre-processing of messages before they are sent to an endpoint and </li><li>post-processing of messages after they have been received as response from an endpoint.</li></ul>For example, instead of replying to the original sender (default behavior), a producer actor could do custom post-processing, e.g. by forwarding the response to another actor (together with the initial sender reference):<br /><pre class="brush:scala">import se.scalablesolutions.akka.actor.{Actor, ActorRef}<br />import se.scalablesolutions.akka.camel.Producer<br /><br />class MyProducer(target: ActorRef) extends Actor with Producer {<br /> def endpointUri = "http://example.org/some/external/service"<br /><br /> override protected def receiveAfterProduce = {<br /> // do not reply to initial sender but<br /> // forward result to a target actor<br /> case msg => target forward msg<br /> }<br />}<br /></pre>Forwarding results to other actors makes it easier to create actor-based message processing pipelines that make use of external services. 
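To complete the picture, the target actor at the receiving end of such a pipeline could look like the following hypothetical sketch (MyTarget is not part of the akka-camel examples; it only illustrates that the initial sender reference survives the forward):

```scala
import se.scalablesolutions.akka.actor.Actor

// Hypothetical pipeline stage receiving responses forwarded by MyProducer.
// Because MyProducer used `forward`, the initial sender reference is preserved,
// so this actor can reply to the original sender directly.
class MyTarget extends Actor {
  def receive = {
    case msg =>
      // post-process the external service response, then reply safely to the
      // initial sender (reply_? does not throw if no sender is present)
      self.reply_?(msg)
  }
}
```

The pipeline would then be wired up with something like `actorOf(new MyProducer(actorOf(new MyTarget).start)).start`.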
<a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#examples">Examples</a> are given in the akka-camel documentation.<br /><br />Typed actors need to use Camel's <span style="font-family:courier new;">ProducerTemplate</span> directly to produce messages to Camel endpoints. A managed instance of a <span style="font-family:courier new;">ProducerTemplate</span> can be obtained via <span style="font-family:courier new;">CamelContextManager.template</span>.<br /><br /><span style="font-weight: bold;">Asynchronous routing</span><br /><br />Since Akka 0.10, Camel's <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#examples">asynchronous routing engine</a> is <a href="http://doc.akkasource.org/Camel#async-routing">fully supported</a>: in-out and in-only message exchanges between endpoints and actors are designed to be asynchronous. This is the case for both consumer and producer actors.<br /><br />This is especially important for actors that participate in long-running request-reply interactions with external services. Threads are no longer blocked for the full duration of an in-out message exchange and are available for doing other work. There's also an <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#examples">asynchronous routing example</a> described in the online documentation.<br /><br /><span style="font-weight: bold;">Routes to actors</span><br /><br />Typed and untyped actors can also be <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#custom-camel-routes">accessed from Camel routes</a> directly, using Akka's <span style="font-family:courier new;">TypedActorComponent</span> and <span style="font-family:courier new;">ActorComponent</span>, respectively. These are Camel components supporting <span style="font-family:courier new;">typed-actor</span> and <span style="font-family:courier new;">actor</span> endpoint URIs in route definitions. 
For example,<br /><pre class="brush:java">from("seda:test").to("actor:uuid:12345678");<br /></pre>routes a message from a <a href="http://camel.apache.org/seda.html">SEDA</a> queue to an untyped actor with <span style="font-family:courier new;">uuid</span> 12345678. The <span style="font-family:courier new;">actor</span> endpoint looks up the actor in Akka's actor registry.<br /><br />The <span style="font-family:courier new;">TypedActorComponent</span> is an extension of Camel's <a href="http://camel.apache.org/bean.html">bean</a> component where method invocations follow the semantics of the <a href="http://en.wikipedia.org/wiki/Actor_model">actor model</a>. Here is an example route from a <a href="http://camel.apache.org/direct.html">direct</a> endpoint to the <span style="font-family:courier new;">foo</span> method of a typed actor.<br /><pre class="brush:java">from("direct:test").to("typed-actor:sample?method=foo");<br /></pre>The typed actor is registered under the name <span style="font-style: italic;">sample</span> in the Camel registry. For more details on how to add typed actors to the Camel registry, follow <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#access-to-typed-actors">this link</a>.<br /><br /><span style="font-weight: bold;">CamelService</span><br /><br />A prerequisite for endpoints being activated when starting consumer actors is a <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html#consumers-and-the-camelservice">running <span style="font-family:courier new;">CamelService</span></a>. When starting Akka in Kernel mode or using the Akka <a style="font-family: courier new;" href="http://akka.io/docs/akka/1.1/scala/http.html">Initializer</a> in a web application, a <span style="font-family:courier new;">CamelService</span> is started automatically. In all other cases a <span style="font-family:courier new;">CamelService</span> must be started by the application itself. 
This can be done either programmatically with<br /><pre class="brush:scala">import se.scalablesolutions.akka.camel.CamelServiceManager._<br /><br />startCamelService<br /></pre>or declaratively in a Spring XML configuration file.<br /><pre class="brush:xml"><beans xmlns="http://www.springframework.org/schema/beans"<br /> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br /> xmlns:akka="http://www.akkasource.org/schema/akka"<br /> xmlns:camel="http://camel.apache.org/schema/spring"<br /> xsi:schemaLocation="<br />http://www.springframework.org/schema/beans<br />http://www.springframework.org/schema/beans/spring-beans-2.5.xsd<br />http://www.akkasource.org/schema/akka<br />http://scalablesolutions.se/akka/akka-0.10.xsd<br />http://camel.apache.org/schema/spring<br />http://camel.apache.org/schema/spring/camel-spring.xsd"><br /><br /> <!-- A custom CamelContext (SpringCamelContext) --><br /> <camel:camelContext id="camelContext"><br /> <!-- … --><br /> </camel:camelContext><br /><br /> <!-- Create a CamelService using a custom CamelContext --><br /> <akka:camel-service><br /> <akka:camel-context ref="camelContext" /><br /> </akka:camel-service><br /><br /></beans></pre>Usage of the &lt;camel-service&gt; element requires the <a href="http://akka.io/docs/akka-modules/1.1/modules/spring.html">akka-spring</a> jar on the classpath. This example also shows how the Spring-managed <span style="font-family:courier new;">CamelService</span> is configured with a custom <span style="font-family:courier new;">CamelContext</span>.<br /><br />A running <span style="font-family:courier new;">CamelService</span> can be stopped either by closing the application context or by calling the <span style="font-family:courier new;">CamelServiceManager.stopCamelService</span> method.<br /><br /><span style="font-weight: bold;">Outlook</span><br /><br />The next Akka release will be Akka 1.0 (targeted for late fall) and akka-camel development will mainly focus on API stabilization. 
If you'd like to have some additional features in the next Akka release, want to give feedback or ask some questions, please contact the Akka community at the <a href="http://groups.google.com/group/akka-user">akka-user</a> mailing list.<br /><br />Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com1tag:blogger.com,1999:blog-2708349453904691513.post-80044631904233501632010-04-03T10:59:00.009+02:002011-06-28T10:53:13.880+02:00Akka features for application integration<a href="http://akka.io/">Akka</a> is a platform for event-driven, scalable and fault-tolerant architectures on the JVM. It is mainly written in Scala. One of its core features is support for the <a href="http://en.wikipedia.org/wiki/Actor_model">actor model</a> that provides a higher level of abstraction for writing concurrent and distributed systems.<br /><br />Since version 0.7, Akka offers a new feature that lets actors send and receive messages over a great variety of protocols and APIs. In addition to the native Scala actor API, actors can now exchange messages with other systems over a large number of protocols and APIs such as HTTP, SOAP, TCP, FTP, SMTP or JMS, to mention a few. At the moment, approximately 80 protocols and APIs are supported. This new feature is provided by Akka's <a href="http://akka.io/docs/akka-modules/1.1/modules/camel.html">Camel module</a>.<br /><br />At the core of this new feature is <a href="http://camel.apache.org/">Apache Camel</a>, IMHO the most powerful and feature-rich integration framework currently available for the JVM. For an introduction to Apache Camel you may want to read <a href="http://architects.dzone.com/articles/apache-camel-integration">this article</a>. Camel comes with a large number of <a href="http://camel.apache.org/components.html">components</a> that provide bindings to different protocols and APIs. Usage of Camel's integration components in Akka is essentially a one-liner. 
Here's an example.<br /><pre class="brush:scala"><br />import se.scalablesolutions.akka.actor.Actor<br />import se.scalablesolutions.akka.actor.Actor._<br />import se.scalablesolutions.akka.camel.{Message, Consumer}<br /><br />class MyActor extends Actor with Consumer {<br /> def endpointUri =<br /> "mina:tcp://localhost:6200?textline=true"<br /><br /> def receive = {<br /> case msg: Message => { /* ... */}<br /> case _ => { /* ... */}<br /> }<br />}<br />// start and expose actor via tcp<br />val myActor = actorOf[MyActor].start<br /></pre><br />The above example exposes an actor over a tcp endpoint on port 6200 via Apache Camel's <a href="http://camel.apache.org/mina.html">Mina component</a>. The <span style="font-family:courier new;">endpointUri</span> is an abstract method declared in the <span style="font-family:courier new;">Consumer</span> trait. After starting the actor, tcp clients can immediately send messages to and receive responses from that actor. If the message exchange should go over HTTP (via Camel's <a href="http://camel.apache.org/jetty.html">Jetty component</a>), only the actor's <span style="font-family:courier new;">endpointUri</span> must be redefined.<br /><pre class="brush:scala"><br />class MyActor extends Actor with Consumer {<br /> def endpointUri =<br /> "jetty:http://localhost:8877/example"<br /><br /> def receive = {<br /> case msg: Message => { /* ... */}<br /> case _ => { /* ... */}<br /> }<br />}<br /></pre><br />Actors can also trigger message exchanges with external systems, i.e. 
produce to Camel endpoints.<br /><pre class="brush:scala"><br />import se.scalablesolutions.akka.actor.Actor<br />import se.scalablesolutions.akka.camel.Producer<br /><br />class MyActor extends Actor with Producer {<br /> def endpointUri = "jms:queue:example"<br /> protected def receive = produce<br />}<br /></pre><br />In the above example, any message sent to this actor will be added (produced) to the <span style="font-family:courier new;">example</span> JMS queue. Producer actors may choose from the same set of Camel components as Consumer actors do.<br /><br />The number of Camel components is constantly increasing. Akka's Camel module can support these in a plug-and-play manner. Just add them to your application's classpath, define a component-specific endpoint URI and use it to exchange messages over the component-specific protocols or APIs. This is possible because Camel components bind protocol-specific message formats to a Camel-specific <a href="https://svn.apache.org/repos/asf/camel/tags/camel-2.2.0/camel-core/src/main/java/org/apache/camel/Message.java">normalized message format</a>. The normalized message format hides protocol-specific details from Akka and therefore makes it very easy to support a large number of protocols through a uniform Camel component interface. Akka's Camel module further converts mutable Camel messages into <a href="http://github.com/jboner/akka/blob/v0.8/akka-camel/src/main/scala/Message.scala#L17">immutable representations</a> which are used by <span style="font-family:courier new;">Consumer</span> and <span style="font-family:courier new;">Producer</span> actors for pattern matching, transformation, serialization or storage, for example.<br /><h4>Highly-scalable eHealth integration solutions with Akka</h4>One goal I had in mind when implementing the Akka Camel module was to have a basis for building highly-scalable eHealth integration solutions. 
For eHealth information systems it is becoming increasingly important to support standard interfaces as specified by <a href="http://www.ihe.net/">IHE</a>. Financial support from governments strongly depends on eHealth standard compliance.<br /><br />Building blocks for implementing standard-compliant eHealth applications are provided by the <a href="http://gforge.openehealth.org/gf/project/ipf/">Open eHealth Integration Platform</a> (IPF). IPF is a mature open source integration platform, based on Apache Camel. It provides, among other things, extensive support for IHE actor interfaces. These interfaces are based on Apache Camel's component technology. Therefore, it's a one-liner to expose an Akka actor through an IHE-compliant interface. The following example implements the server-side interface of the <a href="http://www.ihe.net/Technical_Framework/upload/IHE_ITI_TF_Supplement_XDS-2.pdf">IHE XDS</a> <span style="font-style: italic;">Registry Stored Query</span> (XDS-ITI18) transaction.<br /><pre class="brush:scala"><br />class RSQService extends Actor with Consumer {<br /> def endpointUri = "xds-iti18:RSQService"<br /><br /> def receive = {<br /> case msg: Message => { /* ... */}<br /> case _ => { /* ... */}<br /> }<br />}<br /></pre><br />In IHE, a message exchange between two participants is called a transaction. Here, the IPF <a href="http://repo.openehealth.org/confluence/display/ipf2/IHE+support">xds-iti18 component</a> is used to implement the server-side interface of the XDS ITI18 transaction. This allows any XDS-ITI18-compatible client to communicate with the actor over an IHE standard protocol using ebXML/SOAP/HTTP (as defined in the XDS specification). The implementor of the <span style="font-family:courier new;">receive</span> method, however, doesn't need to care about all the low-level protocol details (which are scary if you take a closer look). 
The body of the received message is a high-level object graph containing XDS-ITI18-specific transaction data.<br /><br />A high-level programming model for implementing eHealth standards is only one of several reasons why I consider Akka a powerful technical basis for building scalable and fault-tolerant eHealth information systems and integration solutions. Akka's support for NoSQL datastores could further be used for implementing scalable persistence layers in eHealth applications. Over the next few weeks I'm going to explore this field in more detail and will keep you updated with further blog posts on that topic.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com2tag:blogger.com,1999:blog-2708349453904691513.post-81494182635485580992010-02-07T13:02:00.006+01:002011-04-13T20:27:21.496+02:00Accessing a security-enabled Google App Engine service with Apache CamelIn a <a href="http://krasserm.blogspot.com/2010/01/accessing-security-enabled-google-app.html">previous post</a> I've described the low-level details for programmatic login to a Google App Engine service from a Java client. Things get much easier when using <a href="http://camel.apache.org/">Apache Camel</a>. The recently committed <a href="http://camel.apache.org/glogin.html"><span style="font-family:courier new;">glogin</span></a> component makes it trivial to login to a remotely deployed Google App Engine service as well as to a local <a href="http://code.google.com/appengine/docs/java/tools/devserver.html">development server</a>. In the following example, an application-specific authorization cookie is obtained with the <span style="font-family:courier new;">glogin</span> component. 
It authorizes a client application to access <a href="http://camelcloud.appspot.com/">http://camelcloud.appspot.com</a> on Google App Engine.<br /><pre class="brush:java"><br />import org.apache.camel.Exchange;<br />import org.apache.camel.ProducerTemplate;<br />import static org.apache.camel.component.gae.login.GLoginBinding.*;<br /><br />...<br /><br />ProducerTemplate template = ...<br /><br />Exchange result = template.request(<br />"glogin://camelcloud.appspot.com"<br /> + "?userName=replaceme@gmail.com"<br /> + "&password=replaceme", null);<br />String cookie = result.getOut().getHeader(<br /> GLOGIN_COOKIE, String.class);<br /></pre><br />Please note that the password is only sent to the <a href="http://code.google.com/intl/de-DE/apis/accounts/">Google Accounts API</a> for authentication. It is never sent to Google App Engine or included into any URL. The obtained authorization cookie is valid for 24 hours and needs to be sent with subsequent requests to the GAE application. If inclusion of user credentials in an endpoint URI is not an option, username and password can also be dynamically set (per request) using Camel message headers:<br /><pre class="brush:java"><br />import org.apache.camel.Exchange;<br />import org.apache.camel.Processor;<br />import org.apache.camel.ProducerTemplate;<br />import static org.apache.camel.component.gae.login.GLoginBinding.*;<br /><br />...<br /><br />ProducerTemplate template = ...<br /><br />Exchange result = template.request(<br />"glogin://camelcloud.appspot.com", new Processor() {<br /> public void process(Exchange exchange) {<br /> exchange.getIn().setHeader(<br /> GLOGIN_USER_NAME, "replaceme@gmail.com");<br /> exchange.getIn().setHeader(<br /> GLOGIN_PASSWORD, "replaceme");<br /> }<br />});<br />String cookie = result.getOut().getHeader(<br /> GLOGIN_COOKIE, String.class);<br /></pre><br />To login to a local development server, the <span style="font-family:courier new;">devMode</span> parameter in the endpoint URI must be set to <span 
style="font-family:courier new;">true</span>.<br /><pre class="brush:java"><br />import org.apache.camel.Exchange;<br />import org.apache.camel.ProducerTemplate;<br />import static org.apache.camel.component.gae.login.GLoginBinding.*;<br /><br />...<br /><br />ProducerTemplate template = ...<br /><br />Exchange result = template.request(<br />"glogin://localhost:8888"<br /> + "?userName=test@example.org"<br /> + "&devMode=true", null);<br />String cookie = result.getOut().getHeader(<br /> GLOGIN_COOKIE, String.class);<br /></pre>The glogin component is part of the <a href="http://camel.apache.org/gae.html">Camel Components for Google App Engine</a>.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com4tag:blogger.com,1999:blog-2708349453904691513.post-7443256963750851232010-02-07T11:48:00.007+01:002010-02-07T15:59:45.561+01:00Add OAuth to your web application with Apache Camel<a href="http://oauth.net/">OAuth</a> is an open protocol to allow secure API authorization from desktop and web applications. Google, for example, already <a href="http://code.google.com/intl/de-DE/apis/accounts/docs/OAuth.html">supports OAuth</a> for authorizing 3rd-party applications to access Google services on behalf of a user.<br /><br />Recently, I added the <a style="font-family: courier new;" href="http://camel.apache.org/gauth.html">gauth</a> component to Apache Camel. You can use it to implement OAuth consumer functionality for any web application with only a few lines of code. <span style="font-family:courier new;">gauth</span> endpoints take care of exchanging authorization and access tokens between a web application and an OAuth service provider. 
At the moment, the <span style="font-family:courier new;">gauth</span> component can be used to interact with Google's OAuth services; later versions will support other OAuth providers as well.<br /><br />From a user's perspective, an example OAuth scenario might look as follows:<br /><ul><li>The user logs into a web application that uses the <a href="http://code.google.com/intl/de-DE/apis/calendar/">Google Calendar API</a>, for example.</li><li>To authorize access, the user is redirected to a Google Accounts authorization page where access for the requesting web application can be granted or denied.</li><li>After granting access the user is redirected back to the web application and the web application can now access the user's calendar data.<br /></li><li>The user can revoke access at any time within Google Accounts.<br /></li></ul>To implement that scenario with Apache Camel, two routes are needed. The first route obtains an unauthorized request token from Google and then redirects the user to the Google Accounts authorization page:<br /><pre class="brush:java"><br />String encodedCallback = URLEncoder.encode(<br /> "https://example.org/handler", "UTF-8");<br />String encodedScope = URLEncoder.encode(<br /> "http://www.google.com/calendar/feeds/", "UTF-8");<br /><br />from("jetty:http://0.0.0.0:8080/authorize")<br />.to("gauth://authorize"<br /> + "?callback=" + encodedCallback<br /> + "&scope=" + encodedScope);<br /></pre><br />In this example, the authorization request is triggered by the user by sending a GET request to <span style="font-family: courier new;">http://example.org/authorize</span> (e.g. by clicking a link in the browser). The <span style="font-family: courier new;">gauth://authorize</span> endpoint then obtains an unauthorized request token from Google. The <span style="font-family: courier new;">scope</span> parameter in the endpoint URI defines which Google service the web application wants to access. 
After having obtained the token, the endpoint generates a redirect response (302) which redirects the user to the Google Accounts authorization page. After granting access, the user is redirected back to the web application (<span style="font-family: courier new;">callback</span> parameter). The callback now contains an authorized request token that must finally be upgraded to an access token. Handling the callback and upgrading to an access token is done in the second route.<br /><pre class="brush:java"><br />from("jetty:https://example.org/handler")<br />.to("gauth://upgrade")<br />.process(new StoreTokenProcessor());<br /></pre><br />The <a style="font-family: courier new;" href="http://camel.apache.org/jetty.html">jetty</a> endpoint receives the callback from Google. The <span style="font-family: courier new;">gauth://upgrade</span> endpoint takes the authorized request token from the callback and upgrades it to an access token. The route finally stores the long-lived access token for the current user. The next time the user logs into the web application, the access token is already available and the application can continue to access the user's Google Calendar data without needing further user interaction. The user can invalidate the access token at any time within Google Accounts.<br /><br />Only these two routes are needed to integrate with Google's OAuth provider services. The routes can perfectly co-exist with any other web application framework. Whereas the web framework provides the basis for web application-specific functionality, the OAuth service provider integration is done with Apache Camel. 
This approach allows for a clean separation of integration logic from application or domain logic.<br /><br />For handling OAuth requests, web applications can also use components other than Camel's <span style="font-family: courier new;">jetty</span> component, such as the <a style="font-family: courier new;" href="http://camel.apache.org/servlet.html">servlet</a> component. For adding OAuth to Google App Engine applications, the <span style="font-family: courier new;">jetty</span> component needs to be replaced with Camel's <a style="font-family: courier new;" href="http://camel.apache.org/ghttp.html">ghttp</a> component. Here's an example:<br /><pre class="brush:java"><br />String encodedCallback = URLEncoder.encode(<br /> "https://camelcloud.appspot.com/handler", "UTF-8");<br />String encodedScope = URLEncoder.encode(<br /> "http://www.google.com/calendar/feeds/", "UTF-8");<br /><br />from("ghttp:///authorize")<br />.to("gauth://authorize"<br /> + "?callback=" + encodedCallback<br /> + "&scope=" + encodedScope);<br /><br />from("ghttp:///handler")<br />.to("gauth://upgrade")<br />.to(new StoreTokenProcessor())<br /></pre><br />The following figure gives an overview of how the OAuth sequence of interactions relates to the <span style="font-family: courier new;">gauth://authorize</span> and <span style="font-family: courier new;">gauth://upgrade</span> endpoints.<br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipLsc6yeIcT1qzNg9wvJMOmdu2t511XW28QPTmFlMhQMzHdpLILMeNSYICJGH3Sq2hZ7HmRgOdPVxciWehL2c6erP8t32WBwnD-CGJjK46Jn53HXZoCN4V9c8ItxISMVdhVSysY74K8Tk/s1600-h/gauth.png"><img style="margin: 0px auto 10px; display: block; text-align: center; cursor: pointer; width: 400px; height: 224px;" 
src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipLsc6yeIcT1qzNg9wvJMOmdu2t511XW28QPTmFlMhQMzHdpLILMeNSYICJGH3Sq2hZ7HmRgOdPVxciWehL2c6erP8t32WBwnD-CGJjK46Jn53HXZoCN4V9c8ItxISMVdhVSysY74K8Tk/s400/gauth.png" alt="" id="BLOGGER_PHOTO_ID_5435504124815279170" border="0" /></a><br /><br />Accessing a Google service with an access token (step 9) is application-specific and not covered by the <span style="font-family: courier new;">gauth</span> component. To get access to a user's Google Calendar data with an access token, one could use the GData client library. The <a style="font-family: courier new;" href="http://camel.apache.org/gauth.html">gauth</a> component documentation contains an example.<br /><br />The <span style="font-family: courier new;">gauth</span> component is the first step towards broader support for security standards such as <a href="http://oauth.net/">OAuth</a> and <a href="http://openid.net/">OpenID</a> in Apache Camel. I'm currently thinking of the following extensions:<br /><ul><li>A Camel OpenID component</li><li>A Camel OpenID/OAuth hybrid component</li><li>Support for OAuth providers other than Google</li></ul>The <span style="font-family: courier new;">gauth</span> component is currently part of the Camel 2.3 development snapshot (<a href="https://svn.apache.org/repos/asf/camel/trunk/">sources</a>).Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com4tag:blogger.com,1999:blog-2708349453904691513.post-89470146291487266232010-01-14T16:23:00.012+01:002011-04-13T20:32:06.286+02:00Accessing a security-enabled Google App Engine service from a Java clientAfter a rather long search on Google pages and forums I could only find fragmented information on how to programmatically access a Google App Engine service that requires users to authenticate. 
In this blog post I'm going to summarize my findings for a Java client application.<br /><br />By programmatic access I mean that the user doesn't need to enter a username and password into a login form created by Google but rather into an installed client application, and the client coordinates the authentication and authorization process programmatically. The mechanism used here is the <a href="http://code.google.com/apis/accounts/docs/AuthForInstalledApps.html">ClientLogin for installed applications</a>.<br /><br />The first step is to obtain an authentication token from the <a href="http://code.google.com/apis/accounts/">Google Accounts API</a>. The easiest way to do that is with the <a href="http://code.google.com/p/gdata-java-client/">GData client library for Java</a>.<br /><pre class="brush:java"><br />import java.net.URLEncoder;<br /><br />import com.google.gdata.client.GoogleAuthTokenFactory;<br />import com.google.gdata.util.AuthenticationException;<br /><br />public class AuthExample {<br /><br /> public static void main(String[] args) throws Exception {<br /><br /> String username = "myusername@gmail.com";<br /> String password = "mypassword";<br /> String serviceName = "ah";<br /><br /> GoogleAuthTokenFactory factory = new GoogleAuthTokenFactory(serviceName, "", null);<br /> // Obtain authentication token from Google Accounts<br /> String token = factory.getAuthToken(username, password, null, null, serviceName, "");<br /><br /> ...<br /> }<br />}<br /></pre><br />One has to provide a username and password and the name of the Google service that should be accessed. For Google App Engine the service name is always <span style="font-style: italic;">ah</span>, regardless of the name of the deployed application. The next step is to do a login at Google App Engine. The login URL is <span style="font-family:courier new;">https://example.appspot.com/_ah/login?continue=https%3A%2F%2Fexample.appspot.com%2Fexample&auth=DQAAAJc...qNUA8</span>. 
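For illustration, assembling that login URL in plain Java might look like the following sketch (the hostname and the token value are placeholders, not real credentials):

```java
import java.net.URLEncoder;

public class LoginUrlExample {

    // Builds the App Engine login URL described above. The continue
    // parameter (the post-login redirect target) must be URL-encoded;
    // the auth token obtained via ClientLogin is appended as-is.
    static String loginUrl(String appHost, String serviceUrl, String token)
            throws Exception {
        return "https://" + appHost + "/_ah/login?continue="
                + URLEncoder.encode(serviceUrl, "UTF-8") + "&auth=" + token;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loginUrl("example.appspot.com",
                "https://example.appspot.com/example", "DQAAtoken"));
        // https://example.appspot.com/_ah/login?continue=https%3A%2F%2Fexample.appspot.com%2Fexample&auth=DQAAtoken
    }
}
```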
The <span style="font-family:courier new;">continue</span> query parameter instructs the login service where to redirect after successful login. In this example the redirect goes to <span style="font-family:courier new;">https://example.appspot.com/example</span>. The <span style="font-family:courier new;">auth</span> query parameter contains the authentication token obtained before.<br /><pre class="brush:java"><br />import org.apache.http.HttpResponse;<br />import org.apache.http.client.HttpClient;<br />import org.apache.http.client.methods.HttpGet;<br />import org.apache.http.impl.client.DefaultHttpClient;<br /><br />public class AuthExample {<br /><br /> public static void main(String[] args) throws Exception {<br /> ...<br /><br /> String token = ...<br /> String serviceUrl = "https://example.appspot.com/example";<br /> String loginUrl = "https://example.appspot.com/_ah/login?continue=" +<br /> URLEncoder.encode(serviceUrl, "UTF-8") + "&auth=" + token;<br /><br /> HttpClient httpclient = new DefaultHttpClient();<br /> HttpGet httpget = new HttpGet(loginUrl);<br /> HttpResponse response = httpclient.execute(httpget);<br /> // process response<br /> // ...<br /><br /> httpclient.getConnectionManager().shutdown();<br /> }<br />}<br /></pre><br />When the login service sends a redirect after successful login, it also returns a cookie that allows the client to finally access the protected App Engine service at <span style="font-family:courier new;">https://example.appspot.com/example</span>. Redirect and cookie handling are done by the <span style="font-family:courier new;">httpclient</span> automatically. For the duration of the session the protected App Engine service can be accessed with that cookie.<br /><br /><span style="font-weight: bold;">Update:</span> If the service expects POST requests instead of GET requests, then an automated redirect is not an option. 
In this case, redirect handling must be disabled for the <span style="font-family:courier new;">httpclient</span> and a POST request to the <span style="font-family:courier new;">serviceUrl</span> must be created manually. Also, the authorization cookie must be set explicitly.<br /><pre class="brush:java"><br />import org.apache.http.Header;<br />import org.apache.http.HttpResponse;<br />import org.apache.http.client.HttpClient;<br />import org.apache.http.client.methods.HttpGet;<br />import org.apache.http.client.methods.HttpPost;<br />import org.apache.http.client.params.ClientPNames;<br />import org.apache.http.impl.client.DefaultHttpClient;<br /><br />public class AuthExample {<br /><br /> public static void main(String[] args) throws Exception {<br /> ...<br /><br /> String token = ...<br /> String loginUrl = "https://example.appspot.com/_ah/login?auth=" + token;<br /> String serviceUrl = "https://example.appspot.com/example";<br /><br /> HttpClient httpclient = new DefaultHttpClient();<br /> httpclient.getParams().setBooleanParameter(ClientPNames.HANDLE_REDIRECTS, false);<br /> HttpGet httpget = new HttpGet(loginUrl);<br /> HttpResponse response = httpclient.execute(httpget);<br /> // Get cookie returned from login service<br /> Header[] headers = response.getHeaders("Set-Cookie");<br /> httpclient.getConnectionManager().shutdown(); <br /><br /> httpclient = new DefaultHttpClient();<br /> HttpPost httppost = new HttpPost(serviceUrl);<br /> // set cookie returned by login service<br /> for (Header header : headers) {<br /> httppost.addHeader("Cookie", header.getValue());<br /> }<br /> // set request entity body<br /> // ...<br /><br /> response = httpclient.execute(httppost);<br /> // process response<br /> // ...<br /><br /> httpclient.getConnectionManager().shutdown(); <br /> }<br />}<br /></pre><span style="font-weight: bold;">Update:</span> Login to a local development server. 
To get access to a security-enabled application on the local development server there's no need for getting an authentication token. Instead, POST an email address and a redirect URL to <span style="font-family:courier new;">http://localhost:<port>/_ah/login</span> and the server returns an authorization cookie. Here's an example:<br /><pre class="brush:java">HttpClient httpClient = new DefaultHttpClient();<br />httpClient.getParams().setBooleanParameter(<br /> ClientPNames.HANDLE_REDIRECTS, false);<br />// POST login data to GAE SDK dev server<br />HttpPost httpPost = new HttpPost(<br /> "http://localhost:8888/_ah/login");<br />httpPost.setHeader("Content-Type",<br /> "application/x-www-form-urlencoded");<br />String email = URLEncoder.encode(<br /> "test@example.com", "UTF-8");<br />String redirectUrl = URLEncoder.encode(<br /> "http://localhost:8888", "UTF-8");<br />httpPost.setEntity(new StringEntity(<br /> "email=" + email + "&continue=" + redirectUrl));<br />HttpResponse response = httpClient.execute(httpPost);<br />// Extract authorization cookie from response<br />String cookie = response.getFirstHeader("Set-Cookie").getValue();<br />httpClient.getConnectionManager().shutdown();<br />// Create a new client and access the secured<br />// service with the authorization cookie<br />httpClient = new DefaultHttpClient();<br />HttpGet httpget = new HttpGet("http://localhost:8888");<br />httpget.addHeader("Cookie", cookie);<br />response = httpClient.execute(httpget);<br />System.out.println(IOUtils.toString(response.getEntity().getContent()));<br />httpClient.getConnectionManager().shutdown();<br /></pre>Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com11tag:blogger.com,1999:blog-2708349453904691513.post-50739537037941682172009-12-12T08:52:00.006+01:002011-04-13T20:17:01.306+02:00New features in grails-jaxrs 0.3In this blog post I present some new features of the recently released grails-jaxrs 0.3 plugin. 
A complete list of new features is available in the <a href="http://code.google.com/p/grails-jaxrs/wiki/ReleaseNotes_0_3">release notes</a>. A feature overview and links to the complete documentation are on the <a href="http://code.google.com/p/grails-jaxrs/">plugin home page</a>.<br /><br />grails-jaxrs is a Grails plugin that supports the development of RESTful web services based on the <a href="http://jcp.org/en/jsr/detail?id=311">Java API for RESTful Web Services</a> (JSR 311: JAX-RS). It is targeted at developers who want to structure the web service layer of an application in a JSR 311 compatible way but want to continue using Grails' powerful features such as GORM, automated XML and JSON marshalling, Grails services, Grails filters and so on. This plugin is an alternative to Grails' built-in mechanism for implementing RESTful web services.<br /><br />The following example shows how to do content negotiation for Grails domain objects. Grails domain classes like<br /><pre class="brush:groovy"><br />class Person {<br /> String firstName<br /> String lastName<br />}<br /></pre><br />can now be used in JAX-RS resource methods directly (e.g. 
<span style="font-family: courier new;">Person</span> parameter in the <span style="font-family: courier new;">create</span> method):<br /><pre class="brush:groovy"><br />import static javax.ws.rs.core.UriBuilder.fromPath<br /><br />import javax.ws.rs.Consumes<br />import javax.ws.rs.Path<br />import javax.ws.rs.Produces<br />import javax.ws.rs.POST<br />import javax.ws.rs.core.Response<br /><br />@Path('/api/person')<br />@Consumes(['application/xml','application/json'])<br />@Produces(['application/xml','application/json'])<br />class PersonCollectionResource {<br /><br /> @POST<br /> Response create(Person person) {<br /> person.save() // use GORM<br /> URI uri = fromPath(person.id as String).build()<br /> Response.created(uri).entity(person).build()<br /> }<br /><br /> // ...<br /> <br />}<br /></pre><br />Content negotiation and conversion between domain objects and their XML or JSON representations is done by <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Domain_object_providers">domain object providers</a>. There's no need any more for application code to deal with representation formats directly.<br /><br />The <span style="font-family: courier new;">PersonCollectionResource.create</span> method handles POST requests for creating new Person objects in the database. The method uses <a href="http://grails.org/doc/1.1.2/guide/5.%20Object%20Relational%20Mapping%20%28GORM%29.html">GORM</a> to persist the domain object. 
Clients can send either XML or JSON representations for POSTing person data (see <span style="font-family: courier new;">Content-Type</span> header):<br /><pre><br />POST /hello/api/person HTTP/1.1<br />Content-Type: application/xml<br />Accept: application/xml<br />Host: localhost:8080<br />Content-Length: 78<br /><br /><person><br /><firstname>Sam</firstname><br /><lastname>Hill</lastname><br /></person><br /></pre><br />or<br /><pre><br />POST /hello/api/person HTTP/1.1<br />Content-Type: application/json<br />Accept: application/json<br />Host: localhost:8080<br />Content-Length: 58<br /><br />{"class":"Person","firstName":"Fabien","lastName":"Barel"}<br /></pre><br />In either case, the plugin will convert it to a <span style="font-family: courier new;">Person</span> object, as required by the <span style="font-family: courier new;">person </span>parameter. For creating a response the method uses the JAX-RS API. It first creates a URI for the response <span style="font-family: courier new;">Location</span> header and uses the <span style="font-family: courier new;">Response</span> builder to set the status code to 201 (<span style="font-family: courier new;">created</span>) and the response entity. Note that the method itself doesn't create an XML or JSON representation of the response domain object. This is again done by a domain object provider which uses the <span style="font-family: courier new;">Accept</span> request header to determine the response representation format. 
The responses to the above POST requests are:<br /><pre><br />HTTP/1.1 201 Created<br />Content-Type: application/xml<br />Location: http://localhost:8080/hello/api/person/1<br />Transfer-Encoding: chunked<br />Server: Jetty(6.1.14)<br /><br /><?xml version="1.0" encoding="UTF-8"?><br /><person id="1"><br /><firstname>Sam</firstname><br /><lastname>Hill</lastname><br /></person><br /></pre><br />and<br /><pre><br />HTTP/1.1 201 Created<br />Content-Type: application/json<br />Location: http://localhost:8080/hello/api/person/2<br />Transfer-Encoding: chunked<br />Server: Jetty(6.1.14)<br /><br />{"class":"Person","id":"2","firstName":"Fabien","lastName":"Barel"}<br /></pre><br />The <span style="font-family: courier new;">PersonCollectionResource.create</span> method is more verbose than necessary. It could equally be written as<br /><pre class="brush:groovy"><br />import static org.grails.jaxrs.response.Responses.*<br /><br />@Path('/api/person')<br />@Consumes(['application/xml','application/json'])<br />@Produces(['application/xml','application/json'])<br />class PersonCollectionResource {<br /><br /> @POST<br /> Response create(Person person) {<br /> created person.save()<br /> }<br /><br /> // ...<br /> <br />}<br /></pre><br />using helper methods (a mini-DSL) from <span style="font-family: courier new;">org.grails.jaxrs.response.Responses</span>. That's exactly the code that is generated when using <a href="http://code.google.com/p/grails-jaxrs/wiki/GettingStarted#Scaffolding">scaffolding</a> for the <span style="font-family: courier new;">Person</span> domain class, i.e.<br /><pre><br />grails generate-resources person<br /></pre><br />With the grails-jaxrs scaffolding feature, one can generate a RESTful service interface for domain objects supporting the HTTP methods POST, GET, PUT and DELETE. 
A scaffolding example is given in the <a href="http://code.google.com/p/grails-jaxrs/wiki/GettingStarted#Scaffolding">Scaffolding</a> section of the grails-jaxrs documentation; a walk-through of the generated code is in the <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Using_GORM">Using GORM</a> section.<br /><br />By default, grails-jaxrs uses Grails' XML and JSON converters for converting between domain objects and their XML or JSON representations. Applications can easily customize this conversion logic as explained in the <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Custom_providers">Custom entity providers</a> section.<br /><br />Besides usage of GORM, grails-jaxrs also supports <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Service_injection">auto-injection of Grails services</a> into JAX-RS resource and provider classes and usage of <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Applying_filters">Grails filters</a>, to mention a few. With version 0.3 the included JAX-RS implementations have been upgraded to their latest versions: Jersey 1.1.4.1 and Restlet 2.0-M6.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com4tag:blogger.com,1999:blog-2708349453904691513.post-65499603993102445982009-11-17T06:27:00.002+01:002009-11-17T06:39:32.075+01:00Camel components for Google App EngineThe upcoming Apache Camel version 2.1 will include <a href="http://cwiki.apache.org/confluence/display/CAMEL/GAE">components for connecting to the cloud computing services of Google App Engine</a>. 
At the moment the following three components are available.<br /><br /><ul><li><a style="font-family: courier new;" href="http://cwiki.apache.org/confluence/display/CAMEL/ghttp">ghttp</a>: Provides connectivity to the GAE <a href="http://code.google.com/appengine/docs/java/urlfetch/" rel="nofollow">URL fetch service</a> but can also be used to receive messages from servlets</li><li><a style="font-family: courier new;" href="http://cwiki.apache.org/confluence/display/CAMEL/gtask">gtask</a>: Supports asynchronous message processing on GAE by using the <a href="http://code.google.com/appengine/docs/java/taskqueue/" rel="nofollow">task queueing service</a> as message queue. </li><li><a style="font-family: courier new;" href="http://cwiki.apache.org/confluence/display/CAMEL/gmail">gmail</a>: Supports sending of emails via the GAE <a href="http://code.google.com/appengine/docs/java/mail/" rel="nofollow">mail service</a>. Receiving mails is not supported yet but will be added later.</li></ul><br />Camel components for the other Google App Engine cloud computing services such as Memcache service, XMPP service, Images service, Datastore Service and the Authentication service are planned.<br /><br />There's also a <a href="http://cwiki.apache.org/confluence/display/CAMEL/Tutorial+for+Camel+on+Google+App+Engine">tutorial</a> that explains how to develop a non-trivial Camel GAE application using the Camel components for GAE.<br /><br />From a conceptual point of view, connecting to cloud computing services via Camel components introduces an abstraction-layer that decouples Camel applications from provider-specific cloud service interfaces. Supporting several cloud computing environments in Camel can significantly reduce the burden of migrating Camel applications from one provider to another. 
The Camel components for Google App Engine are a first step in this direction.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com0tag:blogger.com,1999:blog-2708349453904691513.post-32375410889422039342009-10-18T09:32:00.007+02:002011-04-13T20:19:11.195+02:00First steps with Apache Camel on Google App EngineThis post describes how to get a simple Camel 2 application running on Google App Engine (GAE). I'll focus on the workarounds and fixes that were necessary to succeed. Please note that the following descriptions are by no means best practices or recommendations. They only describe my first steps, for which better solutions will likely exist in the future. I plan to work on improvements to<br /><ul><li>make Camel deployments on GAE easier and to</li><li>allow Camel applications to access GAE services via Camel components</li></ul>For my experiments, I was using a Camel 2.1 development snapshot, the App Engine SDK 1.2.6 and the Google Plugin for Eclipse, which makes local testing and remote deployment very easy. The Camel components I used are:<br /><ul><li>camel-core</li><li>camel-spring</li><li>camel-servlet</li><li>camel-http</li></ul>The following snippet shows the route definition of the sample application. It uses the camel-servlet component to receive input via HTTP, converts the HTTP request body to a String, prepends a "Hello " to the body and returns the result.<br /><pre class="brush:java"><br />package example;<br /><br />import org.apache.camel.builder.RouteBuilder;<br /><br />public class ExampleRoute extends RouteBuilder {<br /><br /> @Override<br /> public void configure() throws Exception {<br /> from("servlet:/test")<br /> .convertBodyTo(String.class)<br /> .transform(constant("Hello ").append(body()));<br /> }<br />}<br /></pre><br />The route doesn't make use of any GAE services (URL fetch, task queues, storage, mail, ...). 
Also, message processing is synchronous because GAE doesn't allow applications to create their own threads. For example, using SEDA or JMS queues will not work.<br /><br />For processing HTTP requests, I created my own servlet class and extended the <span style=";font-family:courier new;font-size:85%;" >CamelHttpTransportServlet</span> from the camel-servlet component.<br /><pre class="brush:java"><br />package example;<br /><br />import org.apache.camel.component.servlet.CamelHttpTransportServlet;<br />import org.apache.camel.management.JmxSystemPropertyKeys;<br /><br />public class ExampleServlet extends CamelHttpTransportServlet {<br /><br /> static {<br /> System.setProperty(JmxSystemPropertyKeys.DISABLED, "true");<br /> }<br /><br />}<br /></pre><br />The only thing this servlet does is to disable all JMX-related functionality because the GAE JRE doesn't support JMX. All request processing and dispatching is done by the <span style=";font-family:courier new;font-size:85%;" >CamelHttpTransportServlet</span>. 
Configuring the servlet in the <span style=";font-family:courier new;font-size:85%;" >web.xml</span> was done as follows.<br /><pre class="brush:xml"><br /><servlet><br /><servlet-name>CamelServlet</servlet-name><br /><servlet-class>example.ExampleServlet</servlet-class><br /><init-param><br /> <param-name>contextConfigLocation</param-name><br /> <param-value>context.xml</param-value><br /></init-param><br /></servlet><br /><br /><servlet-mapping><br /><servlet-name>CamelServlet</servlet-name><br /><url-pattern>/camel/*</url-pattern><br /></servlet-mapping><br /></pre><br />The servlet <span style=";font-family:courier new;font-size:85%;" >init-param</span> points to the Spring application context that configures the route builder and the Camel context:<br /><pre class="brush:xml"><br /><beans xmlns="http://www.springframework.org/schema/beans"<br /> xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"<br /> xsi:schemaLocation="<br />http://www.springframework.org/schema/beans<br />http://www.springframework.org/schema/beans/spring-beans-2.5.xsd"><br /><br /><bean id="camelContext"<br /> class="org.apache.camel.spring.CamelContextFactoryBean"><br /> <property name="builderRefs"><br /> <list><br /> <ref bean="routeBuilderRef"/><br /> </list><br /> </property><br /></bean><br /><br /><bean id="routeBuilderRef"<br /> class="org.apache.camel.model.RouteBuilderDefinition"><br /> <constructor-arg value="routeBuilder" /><br /></bean><br /><br /><bean id="routeBuilder"<br /> class="example.ExampleRoute"><br /></bean><br /><br /></beans><br /></pre><br />A severe limitation is that one cannot use the Camel-specific configuration XML schema from the <span style=";font-family:courier new;font-size:85%;" >http://camel.apache.org/schema/spring</span> namespace for configuring the Camel context. The problem is that the <span style=";font-family:courier new;font-size:85%;" >CamelNamespaceHandler</span> uses JAXB to parse bean definitions, and JAXB isn't supported by GAE either. 
One has to fall back to plain old <span style=";font-family:courier new;font-size:85%;" ><bean></span> definitions (POBD?) to configure the Camel context in Spring. Using Spring JavaConfig or something similar would make more sense here but I didn't try it.<br /><br />Another JAXB-related problem arises with Camel's Spring DSL. It is also processed with JAXB and therefore cannot be used on GAE.<br /><br />Going completely without Spring leads to another problem. In this case the <span style=";font-family:courier new;font-size:85%;" >CamelContext</span> uses a <span style=";font-family:courier new;font-size:85%;" >JndiRegistry</span> by default that depends on <span style=";font-family:courier new;font-size:85%;" >javax.naming.InitialContext</span>. This class isn't on the JRE whitelist either. Writing a simple Map-based implementation of <span style=";font-family:courier new;font-size:85%;" >org.apache.camel.impl.Registry</span> and configuring the <span style="font-size:85%;"><span style="font-family:courier new;">CamelContext</span></span> with it does the trick.<br /><br />The last obstacle to get the sample application running was to replace Camel's <span style="font-size:85%;"><span style="font-family:courier new;">UuidGenerator</span></span> with another one that uses <span style=";font-family:courier new;font-size:85%;" >java.util.UUID</span> from the JRE. Camel's original <span style=";font-family:courier new;font-size:85%;" >UuidGenerator</span> also uses a class that is not on the JRE whitelist. Since replacement by configuration was not possible, changes to the Camel code base were necessary (patch already submitted).<br /><br />After deploying the application to GAE and POSTing a request containing "Martin" to <span style="font-size:85%;"><span style="font-family:courier new;">http://<appname>.appspot.com/camel/test</span></span> I was able to send myself greetings. 
In the URL, <span style=";font-family:courier new;font-size:85%;" ><appname></span> must of course be replaced with the name of an existing application.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com3tag:blogger.com,1999:blog-2708349453904691513.post-36125729974127278792009-10-14T15:00:00.003+02:002009-10-14T15:13:53.870+02:00IPF 2.0 milestone 2 and IPF Tools milestones releasedI'm pleased to announce the following milestone releases from the <a href="http://repo.openehealth.org/confluence/display/ipf2/Home">IPF core</a> project and the <a href="http://repo.openehealth.org/confluence/display/ipftools/Home">IPF Tools</a> project.<br /><br />IPF 2.0 milestone 2 (<a href="http://repo.openehealth.org/confluence/display/ipf2/IPF+2.0-m2">release notes</a>)<br />IPF Tools<br />- IPF Runtime 2.0 milestone 2 (<a href="http://repo.openehealth.org/confluence/display/ipftools/IPF+Runtime+2.0.0.m2">release notes</a>)<br />- IPF Manager 2.0 milestone 2 (<a href="http://repo.openehealth.org/confluence/display/ipftools/IPF+Manager+2.0.0.m2">release notes</a>)<br />- IPF IDE 1.0 milestone 2 (<a href="http://repo.openehealth.org/confluence/display/ipftools/IPF+IDE+1.0.0.m2#">release notes</a>)<br /><br /><span style="font-size:130%;">IPF 2.0 milestone 2</span><br /><br />This release is feature-equivalent to <a href="http://repo.openehealth.org/confluence/display/ipf/IPF+1.7.0">IPF 1.7.0</a> but runs on Camel 2.0. Users who plan to upgrade to IPF 2.0 or Camel 2.0 in the near future are highly recommended to use this milestone release. <b>Please note that IPF 2.0 is not backwards-compatible to IPF 1.x</b>. This is mainly due to non-backwards compatible API changes in Camel 2.0. 
It is therefore important to carefully read the <a href="http://camel.apache.org/camel-200-release.html">Camel 2.0.0 release notes</a> as well as the <a href="http://repo.openehealth.org/confluence/display/ipf2/IPF+2.0-m2#IPF2.0-m2-Upgradenotes">IPF 2.0-m2 upgrade notes</a>.<br /><br />With the release of IPF 2.0-m2 and IPF 1.7.0, IPF 1.x development will go into maintenance mode and new features will be developed on the IPF 2.0 development branch. We leave it open whether to backport selected IPF 2.x features to IPF 1.x. Please add any backport requests to the <a href="http://gforge.openehealth.org/gf/project/ipf/tracker/">IPF issue tracker</a>.<br /><br />Other changes compared to IPF 1.7.0 are:<br /><br /><ul><li>The platform manager has been moved to the IPF Tools project.</li><li>The IPF OSGi distributable (IPF runtime) and the IPF OSGi documentation have been moved to the IPF Tools project.</li><li>The HL7-independent parts of the mapping service have been factored out into a new commons-map component.</li><li>The IPF 2.0 documentation has been forked from the IPF 1.7 documentation and revised.</li></ul><span style="font-size:130%;"><br />IPF Runtime 2.0 milestone 2</span><br /><br />The IPF Runtime is an IPF distribution that is running on the Equinox OSGi platform. It is available as an Eclipse plugin or as a standalone package. The runtime is used to develop OSGi-based IPF applications.<br /><span style="font-size:130%;"><br />IPF Manager 2.0 milestone 2</span><br /><br />IPF Manager is an Eclipse application for managing IPF services and applications. It is available as an Eclipse plugin or as a standalone package. In its current state it provides a flow management user interface and a general-purpose JMX client. 
The IPF Manager is compatible with IPF Runtime 2.0-m2.<br /><span style="font-size:130%;"><br />IPF IDE 1.0 milestone 2</span><br /><br />The IPF IDE supports developers in creating, testing and packaging IPF applications within the Eclipse plugin development environment (PDE) on top of the IPF runtime. The IPF IDE is compatible with IPF Runtime 2.0-m2.Martin Krasserhttp://www.blogger.com/profile/11765963540395771125noreply@blogger.com0tag:blogger.com,1999:blog-2708349453904691513.post-69703748520820021002009-10-07T11:09:00.003+02:002009-10-07T11:25:17.862+02:00IPF 1.7.0 releasedI'm pleased to announce the release of <a href="http://repo.openehealth.org/confluence/display/ipf/Home">IPF 1.7.0</a>. The main focus of this release was support for clinical standards, in particular the <a href="http://repo.openehealth.org/confluence/display/ipf/IHE+support">IHE profiles</a><a href="http://repo.openehealth.org/confluence/display/ipf/IHE+support"> XDS.a, XDS.b, PIX, PDQ</a> and <a href="http://repo.openehealth.org/confluence/display/ipf/CDA+support">support for the clinical document architecture (CDA) and the Continuity of Care Document (CCD) content profile</a>. The release notes are <a href="http://repo.openehealth.org/confluence/display/ipf/IPF+1.7.0">here</a>.<br /><br />With IPF's IHE support, IHE actor interfaces can be implemented in IPF routes via URIs. This is as simple as using other Camel or IPF components such as the HTTP or JMS components. The URIs denote individual transactions (ITI) in IHE profiles. For example<br /><br /><span style="font-family:courier new;">from('xds-iti18:myIti18Service')</span><br /><span style="font-family:courier new;">...</span><br /><br />implements the 'XDS Registry Stored Query' service interface of an XDS document registry and can be used from any XDS ITI18-compliant consumer. 
Such a consumer can also be implemented using the same IPF xds-iti18 component on the client side, e.g.<br /><br /><span style="font-family:courier new;">...</span><br /><span style="font-family:courier new;">.to('xds-iti18://somehost:8080/myWebApp/services/myIti18Service')</span><br /><br /><br />All the low-level details, such as communicating with ebXML messages over SOAP, are handled by that component. IPF routes deal with easy-to-use object representations of messages exchanged within IHE transactions. The full list of supported transactions is given in the <a href="http://repo.openehealth.org/confluence/display/ipf/IHE+support#IHEsupport-Quickreference">IHE quick reference</a>.<br /><br />With IPF's CDA and CCD support, clinical documents can be created, parsed, rendered and queried/analyzed using a domain-specific language (DSL). This DSL hides away most of the technical details you usually encounter when dealing with the complex XML representation of clinical documents. In addition to these content-DSL extensions, IPF also provides some route DSL extensions for parsing, validating and marshalling CDA documents in IPF routes.<br /><br />Here's an excerpt of new IPF 1.7.0 features added since 1.6.0:<br /><br /><ul><li><a href="http://repo.openehealth.org/confluence/display/ipf/IHE+support">IHE support</a><ul><li>IHE XDS.a+b transactions (ITI 14-18, 41-43)</li><li>IHE PIX transactions (ITI 8-10)</li><li>IHE PDQ transactions (ITI 21-22)</li><li>IHE ATNA for all the above transactions</li></ul></li><li><a href="http://repo.openehealth.org/confluence/display/ipf/CDA+support">CDA support</a><ul><li>Generic CDA support</li><li>CCD profile support</li></ul></li><li>Advanced XML processing<ul><li><a href="http://repo.openehealth.org/confluence/display/ipf/Core+features#Corefeatures-TransmogrifierImpls">Caching XSLT transmogrifier</a></li><li><a href="http://repo.openehealth.org/confluence/display/ipf/Core+features#Corefeatures-Validator">Schematron validator</a></li></ul></li><li><a href="http://repo.openehealth.org/confluence/display/ipf/XDS+repository">Detailed XDS tutorial</a></li><li><a href="http://repo.openehealth.org/confluence/display/ipf/Performance+measurement">Performance measurement support</a></li><li><a href="http://repo.openehealth.org/confluence/display/ipf/Flow+removal">Scheduled flow management database cleanup</a></li><li>...</li></ul><br />IPF 1.7.0 is based on Camel 1.6. In parallel, a Camel 2.0-based version is being developed on the <a href="http://repo.openehealth.org/confluence/display/ipf2/Home">IPF 2.0</a> branch. The current development snapshot is feature-equivalent with IPF 1.7.0 but runs on Camel 2.0. The next IPF 2.0 milestone release (2.0-m2) will follow within the next one or two weeks. I recommend using the 2.0 milestone releases unless you are upgrading from an older IPF 1.x release. After releasing IPF 2.0.0, work on IPF 1.x will go into maintenance mode (but we leave it open whether to backport selected IPF 2.x features).<br /><br />Exciting new features in IPF 2.0 which didn't make it into IPF 1.7 are Eclipse-based IPF development tools and improvements to IPF's OSGi support. These features are <a href="http://gforge.openehealth.org/gf/project/ipf-tools/">developed</a> and documented in a separate <a href="http://repo.openehealth.org/confluence/display/ipftools/Home">IPF Tools</a> project. The Eclipse-based IPF management client has been moved to this project as well. 
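The two ITI-18 snippets shown earlier can be combined into a single route. The following is a sketch only: the route builder class name, the endpoint names and the forwarding target are made up, and a real route would validate and process the query where the elided steps are indicated.<br /><br /><pre class="brush:groovy"><br />import org.apache.camel.spring.SpringRouteBuilder<br /><br />class Iti18RouteBuilder extends SpringRouteBuilder {<br /> void configure() {<br /> // Accept Registry Stored Query requests (service side)<br /> from('xds-iti18:myIti18Service')<br /> // ... validation / transformation steps would go here ...<br /> // Forward the query to another ITI-18 endpoint (client side)<br /> .to('xds-iti18://somehost:8080/myWebApp/services/myIti18Service')<br /> }<br />}<br /></pre>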
I'll give a more detailed IPF 2.0 overview in a separate post.<br /><br />Many thanks to the whole development team and contributors for their excellent and high-quality work!<br /><br /><span style="font-size:130%;">New release of JSR 311 plugin for Grails (2009-09-23)</span><br /><br />About two weeks ago, the latest version (0.2) of the <a href="http://code.google.com/p/grails-jaxrs/">JSR 311 plugin for Grails</a> (grails-jaxrs) was released. In contrast to the <a href="http://krasserm.blogspot.com/2009/07/jsr-311-plugin-for-grails.html">first release</a> (0.1), which was mainly a proof-of-concept, this new release focused, among other things, on a closer Grails integration. In particular:<br /><ul><li>JAX-RS classes like <a href="http://code.google.com/p/grails-jaxrs/wiki/GettingStarted#Create_a_resource">resource classes</a> and <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Entity_providers">entity providers</a> are now auto-detected by the plugin. There's no longer any need to add them to the Spring application context manually.</li></ul><ul><li>JAX-RS classes managed by the plugin can be changed at runtime in development mode. 
Code changes are detected by the plugin and reloaded in the same way as Grails controllers or services are.</li></ul><ul><li>Services and other Spring beans are <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Service_injection">auto-injected by-name</a> into JAX-RS resources and providers.</li></ul><ul><li>The plugin also extends Grails' command-line interface for <a href="http://code.google.com/p/grails-jaxrs/wiki/GettingStarted#Create_a_resource">creating JAX-RS resources from scratch</a> as well as <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Scaffolding">generating JAX-RS resources from existing domain objects</a> (scaffolding). With scaffolding, RESTful service interfaces can be auto-generated for individual domain objects (still early-access).</li></ul>Another enhancement in this version is support for <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Google_App_Engine">deployments on the Google App Engine</a> (GAE). This required supporting <a href="http://www.restlet.org/">Restlet</a> as a JAX-RS implementation in addition to <a href="http://jersey.dev.java.net/">Jersey</a>. In contrast to Jersey, Restlet can be deployed to GAE, and since version 2.0-m4 <a href="http://restlet.tigris.org/issues/show_bug.cgi?id=818">so can its JAX-RS extension</a>. For a running <a href="http://code.google.com/p/grails-jaxrs/wiki/AdvancedFeatures#Google_App_Engine">example</a> go to <a href="http://grails-jaxrs.appspot.com/test?name=World">http://grails-jaxrs.appspot.com/test?name=World</a>. Please note that initializing Grails applications on GAE can take very long (up to 30 seconds) at the moment. Subsequent requests are served much faster, of course.<br /><br />The documentation has also been extended and completely revised. 
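The by-name service injection mentioned above can be sketched as follows. All names here are hypothetical (the resource class and the NoteService Grails service are made up); the point is only that a property whose name matches a Spring bean name gets that bean injected by the plugin:<br /><br /><pre class="brush:groovy"><br />import javax.ws.rs.GET<br />import javax.ws.rs.Path<br />import javax.ws.rs.Produces<br /><br />@Path('/count')<br />class NoteCountResource {<br /><br /> // Injected by name: matches the 'noteService' Spring bean<br /> // that Grails creates for a (hypothetical) NoteService service<br /> def noteService<br /><br /> @GET<br /> @Produces('text/plain')<br /> String count() {<br /> "${noteService.countNotes()} notes"<br /> }<br />}<br /></pre>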
The easiest way to get started with the plugin is the <a href="http://code.google.com/p/grails-jaxrs/wiki/GettingStarted">Getting Started</a> guide.<br /><br /><span style="font-size:130%;">JSR 311 plugin for Grails (2009-07-08)</span><br /><br />Project homepage: <a href="http://code.google.com/p/grails-jaxrs/">grails-jaxrs</a><br /><br />Recently, I wanted to make a full two-week mountain bike tour through the Alps, but bad weather on a few days caused me to make detours on the less rocky roads of <a href="http://jcp.org/en/jsr/detail?id=311">JSR 311</a> (JAX-RS: The Java API for RESTful Web Services). I took the chance to read the spec, did some hacking with <a href="https://jersey.dev.java.net/">Jersey</a>, and I must say I really like working with it. I liked it so much that I started to use it inside <a href="http://www.grails.org/">Grails</a> to combine it with features such as <a href="http://grails.org/doc/1.1.1/guide/5.%20Object%20Relational%20Mapping%20%28GORM%29.html">GORM</a> and <a href="http://grails.org/doc/1.1.1/guide/6.%20The%20Web%20Layer.html#6.1.7%20XML%20and%20JSON%20Responses">XML and JSON marshalling</a>.<br /><br />From my experiments, I factored out what I think is reusable into a Grails plugin named <a href="http://code.google.com/p/grails-jaxrs/">grails-jaxrs</a> and made it open source. 
The plugin takes care of initializing Jersey inside a Grails application, implements a controller that does the dispatch to Jersey and provides some Grails-specific entity provider implementations.<br /><br />Here's a very simple example of a resource class that represents a collection of notes and that makes use of Grails object relational mapping (GORM) and XML marshalling:<br /><pre class="brush:groovy"><br />import grails.converters.*<br /><br />import javax.ws.rs.Consumes<br />import javax.ws.rs.GET<br />import javax.ws.rs.Path<br />import javax.ws.rs.POST<br />import javax.ws.rs.Produces<br />import javax.ws.rs.core.Response<br />import javax.ws.rs.core.UriBuilder<br /><br />@Path('/notes')<br />class NotesResource {<br /><br /> @POST<br /> @Consumes('text/plain')<br /> @Produces('text/xml')<br /> Response addNote(String text) {<br /><br /> // Create a new Note object and save it to the DB<br /> def note = new Note(text:text).save()<br /> <br /> // Construct the URI for the newly created note<br /> URI uri = UriBuilder.fromPath(note.id as String).build()<br /> <br /> // Return an XML representation of the note object<br /> // along with a Location response header with the URI<br /> Response.created(uri).entity(note as XML).build()<br /> }<br /><br /> @GET<br /> @Produces('text/xml')<br /> Response getNotes() {<br /><br /> // Find all notes in the database and return<br /> // an XML representation of the note list<br /> Response.ok(Note.findAll() as XML).build()<br /> }<br /><br />}<br /></pre><br />A note is an object that contains some text. Notes are instances of the <span style="font-family:courier new;">Note</span> Grails domain class. A collection of notes is made available through a RESTful service interface using the above <span style="font-family:courier new;">NotesResource</span> class and the <a href="http://code.google.com/p/grails-jaxrs/">grails-jaxrs</a> plugin. 
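Assuming the application is running locally (host and port below are hypothetical), the resource above can be exercised with a plain Groovy script using only JDK classes:<br /><br /><pre class="brush:groovy"><br />// Create a new note by POSTing plain text<br />def con = new URL('http://localhost:8080/notes').openConnection()<br />con.doOutput = true<br />con.requestMethod = 'POST'<br />con.setRequestProperty('Content-Type', 'text/plain')<br />con.outputStream.withWriter { it << 'my first note' }<br /><br />// Expect 201 Created with a Location header pointing to the new note<br />assert con.responseCode == 201<br />println con.getHeaderField('Location')<br /><br />// Fetch the XML representation of all notes<br />println new URL('http://localhost:8080/notes').text<br /></pre>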
A POST to <span style="font-family:courier new;">http://host:port/notes/</span> creates a new note. A GET to <span style="font-family:courier new;">http://host:port/notes</span> returns an XML representation of all existing notes in the database. For more details refer to the <a href="http://code.google.com/p/grails-jaxrs/wiki/GettingStarted">getting started tutorial</a>.<br /><br /><span style="font-size:130%;">Moving towards IPF 1.7 (2009-05-24)</span><br /><br />Work on the next <a href="http://gforge.openehealth.org/gf/project/ipf/">IPF</a> release is in progress and focuses on:<br /><br /><ul><li>Components for implementing <a href="http://www.ihe.net/">IHE</a> actor interfaces (XDS.a, XDS.b, PIX, PDQ). Implementing IHE actor interfaces will be as simple as using other Camel components for communication (e.g. the HTTP component). IPF IHE components represent transactions in <a href="http://www.ihe.net/profiles/index.cfm">IHE profiles</a>. For example, to create a web service for the server side of the ITI-41 transaction (provide and register document set) from the XDS.b profile, just write <span style="font-family:courier new;">from('xdsb-iti41:service1')</span> in your route definition. <span style="font-family:courier new;">xdsb-iti41</span> is the name of the component, <span style="font-family:courier new;">service1</span> the endpoint name. The rest of the route has to deal with connecting to (proprietary) backend systems that implement the corresponding actor functionality (document registry/repository). For more detailed information refer to <a href="http://architects.dzone.com/articles/introduction-open-ehealth">this article</a> (section <a href="http://architects.dzone.com/articles/introduction-open-ehealth?page=0,3">Outlook</a>). 
</li></ul><ul><li>DSL (Groovy builder) for creating <a href="http://en.wikipedia.org/wiki/Clinical_Document_Architecture">CDA</a> documents. This DSL supports the creation of structurally correct CDA documents by enforcing CDA-relevant schema definitions but without dealing with low-level XML details. See also <a href="http://architects.dzone.com/articles/introduction-open-ehealth?page=0,3">outlook on IPF's CDA support</a>.</li></ul><ul><li>Extension of OSGi support. Not all features of IPF have been fully <a href="http://repo.openehealth.org/confluence/display/ipf/OSGi+support">OSGi-enabled</a> with release 1.6, such as large binary support or the event infrastructure. This will be fixed with IPF 1.7. You'll also be able to use the new IHE components on OSGi platforms. We also plan to extend existing Camel components to make use of standard OSGi services such as the HTTP service.</li></ul><ul><li>Better IDE (Eclipse) integration, especially for developing IPF OSGi applications. This includes application development and packaging with Eclipse PDE tools and providing an Eclipse update site for downloading the IPF runtime. Code completion for DSL extensions is currently under discussion.</li></ul><ul><li>Performance testing framework. DSL extensions for measuring the performance of IPF applications, including calculation and reporting of message processing statistics during load tests.</li></ul><br />In parallel, we will also create an SVN branch for experimenting with Camel 2.0 milestone releases. I'll let you know about our upgrade/migration experiences in a separate blog post.<br /><br /><span style="font-size:130%;">Open eHealth Integration Platform 1.6.0 released (2009-04-13)</span><br /><br />Last week we released the <a href="http://gforge.openehealth.org/gf/project/ipf/">Open eHealth Integration Platform</a> (IPF) version 1.6.0. 
IPF is an extension of the <a href="http://camel.apache.org/">Apache Camel</a> routing and mediation engine. It has an application programming layer based on the <a href="http://groovy.codehaus.org/">Groovy</a> programming language and comes with comprehensive support for message processing and connecting systems in the eHealth domain. Please see the <a href="http://repo.openehealth.org/confluence/display/ipf/IPF+1.6.0">release notes</a> for further details.<br /><br />Give it a try! We welcome your feedback!