Access Logging will be available in Grizzly 2.3.12!

Thanks to Grizzly contributor Pier Fumagalli, a long-standing issue to implement access logging has been closed.

Pier's implementation is based on Apache's mod_log_config and thus will probably be familiar to a lot of developers.

Getting Started

Logging is enabled via a Probe that may be registered with one or more NetworkListeners. It is recommended to use org.glassfish.grizzly.http.server.accesslog.AccessLogBuilder to create instances of the probe.

For example:
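
The code that originally appeared here is not preserved, so below is a minimal sketch. It assumes the AccessLogBuilder shown above and its instrument() convenience method for attaching the probe; check the Grizzly 2.3.12 javadocs for the exact call.

    import java.io.IOException;

    import org.glassfish.grizzly.http.server.HttpServer;
    import org.glassfish.grizzly.http.server.accesslog.AccessLogBuilder;

    public class AccessLogExample {
        public static void main(String[] args) throws IOException {
            final HttpServer httpServer = HttpServer.createSimpleServer();

            // Build the default access-log probe and attach it to all listeners
            // associated with this server (instrument() is assumed here; alternatively,
            // build() the probe and register it manually).
            new AccessLogBuilder("access.log")
                    .instrument(httpServer.getServerConfiguration());

            httpServer.start();
            System.out.println("Press any key to stop the server...");
            System.in.read();
        }
    }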

The above code registers the default access logging probe globally to all listeners
associated with the httpServer. However, the following configuration options
are available through the builder:

AccessLogBuilder Configuration Properties
format (either String or AccessLogFormat, described later): Specifies the log record format. If this is not explicitly set, it defaults to the NCSA extended/combined log format. Additional formats are available; see the javadocs for ApacheLogFormat for details.
timeZone (either String or java.util.TimeZone): The time zone used for timestamping log records. If not specified, it defaults to the time zone of the system running the server.
statusThreshold (int): Specifies the minimum HTTP status code that will trigger an entry in the access log. If not specified, all status codes are logged.
rotateHourly: If set, the access logs will be rotated hourly. For example, if the file name specified was access.log, files will be archived on an hourly basis with names like access-yyyyMMDDhh.log.
rotateDaily: If set, the access logs will be rotated daily. For example, if the file name specified was access.log, files will be archived on a daily basis with names like access-yyyyMMDD.log.
rotatePattern (String): Specifies a java.text.SimpleDateFormat pattern for automatic log file rotation. For example, if the file name specified was access.log and the rotation pattern specified was EEE (day name in week), files will be archived on a daily basis with names like access-Mon.log, access-Tue.log, etc.
synchronous (boolean): Specifies whether access log entries should be written synchronously or not. If not specified, logging occurs asynchronously.
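
A hedged sketch combining several of the options above; the builder method names simply mirror the property names listed here and may differ slightly from the actual API, and ApacheLogFormat.COMBINED is an assumed constant.

    new AccessLogBuilder("/var/log/myapp/access.log")
            .format(ApacheLogFormat.COMBINED)   // assumed constant for the combined format
            .timeZone("UTC")                    // timestamp records in UTC
            .statusThreshold(400)               // only log 4xx/5xx responses
            .rotatePattern("yyyyMMdd")          // archive files as access-yyyyMMdd.log
            .instrument(httpServer.getServerConfiguration());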

Custom Log Formatting and Appending

Log record formatting and appending are handled by the AccessLogFormat
and AccessLogAppender interfaces.

Grizzly's implementation, as hinted at by the builder documentation above,
ships with default implementations of both interfaces.

The default AccessLogFormat implementation is ApacheLogFormat. This implementation
provides the same functionality as mod_log_config (<http://httpd.apache.org/docs/2.2/mod/mod_log_config.html>).
Custom formats may be created using the format Strings specified by mod_log_config.
ApacheLogFormat also defines several default format Strings that may be used;
see the javadocs for details.

As for AccessLogAppenders, the following implementations are included:

  • FileAppender (standard log file writing)
  • QueueingAppender (asynchronous log file writing)
  • RotatingFileAppender (automatic log file rotation)
  • StreamAppender (appends to the provided java.io.OutputStream)

Please feel free to give access logging a shot in either the 2.3 or 3.0 nightly builds and let us know if you find any issues or have ideas for improvement.

A big thanks to Pier for taking the time to contribute back to the community!


Grizzly 2.3.6 has been released!

Per the release announcement:

Hey Folks,

The Grizzly Team is happy to announce the release of 2.3.6!

The artifacts are available on both maven.java.net and central.

2.3.6 issues resolved:  [1]

Thanks to everyone who provided feedback and/or patches for this release!

As usual, if there are additional suggestions on how to improve Grizzly going forward,
or if a problem is discovered, please let us know via one or more of the suggestions
on our contribution info page [3].

Thanks!
- The Grizzly Team

[1] https://java.net/jira/secure/ReleaseNote.jspa?projectId=10005&version=16623
[2] http://java.net/jira/browse/GRIZZLY
[3] https://grizzly.java.net/contribute.html


Grizzly HttpServer + Spring + Jersey + serve static content from a folder and from a JAR

In response to a StackOverflow question, we've created a sample that shows how to create a Grizzly HttpServer to serve Jersey-Spring resources plus static content from a folder and from a JAR file.

The code looks like:
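
The inline snippet did not survive the migration here, so below is a condensed sketch of the approach. It assumes Jersey 2.x with its Grizzly container and the jersey-spring bridge on the class path (so Spring beans are wired from an applicationContext.xml found on the class path); the package and directory names are hypothetical.

    import java.net.URI;

    import org.glassfish.grizzly.http.server.CLStaticHttpHandler;
    import org.glassfish.grizzly.http.server.HttpServer;
    import org.glassfish.grizzly.http.server.StaticHttpHandler;
    import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
    import org.glassfish.jersey.server.ResourceConfig;

    public class App {

        public static void main(String[] args) throws Exception {
            // Jersey resources; Spring wiring is picked up by the jersey-spring bridge.
            final ResourceConfig resourceConfig =
                    new ResourceConfig().packages("com.example.jaxrs");   // hypothetical package

            // Creates (and starts) the Grizzly HttpServer serving JAX-RS under /api.
            final HttpServer server = GrizzlyHttpServerFactory.createHttpServer(
                    URI.create("http://localhost:8080/api"), resourceConfig);

            // Static content from a file-system folder, served under /files.
            server.getServerConfiguration().addHttpHandler(
                    new StaticHttpHandler("/var/www/static"), "/files");

            // Static content packaged inside a JAR on the class path, served under /app.
            server.getServerConfiguration().addHttpHandler(
                    new CLStaticHttpHandler(App.class.getClassLoader(), "webroot/"), "/app");

            System.out.println("Press Enter to stop the server...");
            System.in.read();
            server.shutdownNow();
        }
    }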

The complete sample can be found on GitHub.


Gracefully Terminating a Grizzly Transport

As of Grizzly 2.3.4, we've added the ability to gracefully shut down the HttpServer (i.e., the server will shut down once all of the current requests have been processed).  However, it seemed that this functionality could be generalized to Grizzly transports so that custom servers could have this feature.  So, as of this week, it is possible to test this feature in the current 2.3.5-SNAPSHOT release.

So, how could you leverage this in your custom server?  Well, it's pretty simple.  There are two interfaces you'll need to be aware of.  The first, which anyone wanting to use this feature will have to implement, is the GracefulShutdownListener interface.  This interface provides two callbacks.  The shutdownRequested(ShutdownContext) callback will be invoked when a graceful shutdown has been initiated by calling Transport.shutdown() or Transport.shutdown(long, TimeUnit).  The other callback, shutdownForced(), will be invoked if a graceful shutdown was initiated and the grace period expired, or if Transport.shutdownNow() was called.

The other interface is ShutdownContext.  An instance of this will be passed to each GracefulShutdownListener.
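
A minimal sketch of a listener implementation (assuming both interfaces live in the core org.glassfish.grizzly package; the pending-work method is a hypothetical application hook):

    import org.glassfish.grizzly.GracefulShutdownListener;
    import org.glassfish.grizzly.ShutdownContext;

    public class MyShutdownListener implements GracefulShutdownListener {

        @Override
        public void shutdownRequested(final ShutdownContext shutdownContext) {
            // Finish (or reject) any in-flight work, then signal that the
            // transport may now be terminated.
            drainPendingWork();            // hypothetical application method
            shutdownContext.ready();
        }

        @Override
        public void shutdownForced() {
            // Grace period expired or shutdownNow() was called; clean up immediately.
        }

        private void drainPendingWork() {
            // application-specific logic
        }
    }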

As you can see, once your GracefulShutdownListener has determined it is safe to terminate the transport, it will need to call ready() on the provided ShutdownContext.

In order to be notified that a graceful shutdown is happening, at some point the GracefulShutdownListener will need to be registered with the transport of interest by calling Transport.addShutdownListener().
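
For example, a fragment using the TCP NIO transport (addShutdownListener and the shutdown variants are the calls described above):

    TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();
    transport.addShutdownListener(new MyShutdownListener());
    transport.start();

    // ... later, initiate a graceful shutdown with a 30-second grace period
    transport.shutdown(30, TimeUnit.SECONDS);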

 


Grizzly 2.3.4: Client-side Connection Pool

A new client-side connection pool API has been introduced in Grizzly 2.3.4. This API is completely different from the one provided by Grizzly 1.9.x; it has more features and is hopefully nicer and easier to use :)
There are two main abstractions: SingleEndpointPool and MultiEndpointPool, which represent a connection pool to a single endpoint and to multiple endpoints*, respectively. Each connection pool abstraction has a builder, which helps construct and initialize a connection pool with a specific configuration.
Let's talk about these two connection pool types separately:

  • SingleEndpointPool
    Represents a connection pool to a single endpoint*. For example, if we want to create a connection pool to "grizzly.java.net:443" and set the maximum number of connections in the pool to 8, the code will look like:
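
    (The original code block is missing here; the following fragment is a sketch using the org.glassfish.grizzly.connectionpool module's builder API, with method names matching the property list at the end of this post.)

        final TCPNIOTransport transport = TCPNIOTransportBuilder.newInstance().build();
        transport.start();

        // the transport plays the role of the ConnectorHandler for the pool
        final SingleEndpointPool<SocketAddress> pool = SingleEndpointPool
                .builder(SocketAddress.class)
                .connectorHandler(transport)
                .endpointAddress(new InetSocketAddress("grizzly.java.net", 443))
                .maxPoolSize(8)
                .build();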

    A Connection can be obtained from the pool asynchronously, using either a Future or a CompletionHandler, as sketched below:
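
    (Both variants are sketches; take() and release() follow the connection-pool module's javadocs, and GrizzlyFuture and EmptyCompletionHandler are Grizzly core classes.)

        // Future-based variant
        final GrizzlyFuture<Connection> connectFuture = pool.take();
        final Connection connection = connectFuture.get(10, TimeUnit.SECONDS);
        try {
            // ... use the connection ...
        } finally {
            pool.release(connection);   // always return the connection to the pool
        }

        // CompletionHandler-based variant
        pool.take(new EmptyCompletionHandler<Connection>() {
            @Override
            public void completed(final Connection connection) {
                // ... use the connection, then return it to the pool ...
                pool.release(connection);
            }
        });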

    Please note: if you try to retrieve a Connection using a Future, but then change your mind and no longer want to wait for the Connection, you have to use code like the following:
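
    (A sketch of that pattern, continuing the Future-based example above:)

        if (!connectFuture.cancel(false)) {
            // the Connection had already been assigned to the Future;
            // return it to the pool
            pool.release(connectFuture.get());
        }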

    to properly cancel the asynchronous operation and return the Connection (if there is one) back to the pool. In general, it is very important to return a Connection to the pool once you no longer need it, to avoid connection pool starvation, which would make the pool useless.

    Another interesting feature of the Grizzly connection pool is the ability to attach/detach connections to/from a pool. For example, if you retrieved a connection but don't plan to use it as part of the pool, or don't plan to return it to the pool, you can detach this connection:
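
    (A one-line sketch, assuming a detach(Connection) method on the pool:)

        pool.detach(connection);   // the connection no longer belongs to the pool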

    and the pool will be able to establish a new connection (lazily, if needed) to replace it. On the other hand, if you have a connection created outside the pool (or detached from the pool) and you want to attach it to the pool, you can call:
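
    (Again a sketch, assuming an attach(Connection) counterpart:)

        pool.attach(externalConnection);   // hand an externally created connection over to the pool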

    When the pool is no longer needed (usually when the application exits), it is recommended to close it:
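
    (Continuing the example above:)

        pool.close();   // closes idle connections; busy ones are closed once released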

    During the close() operation, all idle connections will be closed. Busy connections, which have not yet been returned to the pool, will be kept open and will be closed once you try to return them to the pool.
  • MultiEndpointPool
    Represents a connection pool to multiple endpoints*. We can think of MultiEndpointPool as an Endpoint-to-SingleEndpointPool map, where each endpoint is represented by an EndpointKey. The MultiEndpointPool supports pretty much the same set of operations as SingleEndpointPool, but some of these operations (especially those related to Connection allocation) require an EndpointKey parameter. Here is an example of a MultiEndpointPool used to allocate connections to two different servers:
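
    (The original example is missing; the fragment below is a sketch, assuming the EndpointKey constructor shown and builder methods matching the property list below. The host names are hypothetical.)

        final MultiEndpointPool<SocketAddress> multiPool = MultiEndpointPool
                .builder(SocketAddress.class)
                .connectorHandler(transport)          // the default ConnectorHandler
                .maxConnectionsPerEndpoint(4)
                .maxConnectionsTotal(16)
                .build();

        // each endpoint is identified by an EndpointKey
        final EndpointKey<SocketAddress> server1 = new EndpointKey<>(
                "server1", new InetSocketAddress("server1.example.com", 80));
        final EndpointKey<SocketAddress> server2 = new EndpointKey<>(
                "server2", new InetSocketAddress("server2.example.com", 80));

        final Connection connection1 = multiPool.take(server1).get();
        final Connection connection2 = multiPool.take(server2).get();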

SingleEndpointPool and MultiEndpointPool have similar configuration properties that can be tuned: max pool size, connect timeout, keep-alive timeout, reconnect delay, etc. Additionally, for MultiEndpointPool it is possible to tune the max-connections-per-endpoint property, which lets us limit the maximum number of connections to a single endpoint.
You can find the complete connection pool example here.

The Grizzly connection pool implementation is available as a Maven module:
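
(The coordinates below are a best guess: the groupId is the standard Grizzly one, and the artifactId/version are assumed from the module and release names; double-check against the repository.)

    <dependency>
        <groupId>org.glassfish.grizzly</groupId>
        <artifactId>connection-pool</artifactId>
        <version>2.3.4</version>
    </dependency>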

or can be downloaded here.

Here is the full list of configuration properties that can be used to customize the pools:

  • SingleEndpointPool
    connectorHandler: The ConnectorHandler used to establish Connections to the endpoint. (mandatory)
    endpointAddress: The remote endpoint address to open Connections to. (mandatory)
    localEndpointAddress: The local endpoint address to bind Connections to. (optional)
    corePoolSize: The number of Connections, kept in the pool, that are immune to the keep-alive mechanism. (default: 0)
    maxPoolSize: The maximum number of Connections kept by this pool. (default: 4)
    connectTimeout: The connect timeout, after which, if a connection has not been established, it is considered failed. (a value < 0 disables the timeout; disabled by default)
    reconnectDelay: The delay to wait before the pool repeats an attempt to connect to the endpoint after a previous connect has failed. (a value < 0 disables reconnects; disabled by default)
    maxReconnectAttempts: The maximum number of reconnect attempts that will be made before a failure is reported. (default: 5)
    keepAliveTimeout: The maximum amount of time an idle Connection will be kept in the pool. Idle Connections are closed only while the pool size is greater than corePoolSize. (a value < 0 disables the keep-alive mechanism; default: 30 seconds)
    keepAliveCheckInterval: The interval that specifies how often the pool checks for idle Connections. (default: 5 seconds)
  • MultiEndpointPool
    defaultConnectorHandler: The default ConnectorHandler used to establish Connections to an endpoint. (mandatory; a ConnectorHandler can still be set for each endpoint separately)
    maxConnectionsPerEndpoint: The maximum number of Connections each SingleEndpointPool sub-pool is allowed to have. (default: 2)
    maxConnectionsTotal: The total maximum number of Connections to be kept by the pool. (default: 16)
    connectTimeout: The connect timeout, after which, if a connection has not been established, it is considered failed. (a value < 0 disables the timeout; disabled by default)
    reconnectDelay: The delay to wait before the pool repeats an attempt to connect to the endpoint after a previous connect has failed. (a value < 0 disables reconnects; disabled by default)
    maxReconnectAttempts: The maximum number of reconnect attempts that will be made before a failure is reported. (default: 5)
    keepAliveTimeout: The maximum amount of time an idle Connection will be kept in the pool. (a value < 0 disables the keep-alive mechanism; default: 30 seconds)
    keepAliveCheckInterval: The interval that specifies how often the pool checks for idle Connections. (default: 5 seconds)


* The term endpoint is abstract: it can be either a SocketAddress, which represents host:port for the TCP and UDP transports, or any other endpoint type understood by a custom Grizzly Transport.


Grizzly 2.3.3: Serving Static HTTP Resources from Jar Files

Besides the SPDY updates and SPDY server push support, Grizzly 2.3.3 comes with another interesting feature: the ability to serve static HTTP resources from jar files or, to be more precise, to serve static HTTP resources using a Java ClassLoader.

As you may remember, it has always been easy to register Grizzly's StaticHttpHandler to serve HTTP resources from a specific file system path, or even several paths.

For example, in order to serve static HTTP resources from the "/Users/myhome/mypages" folder, we can register a StaticHttpHandler like this:
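
(A short sketch; HttpServer and StaticHttpHandler come from org.glassfish.grizzly.http.server.)

    final HttpServer server = HttpServer.createSimpleServer();
    server.getServerConfiguration().addHttpHandler(
            new StaticHttpHandler("/Users/myhome/mypages"), "/");
    server.start();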

But we got many questions from users asking: "What if I have static resources in a jar file?" True, an entire standalone server plus its static resources might be bundled into a single jar file, so how can these resources be served?

In Grizzly 2.3.3 we implemented a new CLStaticHttpHandler, which is able to serve static HTTP resources using a Java ClassLoader.

For example, if our standalone server and static HTTP resources are bundled in a single jar file, the CLStaticHttpHandler might be registered like this:
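
(A sketch, assuming the server's own class loader is the one containing the bundled resources; MyServer is a hypothetical class name.)

    server.getServerConfiguration().addHttpHandler(
            new CLStaticHttpHandler(MyServer.class.getClassLoader()), "/");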

Or, if we have our static HTTP resources in a separate jar file that is not on the application class path, the sample above should be changed like this:
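
(A sketch using a URLClassLoader over a hypothetical jar path:)

    final ClassLoader jarResourceLoader = new URLClassLoader(
            new URL[] { new File("/path/to/static-resources.jar").toURI().toURL() },
            null);   // no parent: resolve resources from this jar only
    server.getServerConfiguration().addHttpHandler(
            new CLStaticHttpHandler(jarResourceLoader), "/");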

If you have any questions, don't hesitate to ask on the Grizzly mailing lists.


Grizzly 2.3.3: SPDY Server Push

Starting with 2.3.3, Grizzly offers support for the SPDY server push mechanism.

Quote:
SPDY enables a server to send multiple replies to a client for a single request. The rationale for this feature is that sometimes a server knows that it will need to send multiple resources in response to a single request. Without server push features, the client must first download the primary resource, then discover the secondary resource(s), and request them. Pushing of resources avoids the round-trip delay…

Of course, one might ask: "what if the client already has the pushed resource cached locally?" True, we probably should not push all the associated resources blindly; for example, we probably should not push a huge file when we are not sure whether or not the client already has it. On the other hand, it might be a good idea to push small icon, JavaScript, and CSS resources even if the client already has them cached, because the additional round-trips (especially on mobile networks) might be more time consuming than receiving the extra data.

Grizzly provides a *PushResource* builder to construct the descriptor for the
resource we want to push. For example:
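
(The original snippet is missing; the sketch below uses illustrative method names (addPushResource(), Source.factory(), createFileSource()) that may not match the 2.3.3 API exactly, and a hypothetical resource URL.)

    // push an image file on the current SPDY stream
    spdyStream.addPushResource(
            "https://localhost:8443/images/logo.png",
            PushResource.builder()
                    .contentType("image/png")
                    .source(Source.factory(spdyStream)
                            .createFileSource(new File("logo.png")))
                    .build());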

In the sample above we instruct the SPDY stream to initiate a server push and send an image file represented by a File object.
Similarly, it is possible to build a *PushResource* based on a Grizzly Buffer, byte[], or String.

It is also possible to customize the *PushResource* status code, reason phrase, and
headers like this:
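
(Another illustrative sketch; statusCode(), reasonPhrase(), header(), and createStringSource() simply follow the customizations named above and are assumptions, not the authoritative API.)

    PushResource.builder()
            .statusCode(200)
            .reasonPhrase("OK")
            .header("Cache-Control", "max-age=3600")
            .contentType("text/css")
            .source(Source.factory(spdyStream)
                    .createStringSource("body { color: #333; }"))
            .build();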

And finally, to give you a better idea of what the complete code looks like, here is an
*HttpHandler* example:
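
(A rough end-to-end sketch; the way the SpdyStream is looked up from the request, and the push builder calls themselves, are assumptions here rather than the authoritative API.)

    import java.io.File;

    import org.glassfish.grizzly.http.server.HttpHandler;
    import org.glassfish.grizzly.http.server.Request;
    import org.glassfish.grizzly.http.server.Response;
    import org.glassfish.grizzly.spdy.PushResource;
    import org.glassfish.grizzly.spdy.Source;
    import org.glassfish.grizzly.spdy.SpdyStream;

    public class PushingHandler extends HttpHandler {

        @Override
        public void service(final Request request, final Response response) throws Exception {
            // Obtain the SPDY stream associated with this request, if any
            // (attribute name assumed for illustration).
            final SpdyStream spdyStream =
                    (SpdyStream) request.getAttribute(SpdyStream.SPDY_STREAM_ATTRIBUTE);

            if (spdyStream != null) {
                // proactively push the stylesheet referenced by the page below
                spdyStream.addPushResource(
                        "https://localhost:8443/style.css",
                        PushResource.builder()
                                .contentType("text/css")
                                .source(Source.factory(spdyStream)
                                        .createFileSource(new File("style.css")))
                                .build());
            }

            response.setContentType("text/html");
            response.getWriter().write(
                    "<html><head><link rel=\"stylesheet\" href=\"style.css\"/></head>"
                    + "<body>Hello, SPDY server push!</body></html>");
        }
    }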

The complete SPDY example including Server Push can be found here.


Grizzly 2.3+SPDY/3

We announced earlier in the week that we released Grizzly 2.3!  One of the major features of this release is the inclusion of SPDY/3 support.

SPDY is, per the following quote from Wikipedia:

The goal of SPDY is to reduce web page load time. This is achieved by prioritizing and multiplexing the transfer of web page subresources so that only one connection per client is required.

Another good quote:

SPDY does not replace HTTP; it modifies the way HTTP requests and responses are sent over the wire. This means that all existing server-side applications can be used without modification if a SPDY-compatible translation layer is put in place. When sent over SPDY, HTTP requests are processed, tokenized, simplified and compressed.

You can read all of the nitty-gritty details of SPDY here.

SPDY is also typically used only with transport layer security (TLS), via an extension called Next Protocol Negotiation (referred to as NPN).  In a nutshell, this allows application-level protocols to be negotiated during the SSL handshake for easy tunnelling.  See this RFC draft for more details on NPN.

In order for the Grizzly SPDY/3 implementation to function, you’ll need our SPDY and NPN runtimes.

The one gotcha right now with NPN and Java is that the SSL runtime does not natively support it.  So initially we decided on using Jetty's NPN implementation.  This worked fine with our initial implementation; however, due to potential red-tape issues, we decided to roll our own.  The end result is fairly similar to Jetty's due to how we bridged to their implementation.  As such, we have similar restrictions.  As of this time, we only support SPDY/3 when using OpenJDK 7u14.  Versions later than this may or may not work, as the SSL internals appear to change frequently and we're replacing key SSL classes via the JVM bootclasspath mechanism.  If you do find a bug or an incompatibility with later versions of OpenJDK, please let us know and log an issue.

The maven coordinates for everything you need are:
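
(The coordinates below are reconstructed from the jar names mentioned later in this post; the groupIds are assumed to be the standard Grizzly ones, so double-check them against Maven Central.)

    <dependency>
        <groupId>org.glassfish.grizzly</groupId>
        <artifactId>grizzly-spdy</artifactId>
        <version>2.3</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.grizzly</groupId>
        <artifactId>grizzly-npn-bootstrap</artifactId>
        <version>1.0</version>
    </dependency>
    <dependency>
        <groupId>org.glassfish.grizzly</groupId>
        <artifactId>grizzly-npn-osgi</artifactId>
        <version>1.0</version>
    </dependency>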

So, how does one use SPDY with Grizzly 2.3 standalone?  It’s pretty simple to configure.
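
(A sketch of the standalone setup; it assumes the SpdyAddOn class from the grizzly-spdy module and an SSLEngineConfigurator you have prepared elsewhere.)

    final HttpServer server = HttpServer.createSimpleServer(null, 8443);
    final NetworkListener listener = server.getListener("grizzly");

    listener.setSecure(true);
    listener.setSSLEngineConfig(mySslEngineConfigurator);   // your TLS setup (hypothetical variable)
    listener.registerAddOn(new SpdyAddOn());                // enables SPDY/3 negotiation via NPN

    server.start();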

Before you actually start the server, the grizzly-npn-bootstrap.jar needs to be added to the JVM’s bootclasspath.  For example, if the JAR was in my home directory, then the bootclasspath would look something like:  -Xbootclasspath/p:/Users/catsnac/grizzly-npn-bootstrap-1.0.jar.

How do you know it's working?  Well, there are a couple of possibilities.  Chrome, for example, has an extension that indicates whether or not the site you're visiting is using SPDY.  A more primitive option, which I'll be showing here, is using the -Djavax.net.debug=ssl:handshake JVM option on the server side.

Simply start the standalone Grizzly/SPDY application with the aforementioned ssl debug option and connect using an SPDY-enabled browser.  Assuming the Grizzly NPN implementation is properly set in the bootclasspath, you should see:

It may be difficult to see, but if you search for “<===” in the content above, you can see where I’ve pointed out the specific NPN messages.

So, what about GlassFish?  We do have integration code for GlassFish 4 (we recommend using the latest promoted builds).  There are a few extra steps to get it working in that environment.

Like the standalone example, you'll need to add the same bootclasspath definition to the domain.xml.  You'll also need to copy the grizzly-npn-osgi-1.0.jar and grizzly-spdy-2.3.jar to $GF_HOME/modules.  The grizzly-spdy module includes a command to enable SPDY.  So once the jars are in place and the bootclasspath is updated, you can run the enable-spdy command against the secure HTTP listener.  Assuming you're doing this against the default configuration, this would be http-listener-2:  asadmin enable-spdy http-listener-2.

Once the above is done, you can ensure it’s working in the same way as was previously described.

What about tuning?  At this point in time we offer the following configuration options.  These can be set either via the SpdyAddOn (Grizzly standalone) or using asadmin set.

maxConcurrentStreams: Configures how many streams may be multiplexed over a single connection. The default is 100.
initialWindowSizeInBytes: Configures how much memory each stream will consume on the server side. The default is 64KB.
maxFrameLength: Configures the upper bound on allowable frame sizes; frames above this bound will be rejected.  The default is 2^24 bytes in length, the maximum allowed by the spec.
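
(For the standalone case, a hedged sketch reusing the listener from the earlier snippet; the setter names simply mirror the property names above and may differ from the actual SpdyAddOn API.)

    final SpdyAddOn spdyAddOn = new SpdyAddOn();
    spdyAddOn.setMaxConcurrentStreams(100);
    spdyAddOn.setInitialWindowSizeInBytes(64 * 1024);
    spdyAddOn.setMaxFrameLength(1 << 24);
    listener.registerAddOn(spdyAddOn);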

Lastly, there are some limitations that should be spelled out.  Please see our documentation for details.  We'd like to encourage you to give it a shot and log any problems or features you'd like to see resolved/implemented.

Enjoy!

 
