Troubleshooting AppFabric Scaling Issues (Intermittent ErrorCode<ERRCA0017>:SubStatus<ES0006> Errors)

HOCA · Sep 5, 2012 · Viewed 11.9k times

We've implemented AppFabric Windows Server Cache for our web application. Initially, we were able to use the cache without any issues. We then increased traffic roughly 100-fold and began experiencing intermittent exceptions. The exceptions occur roughly once every two days, for about a minute at a time.

Our configuration:

  • 9 web servers inserting/retrieving objects in cache:
    • Mostly short-lived, ~500-byte operational objects
    • Using 1 named region
    • Objects stored with tags
    • Retrieved in bulk for a given tag (a sketch of this pattern follows the list)
  • Cache Cluster:
    • 1 host (lead) AppFabric 1.1 (version reported by get-cachehost is 3)
    • SQL configuration provider
    • 96GB of RAM on host, the default 50% (48GB) allocated to AppFabric
    • Cache Host Config
    • Cache Client Config
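For context, the access pattern on the web servers is essentially the following (a minimal sketch; the region name, cache, and types are placeholders rather than our real identifiers):

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.ApplicationServer.Caching;

    public static class OperationalCache
    {
        // DataCacheFactory reads the dataCacheClient section of the client config.
        private static readonly DataCacheFactory Factory = new DataCacheFactory();
        private static readonly DataCache Cache = Factory.GetDefaultCache();
        private const string Region = "OperationalRegion"; // placeholder name

        // Store a small (~500 byte) operational object under one or more tags.
        public static void Store(string key, object value, IEnumerable<string> tags)
        {
            Cache.CreateRegion(Region); // no-op (returns false) if the region already exists
            Cache.Put(key, value, tags.Select(t => new DataCacheTag(t)), Region);
        }

        // Bulk-retrieve every object carrying a given tag.
        public static List<object> GetByTag(string tag)
        {
            return Cache.GetObjectsByTag(new DataCacheTag(tag), Region)
                        .Select(kvp => kvp.Value)
                        .ToList();
        }
    }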

The errors, in the order that they occur (the exceptions occur on each of the nine web servers during the 1-minute period; a note on where they surface in our code follows the list):

  • System.Net.Sockets.SocketException : An existing connection was forcibly closed by the remote host Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0016>:SubStatus<ES0001>:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown. ---> System.ServiceModel.CommunicationException: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:15:00'. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host --- End of inner exception stack trace --- at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.ServiceModel.Channels.FramingDuplexSessionChannel.EndReceive(IAsyncResult result) at Microsoft.ApplicationServer.Caching.WcfClientChannel.CompleteProcessing(IAsyncResult result) --- End of inner exception stack trace --- at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody) at Microsoft.ApplicationServer.Caching.DataCache.GetNextBatch(String region, DataCacheTag[] tags, GetByTagsOperation op, IMonitoringListener listener, Byte[][]& state, Boolean& more) at Microsoft.ApplicationServer.Caching.CacheEnumerator.MoveNext() at System.Linq.Enumerable.WhereSelectEnumerableIterator'2.MoveNext() at System.Linq.Enumerable.<ExceptIterator>d__99'1.MoveNext() at System.Collections.Generic.List'1..ctor(IEnumerable'1 collection) at System.Linq.Enumerable.ToList[TSource](IEnumerable'1 source)

  • Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0017>:SubStatus<ES0006>:There is a temporary failure. Please retry later. (One or more specified cache servers are unavailable, which could be caused by busy network or servers. For on-premises cache clusters, also verify the following conditions. Ensure that security permission has been granted for this client account, and check that the AppFabric Caching Service is allowed through the firewall on all cache hosts. Also the MaxBufferSize on the server must be greater than or equal to the serialized object size sent from the client.) at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody) at Microsoft.ApplicationServer.Caching.DataCache.GetNextBatch(String region, DataCacheTag[] tags, GetByTagsOperation op, IMonitoringListener listener, Byte[][]& state, Boolean& more) at Microsoft.ApplicationServer.Caching.CacheEnumerator.MoveNext() at System.Linq.Enumerable.WhereSelectEnumerableIterator'2.MoveNext() at System.Linq.Enumerable.<ExceptIterator>d__99'1.MoveNext() at System.Collections.Generic.List'1..ctor(IEnumerable'1 collection) at System.Linq.Enumerable.ToList[TSource](IEnumerable'1 source)

  • Microsoft.ApplicationServer.Caching.DataCacheException: ErrorCode<ERRCA0018>:SubStatus<ES0001>:The request timed out. at Microsoft.ApplicationServer.Caching.DataCache.ThrowException(ResponseBody respBody) at Microsoft.ApplicationServer.Caching.DataCache.GetNextBatch(String region, DataCacheTag[] tags, GetByTagsOperation op, IMonitoringListener listener, Byte[][]& state, Boolean& more) at Microsoft.ApplicationServer.Caching.CacheEnumerator.MoveNext() at System.Linq.Enumerable.WhereSelectEnumerableIterator'2.MoveNext() at System.Linq.Enumerable.<ExceptIterator>d__99'1.MoveNext() at System.Collections.Generic.List'1..ctor(IEnumerable'1 collection) at System.Linq.Enumerable.ToList[TSource](IEnumerable'1 source)
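Note that all three stack traces end inside LINQ iterators over Microsoft.ApplicationServer.Caching.CacheEnumerator, i.e. the tag query is enumerated lazily and the exception only surfaces when the results are materialized (our ToList call). Any guard we add therefore has to wrap the enumeration, not just the GetObjectsByTag call itself. A minimal sketch (the helper name is ours and purely illustrative):

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.ApplicationServer.Caching;

    public static class CacheReads
    {
        // Materialize the lazy tag query inside the try block so that transient
        // DataCacheExceptions thrown from CacheEnumerator.MoveNext are caught
        // here rather than further up the LINQ pipeline.
        public static List<object> GetByTagGuarded(DataCache cache, string region, string tag)
        {
            try
            {
                return cache.GetObjectsByTag(new DataCacheTag(tag), region)
                            .Select(kvp => kvp.Value)
                            .ToList(); // enumeration (GetNextBatch) happens here
            }
            catch (DataCacheException)
            {
                return new List<object>(); // or retry -- see the answer below
            }
        }
    }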

We have also created a tracelog session on the caching server to capture more information to diagnose the issue - any suggestions on how to analyze this would be appreciated (I can make this available if need be).

We also monitored various AppFabric, CLR, and network performance counters; below is a screenshot of the event as it occurs:

AppFabric Perfmon Capture

Thanks in advance for any thoughts or advice you can share on resolving this issue.

UPDATE 1

The following exceptions occur continuously on the AppFabric caching server during the intermittent errors (extracted from the trace logs):

  • System.ServiceModel.CommunicationException: The socket connection was aborted because an asynchronous send to the socket did not complete within the allotted timeout of 00:00:00.0082078. The time allotted to this operation may have been a portion of a longer timeout. ---> System.ObjectDisposedException: The socket connection has been disposed. Object name: 'System.ServiceModel.Channels.SocketConnection'. --- End of inner exception stack trace --- at System.ServiceModel.Channels.SocketConnection.ThrowIfNotOpen() at System.ServiceModel.Channels.SocketConnection.BeginRead(Int32 offset, Int32 size, TimeSpan timeout, WaitCallback callback, Object state) at System.ServiceModel.Channels.SessionConnectionReader.BeginReceive(TimeSpan timeout, WaitCallback callback, Object state) at System.ServiceModel.Channels.SynchronizedMessageSource.ReceiveAsyncResult.PerformOperation(TimeSpan timeout) at System.ServiceModel.Channels.SynchronizedMessageSource.SynchronizedAsyncResult'1..ctor(SynchronizedMessageSource syncSource, TimeSpan timeout, AsyncCallback callback, Object state) at System.ServiceModel.Channels.FramingDuplexSessionChannel.BeginReceive(TimeSpan timeout, AsyncCallback callback, Object state) at Microsoft.ApplicationServer.Caching.WcfServerChannel.CompleteProcessing(IAsyncResult result)

  • System.ServiceModel.CommunicationObjectAbortedException: The communication object, System.ServiceModel.Channels.ServerSessionPreambleConnectionReader+ServerFramingDuplexSessionChannel, cannot be used for communication because it has been Aborted. at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result) at System.ServiceModel.Channels.FramingDuplexSessionChannel.OnEndSend(IAsyncResult result) at Microsoft.ApplicationServer.Caching.ReplyContext.EndSend(IAsyncResult result)

  • System.ServiceModel.CommunicationObjectFaultedException: The communication object, System.ServiceModel.Channels.ServerSessionPreambleConnectionReader+ServerFramingDuplexSessionChannel, cannot be used for communication because it is in the Faulted state. at System.ServiceModel.Channels.CommunicationObject.ThrowIfDisposedOrNotOpen() at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout) at Microsoft.ApplicationServer.Caching.ReplyContext.Reply(Message message, TimeSpan timeout)

  • System.TimeoutException: Sending to via http://www.w3.org/2005/08/addressing/anonymous timed out after 00:00:15. The time allotted to this operation may have been a portion of a longer timeout. ---> System.TimeoutException: Cannot claim lock within the allotted timeout of 00:00:15. The time allotted to this operation may have been a portion of a longer timeout. --- End of inner exception stack trace --- at System.ServiceModel.Channels.FramingDuplexSessionChannel.OnSend(Message message, TimeSpan timeout) at System.ServiceModel.Channels.OutputChannel.Send(Message message, TimeSpan timeout) at Microsoft.ApplicationServer.Caching.ReplyContext.Reply(Message message, TimeSpan timeout)

UPDATE 2

After another day of troubleshooting, we took the following actions, which produced some improvement:

  1. Based on this and this we increased maxConnectionsToServer to 3. As a result we gained 50% more client requests/sec as recorded by the AppFabric Caching:Cache perf counter, but the intermittent errors did not stop occurring.

  2. We increased the maxBufferSize and maxBufferPoolSize to 2147483647 (Int32.MaxValue) in the cache server configuration. So far we are able to handle 300x traffic volume without errors. We will continue to increase traffic volume and monitor; more updates to follow. (The client-side equivalents of these settings are sketched below.)
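For reference, the client-side counterparts of these settings can also be applied programmatically instead of in the dataCacheClient config section. A minimal sketch with the values we used (the host name is a placeholder; the server-side maxBufferSize/maxBufferPoolSize are separate settings in the cluster configuration):

    using System;
    using Microsoft.ApplicationServer.Caching;

    public static class CacheClientFactory
    {
        public static DataCacheFactory Create()
        {
            var config = new DataCacheFactoryConfiguration
            {
                // One endpoint per cache host (host name and cache port).
                Servers = new[] { new DataCacheServerEndpoint("cachehost01", 22233) },

                // Update 2, item 1: more parallel channels per client process.
                MaxConnectionsToServer = 3,

                // Client-side transport limits; the server-side values are
                // configured on the cluster, not here.
                TransportProperties = new DataCacheTransportProperties
                {
                    MaxBufferSize = int.MaxValue,
                    MaxBufferPoolSize = int.MaxValue
                }
            };

            return new DataCacheFactory(config);
        }
    }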

UPDATE 3

We added two more hosts with 16GB each to the cluster and enabled HighAvailability mode (via Secondaries=1). Currently the original host remains in the cluster with 96GB of RAM; all hosts have cacheSize = 12GB. On the cache clients we increased MaxConnectionsToServer to 12 (1 per core). Below are our findings:

  • Occasionally we get (once or twice every 10 minutes):
    • ErrorCode<ERRCA0017>:SubStatus<ES0005>:There is a temporary failure. Please retry later. (There was a contention on the store.)
    • ErrorCode<ERRCA0017>:SubStatus<ES0004>:There is a temporary failure. Please retry later. (Replication queue was full. This may happen during reconfiguration of cluster hosts.)
  • The original 96GB cache host still experiences the 1-minute outages described above. The new cache hosts haven't experienced the outage.

We plan to remove 80GB of RAM from the original cache host. More updates to follow.

UPDATE 4

The problem seems to have been solved by reducing the amount of RAM in the cache hosts to 16GB. We no longer see the intermittent errors with traffic increased to 400x. Case closed, it seems. Now on to the next issue: High Availability.

Answer

Josh · Sep 11, 2012
  1. Have you installed http://support.microsoft.com/kb/983182 and http://support.microsoft.com/kb/2527387 ?
  2. In your code, do you catch the exception and check for the RetryLater error code?

        catch (DataCacheException ex2)
        {
            if (ex2.ErrorCode == DataCacheErrorCode.RetryLater)
            {
                // transient failure reported by the cluster: back off briefly and retry
            }
        }
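    If not, a rough sketch of what that retry could look like around your bulk tag reads (the retry count, delay, and method name are illustrative only, not part of the AppFabric API):

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;
        using Microsoft.ApplicationServer.Caching;

        public static class CacheRetry
        {
            // Retry a bulk tag read a few times when the cluster reports a
            // transient failure (DataCacheErrorCode.RetryLater).
            public static List<object> GetByTagWithRetry(DataCache cache, string region,
                                                         string tag, int maxAttempts = 3)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return cache.GetObjectsByTag(new DataCacheTag(tag), region)
                                    .Select(kvp => kvp.Value)
                                    .ToList();
                    }
                    catch (DataCacheException ex)
                    {
                        if (ex.ErrorCode != DataCacheErrorCode.RetryLater || attempt == maxAttempts)
                        {
                            throw; // not transient, or out of attempts: let it bubble up
                        }
                        Thread.Sleep(200 * attempt); // simple linear backoff; tune to taste
                    }
                }
            }
        }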

    The use of a named region forces the cluster to place all of the values in that region on a single cache host instead of spreading the keys out across all of your cache servers. ("To provide this added search functionality, objects in a region are limited to a single cache host." http://msdn.microsoft.com/en-us/library/ee790985(v=azure.10).aspx )

What I would recommend is sharding the data in your named region across two or more regions spread over additional servers in the cluster. That way, when a host is running a GC and trying to find more RAM to store objects and tags, the resulting exceptions are confined to a smaller server.
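A rough sketch of what that sharding could look like on the client, assuming a fixed number of regions and a deterministic mapping from key to region (the shard count, region names, and hash are illustrative):

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.ApplicationServer.Caching;

    public static class ShardedRegions
    {
        private const int ShardCount = 3; // roughly one region per cache host

        // string.GetHashCode is not guaranteed to be stable across processes,
        // which matters when several web servers must agree on the shard.
        private static int StableHash(string key)
        {
            unchecked
            {
                int h = 23;
                foreach (char c in key) h = h * 31 + c;
                return h & 0x7FFFFFFF;
            }
        }

        private static string RegionFor(string key)
        {
            return "OperationalRegion_" + (StableHash(key) % ShardCount);
        }

        public static void Put(DataCache cache, string key, object value, IEnumerable<DataCacheTag> tags)
        {
            string region = RegionFor(key);
            cache.CreateRegion(region); // no-op if the region already exists
            cache.Put(key, value, tags, region);
        }

        // A tag query now fans out across all shards and merges the results.
        // Assumes the shard regions have already been created (e.g. at startup).
        public static List<object> GetByTag(DataCache cache, DataCacheTag tag)
        {
            return Enumerable.Range(0, ShardCount)
                             .SelectMany(i => cache.GetObjectsByTag(tag, "OperationalRegion_" + i))
                             .Select(kvp => kvp.Value)
                             .ToList();
        }
    }

The trade-off is that each region still lives on exactly one host and the client cannot choose which host that is, but with several regions the cluster can spread them out, so a GC pause or memory squeeze on one host only affects part of the tag space.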