I have a server hosting a website of mine that gets almost zero traffic.
A few people (< 20) visit the site every day, and a few RSS readers are subscribed to some feeds we put out.
Almost every night, an RSS reader hits us in the middle of the night and gets an exception saying the website can't connect to SQL Server because of a connection timeout. The details are extremely weird, so I'm looking for some help on what the issue could be, since I don't know where to start looking anymore.
We are using ASP.NET MVC, Entity Framework, and SQL Server 2008 on Windows Server 2008. The machine is a dedicated box we got from a not-exactly-top-tier provider, so things might be configured non-optimally, or who knows what else.
The box is also pretty small, with only 1 GB of RAM, but it should handle the kind of load we have for now...
I'm copying the full call stack below, but first, some of the things we know:
UPDATE: We attached a profiler as suggested, and it took quite a while before we had a new exception. This is the new stuff we know:
So, in short, I'm completely clueless about this. I can't find a reason for those hundreds of queries that get run in quick succession, but I believe they must have something to do with the problem.
I also don't know how to diagnose the connection problems...
Or how the Profiler trace may be missing some queries that according to IIS went through fine...
Any ideas?
This is the exception information:
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
System.Data.EntityException: The underlying provider failed on Open. ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
at System.Data.SqlClient.SqlConnection.Open()
at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure)
--- End of inner exception stack trace ---
at System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean openCondition, DbConnection storeConnectionToOpen, DbConnection originalConnection, String exceptionCode, String attemptedOperation, Boolean& closeStoreConnectionOnFailure)
at System.Data.EntityClient.EntityConnection.Open()
at System.Data.Objects.ObjectContext.EnsureConnection()
at System.Data.Objects.ObjectQuery`1.GetResults(Nullable`1 forMergeOption)
at System.Data.Objects.ObjectQuery`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator()
at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source)
at System.Data.Objects.ELinq.ObjectQueryProvider.<GetElementFunction>b__1[TResult](IEnumerable`1 sequence)
at System.Data.Objects.ELinq.ObjectQueryProvider.ExecuteSingle[TResult](IEnumerable`1 query, Expression queryRoot)
at System.Data.Objects.ELinq.ObjectQueryProvider.System.Linq.IQueryProvider.Execute[S](Expression expression)
at System.Linq.Queryable.FirstOrDefault[TSource](IQueryable`1 source)
at MyProject.Controllers.SitesController.Feed(Int32 id) in C:\...\Controller.cs:line 38
at lambda_method(ExecutionScope , ControllerBase , Object[] )
at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary`2 parameters)
at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
at System.Web.Mvc.ControllerActionInvoker.<>c__DisplayClassa.<InvokeActionMethodWithFilters>b__7()
at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)
at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodWithFilters(ControllerContext controllerContext, IList`1 filters, ActionDescriptor actionDescriptor, IDictionary`2 parameters)
at System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName)
at System.Web.Mvc.Controller.ExecuteCore()
at System.Web.Mvc.MvcHandler.ProcessRequest(HttpContextBase httpContext)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
Any ideas will be enormously appreciated.
This is very likely a memory problem, perhaps aggravated or triggered by other things, but still inherently a memory problem. There are two other (less likely) possibilities that you should check and eliminate first, because it is easy to do so:
You may have "Auto Close" enabled: Auto Close can produce exactly this behavior; however, it is rare for it to be turned on. To check it, in SSMS right-click your application database, select "Properties", and then select the "Options" pane. Look at the "Auto Close" entry and make sure it is set to False. Check tempdb also. (A query that checks every database at once is sketched after this list.)
SQL Agent jobs may be causing it: Check the Agent's history log (a query against msdb is sketched below) to see whether any jobs were consistently running during the events. Remember to check maintenance jobs too; things like rebuilding indexes are frequently cited as performance problems while they are running. These are unlikely candidates now, only because they would not normally be affected by the Profiler.
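If you would rather check both of these from a query window than click through SSMS, something along these lines should work against SQL Server 2008 (the commented ALTER uses MyAppDb as a placeholder database name):

    -- 1) Auto Close: is_auto_close_on = 1 means the option is enabled
    SELECT name, is_auto_close_on
    FROM sys.databases
    ORDER BY name;

    -- If you find it enabled, turn it off (MyAppDb is a placeholder name):
    -- ALTER DATABASE [MyAppDb] SET AUTO_CLOSE OFF;

    -- 2) Agent job history: look for jobs that ran around the incident times
    SELECT j.name,
           h.step_id,
           h.step_name,
           h.run_status,    -- 0 = failed, 1 = succeeded, 2 = retry, 3 = canceled
           h.run_date,      -- int, yyyymmdd
           h.run_time,      -- int, hhmmss
           h.run_duration   -- int, hhmmss
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    ORDER BY h.run_date DESC, h.run_time DESC;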
If those do not show anything, then you should check for memory problems. I suspect memory as the cause in your case because:
You have 1 GB of memory: Although this is technically above the minimum for SQL Server, it is way below the recommended amount, and way below what in my experience is acceptable for production, even for a lightly loaded server.
You are running IIS and SQL Server on the same box: This is not recommended in itself, largely because of the memory contention that results, but with only 1 GB of memory it means that IIS, the app, SQL Server, the OS, and any other tasks or maintenance are all fighting over very little memory. Windows manages this by giving memory to the active processes and aggressively taking it away from inactive ones. In this situation it can take many seconds, or even minutes, for a large process like SQL Server to get back enough of its memory to completely service a request. (One way to at least bound the contention is sketched after this list.)
Profiler made 90% of the problem go away: This is a big clue that memory is likely the culprit, because things like Profiler typically have exactly this effect on this particular problem: the Profiler task keeps SQL Server just a little bit active all of the time. Frequently that is just enough activity to keep it off the OS's "scavenger" list, or at least to reduce the impact somewhat.
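As an aside, and only as a stop-gap while everything still shares one box: you can cap how much memory SQL Server will take, so that it and IIS are not constantly stealing pages from each other. The 512 MB figure below is purely illustrative for a 1 GB machine, not a measured recommendation:

    -- Cap SQL Server's memory so IIS, the app and the OS keep some headroom.
    -- 512 MB is only an illustrative value for a 1 GB box; adjust to taste.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 512;
    RECONFIGURE;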
To confirm that memory really is the problem, turn off the Profiler first: it's having a Heisenberg effect on the problem, so you have to turn it off or you will not be able to see the problem reliably.
Run System Monitor (perfmon.exe) from another box, connecting remotely to the performance collection service on the box where your SQL Server and IIS are running. The easiest way to do this is to first remove the three default counters (they are local only) and then add the needed counters (listed below), making sure to change the computer name in the first drop-down so that it points at your SQL box.
Send the collected data to a file by creating a "Counter Log" in perfmon. If you are unfamiliar with this, the easiest approach is probably to collect the data into a tab- or comma-separated file that you can open with Excel to analyze.
Set up your perfmon to collect to a file and add the following counters to it:
-- Processor\% Processor Time [_Total]
-- PhysicalDisk\% Idle Time [for each disk]
-- PhysicalDisk\Avg. Disk Queue Length [for each disk]
-- Memory\Pages/sec
-- Memory\Page Reads/sec
-- Memory\Available MBytes
-- Network Interface\Bytes Total/sec [for each interface in use]
-- Process\% Processor Time [see below]
-- Process\Page Faults/sec [see below]
-- Process\Working Set [see below]
For the Process counters (above), you want to include the sqlservr.exe process, any IIS processes, and any stable application processes. Note that this will ONLY work for "stable" processes; processes that are continually re-created as needed cannot be captured this way, because there is no way to specify them before they exist.
Run this collection to a file during the time when the problem most frequently happens. Set the collection interval to something close to 10-15 seconds (this collects a lot of data, but you will need that resolution to pick out the separate events).
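If you also want a SQL-side view while the log is running, SQL Server 2008 exposes its own counters through a DMV; a quick spot check (my addition, not part of the perfmon setup itself) could look like this:

    -- SQL Server's internal memory counters. Page life expectancy dropping,
    -- or Total staying well below Target, suggests memory pressure.
    SELECT object_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN ('Total Server Memory (KB)',
                           'Target Server Memory (KB)',
                           'Page life expectancy');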
After you have one or more incidents, stop the collection and open the collected data file with Excel. You will probably have to reformat the timestamp column to make it readable and show hours, minutes, and seconds. Use your IIS log to find the exact time of the incidents, then look at the perfmon data to see what was going on before and after each incident. In particular, you want to see whether SQL Server's working set was small before and large after, with a lot of page faulting in between; that is the clearest sign of this problem.
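For spot checks in between, you can also ask SQL Server directly how much physical memory it currently holds and how much the box has left. These two DMVs exist in SQL Server 2008; they are only a cross-check on the perfmon data, not a replacement for it:

    -- SQL Server's current working set and cumulative page faults
    SELECT physical_memory_in_use_kb,
           page_fault_count,
           memory_utilization_percentage
    FROM sys.dm_os_process_memory;

    -- Physical memory left on the box as a whole
    SELECT total_physical_memory_kb,
           available_physical_memory_kb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;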
Either separate IIS and SQL Server onto two different boxes (preferred), or add more memory to the box. I would consider 3-4 GB the minimum.
The problem here is that it is most likely either peripheral to, or only a contributor to, your main problem. Remember that Profiler made 90% of your incidents go away, so what remains may be a different problem, or it may be only the most extreme aggravator of the original one. Because of its behavior, I would guess that it is either something cycling its cache or some other background maintenance of the application server processes.