
Well, it's a hypothetical scenario, but what I'm trying to understand is the path from a post-mortem log (say, a SQL Server Profiler trace) to identifying the code in an ORM situation. To make it less vague, consider a scenario like this:

  • SQL Server 2008
  • Entity Framework as an ORM

So, in that scenario, what is the common path for a DBA (who is also a VB.NET developer) to take from the log to diagnosing which code (in this case, which LINQ queries) is causing the trouble? In this case, the app itself is fine, but it is hurting the response time of other apps using the same database/server.

Would that be radically different from a Java + Hibernate setup?

EDIT: I want to understand the path from the trace to the culprit metaquery. If the app had SQL embedded in it, a "Find in Files" session (maybe with some regex, in extreme cases) could reduce the inspection targets to a few dozen suspects instead of tens or even hundreds of source files. How do I get to that stage when using an ORM (in this case, EF)?

Fabricio Araujo

4 Answers


I would strongly suggest adding metadata to the connection in order to trace activity back to the application. The connection string has an Application Name property, and there is also session-level data that can be used in the form of CONTEXT_INFO.

https://stackoverflow.com/questions/323494/sql-server-modifying-the-application-name-property-for-auditing-purposes

Of course, all of this requires application changes, but it is good for tracing and auditing in general, so baking it in from the beginning is really useful.
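
A minimal sketch of both ideas, assuming plain ADO.NET underneath the EF context (the server, database, application name, and tag value are placeholders):

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Text;

    class ConnectionTagging
    {
        // "Application Name" shows up in Profiler's ApplicationName column.
        const string ConnStr =
            "Server=.;Database=MyDb;Integrated Security=SSPI;" +
            "Application Name=MyEfApp";

        static void Main()
        {
            using (var conn = new SqlConnection(ConnStr))
            {
                conn.Open();

                // CONTEXT_INFO is a 128-byte binary slot on the session;
                // stamp it with a module/screen tag for later auditing.
                byte[] tag = Encoding.ASCII.GetBytes("OrdersModule");
                using (var cmd = new SqlCommand("SET CONTEXT_INFO @ctx", conn))
                {
                    cmd.Parameters.Add("@ctx", SqlDbType.VarBinary, 128).Value = tag;
                    cmd.ExecuteNonQuery();
                }
                // On the server, SELECT CONTEXT_INFO() returns the tag for
                // this session, so traces can be filtered by it.
            }
        }
    }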

Cade Roux

Typically, when a query is "interfering" with another query, it is either blocking or deadlocking. Neither of these would be visible in a default SQL Profiler trace.

If you are experiencing deadlocks, you will want trace flags 1204 and 1222 turned on in SQL Server so the deadlock output gets sent to the error log. You could also re-run the trace and add the deadlock events.
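
If you'd rather flip those flags from code than from SSMS, here is a minimal sketch (this assumes sysadmin rights; the connection details are placeholders, and the flags stay in effect only until the instance restarts unless added as startup parameters):

    using System.Data.SqlClient;

    class EnableDeadlockTraceFlags
    {
        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=.;Database=master;Integrated Security=SSPI"))
            {
                conn.Open();
                // Same effect as running DBCC TRACEON in SSMS: -1 applies
                // the flags globally, and deadlock details are then written
                // to the SQL Server error log.
                using (var cmd = new SqlCommand(
                    "DBCC TRACEON (1204, 1222, -1)", conn))
                {
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }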

Blocking issues can be polled for. You can either roll your own blocking-log code or use a tool like SQL Spotlight to detect these occurrences. Blocking can be checked by looking at the master.sys.sysprocesses blocked column for a value <> 0. If you decide to roll your own, a script such as this one could be used as a starting point to get the executing session information, which can be logged to a table, or you can set up a job to pull information from sp_whoisactive.
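
As a rough sketch of the roll-your-own route, using the sysprocesses check described above (the poll interval is arbitrary, and a real version would log to a table instead of the console):

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    class BlockingPoller
    {
        // Rows with blocked <> 0 are sessions waiting on another spid.
        const string Query = @"
            SELECT spid, blocked, waittime, lastwaittype, cmd
            FROM master.sys.sysprocesses
            WHERE blocked <> 0";

        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=.;Database=master;Integrated Security=SSPI"))
            {
                conn.Open();
                while (true)
                {
                    using (var cmd = new SqlCommand(Query, conn))
                    using (var rdr = cmd.ExecuteReader())
                    {
                        while (rdr.Read())
                            Console.WriteLine(
                                "spid {0} blocked by {1} for {2} ms ({3})",
                                rdr["spid"], rdr["blocked"],
                                rdr["waittime"], rdr["lastwaittype"]);
                    }
                    Thread.Sleep(5000); // poll every 5 seconds
                }
            }
        }
    }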

I would also check your Profiler trace (or take a new one) and correlate it with basic server performance counters (CPU, disk queue, network throughput) to see if any one resource is the bottleneck during a particular query's execution. Often a problem query is obvious from the Profiler trace because the number of reads or the CPU on the statement- or batch-completed event is very high. A counter log captured over the same period as a Profiler trace can be opened in SQL Profiler and correlated directly with the saved trace output.

It's also a good idea to set up the SQL Server Management Data Warehouse to collect server and query performance stats over time. It has some nice drill-down reports that will help you identify problems within a certain window of time. It's not always as precise as a Profiler trace, but it can give you helpful information, such as the types of waits you are experiencing and other resource-bottleneck patterns, which can easily be correlated with other queries via the Query Stats reports.

Alf47

I'm probably off-base here with my response, because I'm not really going to address finding the offending code, other than saying: knowing the executed SQL from the trace leads to knowing the associated ORM entity, which leads to Find All References in Visual Studio. Hopefully the code is DRY, but that's a separate topic.

I'll focus on SQL performance a little more though.

Assuming the trace has sufficient information to log the ORM activity (probably T-SQL batch start/complete events), I would sort the trace output by CPU, reads, writes, and duration, looking for high values; see the sketch below. When found, I would optimize as necessary: indexes to reduce reads, fewer inserts/updates where possible (fewer indexes where possible; read from a different, more heavily indexed table than the one you insert/update), less processing/triggers. It would take an entire series of replies to enumerate everything, but look for the top offenders and try to address them first.
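
As a sketch of that sorting step, assuming the trace was saved to a file (the path below is a placeholder), sys.fn_trace_gettable lets you query the trace like a table and order by any of those columns:

    using System;
    using System.Data.SqlClient;

    class TraceTopOffenders
    {
        // EventClass 10 = RPC:Completed, 12 = SQL:BatchCompleted.
        // Swap "Reads DESC" for CPU, Writes, or Duration as needed.
        const string Query = @"
            SELECT TOP (20) TextData, CPU, Reads, Writes, Duration
            FROM sys.fn_trace_gettable(N'C:\traces\capture.trc', DEFAULT)
            WHERE EventClass IN (10, 12)
            ORDER BY Reads DESC";

        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=.;Database=master;Integrated Security=SSPI"))
            using (var cmd = new SqlCommand(Query, conn))
            {
                conn.Open();
                using (var rdr = cmd.ExecuteReader())
                    while (rdr.Read())
                        Console.WriteLine("{0} reads | {1}",
                            rdr["Reads"], rdr["TextData"]);
            }
        }
    }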

All of that assumes that the ORM is the culprit. It is possible the server is experiencing issues not related to the ORM.

I recommend reading over this whitepaper several times; after identifying the source of the issue(s) (CPU, memory, or I/O), I would start walking through the bottleneck-specific detection steps identified in the whitepaper.

To skip the whitepaper and jump right in, at the server level I would use the built-in Activity Monitor to examine recent expensive queries, looking for excessive executions/duration. Address, rinse, repeat.
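
The information that Activity Monitor pane surfaces can also be pulled straight from the plan-cache DMVs; a hedged sketch, sorting by total CPU (the connection details are placeholders):

    using System;
    using System.Data.SqlClient;

    class RecentExpensiveQueries
    {
        // Roughly what Activity Monitor's "Recent Expensive Queries"
        // pane shows: aggregate stats per cached query.
        const string Query = @"
            SELECT TOP (20)
                qs.execution_count,
                qs.total_worker_time,
                qs.total_logical_reads,
                st.text
            FROM sys.dm_exec_query_stats AS qs
            CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
            ORDER BY qs.total_worker_time DESC";

        static void Main()
        {
            using (var conn = new SqlConnection(
                "Server=.;Database=master;Integrated Security=SSPI"))
            using (var cmd = new SqlCommand(Query, conn))
            {
                conn.Open();
                using (var rdr = cmd.ExecuteReader())
                    while (rdr.Read())
                        Console.WriteLine("{0} execs, {1} us CPU | {2}",
                            rdr["execution_count"],
                            rdr["total_worker_time"],
                            rdr["text"]);
            }
        }
    }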

Good luck :-)

Bennett Dill

How about these techniques using ToTraceString?

http://blog.cincura.net/227674-how-to-show-sql-command-created-by-entity-framework/
http://www.dotnetcurry.com/ShowArticle.aspx?ID=647

Collect and correlate with your trace.
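
As a sketch of the approach from those links, assuming an EF 4-style ObjectContext (the MyEntities context, Customer type, and the query itself are placeholders):

    using System;
    using System.Data.Objects;
    using System.Linq;

    class ShowGeneratedSql
    {
        static void Main()
        {
            using (var ctx = new MyEntities()) // hypothetical EF context
            {
                IQueryable<Customer> query =
                    ctx.Customers.Where(c => c.City == "London");

                // LINQ queries against an ObjectContext are ObjectQuery
                // instances underneath; ToTraceString() returns the SQL
                // that will be sent, ready to match against the trace.
                string sql = ((ObjectQuery)query).ToTraceString();
                Console.WriteLine(sql);
            }
        }
    }

The generated SQL names the underlying tables (EF aliases them as [Extent1], [Extent2], and so on), which is the hook from the trace text back to the entity and, from there, to Find All References.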

wBob