An application runs a routine once a week. The last run was extremely slow, and we know more data will be processed during the next run. By far the slowest statement in the routine joins a temp table to a Products table, and Profiler shows a very high number of reads, suggesting the join is poorly indexed. During the last run the Products table had 200,000 rows and the temp table had 1,200.
update tmp
set tmp.col1 = pd.col1,
tmp.col2 = pd.col2,
tmp.col3 = pd.col3
from #temptable tmp
, Products pd with (nolock)
where tmp.col2 = pd.col2
or tmp.col2 = pd.col3
or tmp.col2 = pd.col4
or tmp.col2 = pd.col5
I only have one chance to apply an indexing strategy. The temp table is generated from data that only exists for a short period of time, and no copy exists, so it cannot be recreated in SQL Server before the next run. The plan cache no longer contains an execution plan for the statement.
The query should be rewritten with ANSI-92 join syntax, but I'm dealing with it as found.
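For reference, a behavior-preserving ANSI-92 form of the statement would look like the sketch below; this is illustrative only, and the live statement stays as found.

update tmp
set tmp.col1 = pd.col1,
    tmp.col2 = pd.col2,
    tmp.col3 = pd.col3
from #temptable tmp
inner join Products pd with (nolock)
    on tmp.col2 = pd.col2
    or tmp.col2 = pd.col3
    or tmp.col2 = pd.col4
    or tmp.col2 = pd.col5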
The Products table has an index on each of the columns col2, col3, col4 and col5, but no covering composite index and no INCLUDE columns for the values the update reads.
The temp table has no indexing.
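Since the temp table is created inside the routine, an index could be added there just before the update runs. A minimal sketch, assuming col2 has an indexable type; the index name is a placeholder:

-- must be run in the same scope where #temptable exists
create index IX_tmp_col2 on #temptable (col2);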
I have not tried anything because there is no way to test before the next live run.
Can anyone advise whether I should apply a single composite index covering the four Products columns, or four separate indexes, one per column, each with INCLUDEs for col1, col2 and col3?
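To make the two options concrete, here is how I understand them (index names are placeholders; note that SQL Server does not allow a column to appear both as an index key and in the INCLUDE list, so redundant INCLUDEs are dropped):

-- Option A: one composite index over the four join columns,
-- with col1 included so the update's reads are covered
create index IX_Products_cols2345 on Products (col2, col3, col4, col5) include (col1);

-- Option B: four single-column indexes, each covering col1, col2 and col3
create index IX_Products_col2 on Products (col2) include (col1, col3);
create index IX_Products_col3 on Products (col3) include (col1, col2);
create index IX_Products_col4 on Products (col4) include (col1, col2, col3);
create index IX_Products_col5 on Products (col5) include (col1, col2, col3);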