Speed Up Inserts Into SQL Server From pyodbc
Solution 1:
UPDATE: pyodbc 4.0.19 added a Cursor#fast_executemany option that can greatly improve performance by avoiding the behaviour described below. See this answer for details.
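For reference, here is a minimal sketch of what enabling that option looks like; the connection string, table, and columns below are placeholders rather than anything taken from the question:

import pyodbc

# Placeholder connection string -- adjust driver/server/database for your environment.
cnxn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
crsr = cnxn.cursor()
crsr.fast_executemany = True  # available in pyodbc 4.0.19 and later

crsr.execute("CREATE TABLE #Temp (id bigint, txtcol nvarchar(50))")
sql = "INSERT INTO #Temp (id, txtcol) VALUES (?, ?)"
params = [(1, 'foo'), (2, 'bar'), (3, 'baz')]
crsr.executemany(sql, params)  # parameters are now sent in bulk rather than one sp_prepexec per row
cnxn.commit()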
Your code does follow proper form (aside from the few minor tweaks mentioned in the other answer), but be aware that when pyodbc performs an .executemany, what it actually does is submit a separate sp_prepexec for each individual row. That is, for the code
sql = "INSERT INTO #Temp (id, txtcol) VALUES (?, ?)"
params = [(1, 'foo'), (2, 'bar'), (3, 'baz')]
crsr.executemany(sql, params)
the SQL Server actually performs the following (as confirmed by SQL Profiler):
exec sp_prepexec @p1 output,N'@P1 bigint,@P2 nvarchar(3)',N'INSERT INTO #Temp (id, txtcol) VALUES (@P1, @P2)',1,N'foo'
exec sp_prepexec @p1 output,N'@P1 bigint,@P2 nvarchar(3)',N'INSERT INTO #Temp (id, txtcol) VALUES (@P1, @P2)',2,N'bar'
exec sp_prepexec @p1 output,N'@P1 bigint,@P2 nvarchar(3)',N'INSERT INTO #Temp (id, txtcol) VALUES (@P1, @P2)',3,N'baz'
So, for an .executemany "batch" of 10,000 rows you would be
- performing 10,000 individual inserts,
- with 10,000 round-trips to the server, and
- sending the identical SQL command text (INSERT INTO ...) 10,000 times.
It is possible to have pyodbc send an initial sp_prepare and then do an .executemany calling sp_execute, but the nature of .executemany is that you would still make 10,000 separate calls (and round-trips), just executing sp_execute instead of the full INSERT INTO ... statement. That could improve performance if the SQL statement were quite long and complex, but for a short one like the example in your question it probably wouldn't make all that much difference.
One could also get creative and build "table value constructors" as illustrated in this answer, but notice that it is only offered as a "Plan B" when native bulk insert mechanisms are not a feasible solution.
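For illustration only, here is a rough sketch of that "Plan B" approach (the insert_with_tvc helper is hypothetical; it targets the same #Temp table as the earlier example, and the 1,000-row batch size reflects SQL Server's limit on row value expressions in a single VALUES clause):

# Hypothetical helper: build one INSERT per batch using a table value constructor,
# i.e. INSERT INTO ... VALUES (?, ?), (?, ?), ... instead of one statement per row.
def insert_with_tvc(crsr, rows, batch_size=1000):
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        placeholders = ", ".join("(?, ?)" for _ in batch)
        sql = "INSERT INTO #Temp (id, txtcol) VALUES " + placeholders
        # Flatten [(1, 'foo'), (2, 'bar')] into [1, 'foo', 2, 'bar'] for the parameter list.
        params = [value for row in batch for value in row]
        crsr.execute(sql, params)

With two parameters per row, 1,000 rows per statement also stays under SQL Server's 2,100-parameter limit.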
Solution 2:
It's good that you're already using executemany(). [Struck out after reading other answer.]
It should speed things up a (very little) bit if you move the connect() and cursor() calls for your insert_cnxn and insert_cursor outside of your while loop. (Of course, if you do this, you should also move the two corresponding close() calls outside of the loop as well.) In addition to not having to (re)establish the connection every time, re-using the cursor will prevent having to recompile the SQL each time.
However, you probably won't see a huge speed-up from this, because you're only making ~10 passes through that loop anyway (given that you said ~100,000 rows a day and your loop groups them 10,000 at a time).
One other thing you might look into is whether there are any "behind-the-scenes" conversions being made on your OrderDate parameter. You can go to SQL Server Management Studio and look at the execution plan of the query. (Look for your insert query in the "recent expensive queries" list by right-clicking on the server node and choosing "Activity Monitor"; right-click the insert query and look at its Execution Plan.)