Array binding is a powerful feature that significantly improves performance by executing the same SQL statement with multiple sets of parameter values in a single batch, rather than sending the statements to the database one by one.
dotConnect Universal is a flexible data provider that allows connectivity to multiple databases, including Oracle, DB2, MySQL, SQLite, PostgreSQL, InterBase, and Firebird. This article demonstrates how to perform batch inserts and updates using UniCommand, with a practical example targeting PostgreSQL.
To insert or update large volumes of data efficiently, follow the steps below:
1. Create a database and its table.
With any SQL tool, you can create the table by running the following DDL statement:
CREATE TABLE batch_test (
  id int PRIMARY KEY,
  f_integer int,
  f_varchar character varying(100)
);
Alternatively, you can achieve the same result programmatically with the following C# code:
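The sketch below uses the UniConnection and UniCommand classes of dotConnect Universal; the connection string values (provider, host, database, credentials) are placeholders that you should adjust for your environment.

using System;
using Devart.Data.Universal;

class CreateBatchTestTable
{
    static void Main()
    {
        // Placeholder connection string for the PostgreSQL provider; adjust for your environment.
        string connectionString =
            "Provider=PostgreSQL;Host=localhost;Port=5432;Database=test;User Id=postgres;Password=postgres;";

        using (UniConnection connection = new UniConnection(connectionString))
        {
            connection.Open();

            // Create the batch_test table with the same DDL as above.
            UniCommand command = new UniCommand(
                "CREATE TABLE batch_test (" +
                "id int PRIMARY KEY, " +
                "f_integer int, " +
                "f_varchar character varying(100))",
                connection);
            command.ExecuteNonQuery();
        }
    }
}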
2. Create UniCommand and set a DML statement.
Define the desired DML statement (e.g. INSERT, UPDATE, or DELETE) by setting the CommandText property of the UniCommand object.
3. Add parameters.
Specify the parameter names and their types.
4. Fill parameter values.
Assign an array of values to each parameter of the UniCommand. Each array element corresponds to a separate execution of the SQL statement within the same batch.
5. Execute the batch insert.
Call the ExecuteArray() method to execute the command for all sets of parameter values in a single batch.
Below is a sample that demonstrates how to insert multiple records with a single execution using array binding in dotConnect Universal. To use the feature, assign arrays of values to the parameters of a UniCommand object and call the ExecuteArray() method.
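The sketch below reuses the batch_test table from step 1. The connection string, the parameter marker syntax, the UniDbType names, and the ExecuteArray overload that takes the number of parameter sets are illustrative and may need adjusting for your provider and version.

using System;
using Devart.Data.Universal;

class ArrayBindingSample
{
    static void Main()
    {
        // Placeholder connection string; adjust provider settings and credentials for your environment.
        string connectionString =
            "Provider=PostgreSQL;Host=localhost;Port=5432;Database=test;User Id=postgres;Password=postgres;";

        using (UniConnection connection = new UniConnection(connectionString))
        {
            connection.Open();

            // Step 2: set the DML statement with named parameters.
            UniCommand command = new UniCommand(
                "INSERT INTO batch_test (id, f_integer, f_varchar) VALUES (:id, :f_integer, :f_varchar)",
                connection);

            // Step 3: add the parameters and specify their types.
            command.Parameters.Add("id", UniDbType.Int);
            command.Parameters.Add("f_integer", UniDbType.Int);
            command.Parameters.Add("f_varchar", UniDbType.VarChar);

            // Step 4: fill the parameter values; each array element corresponds
            // to one execution of the statement within the batch.
            int rows = 5;
            int[] ids = new int[rows];
            int[] integers = new int[rows];
            string[] strings = new string[rows];
            for (int i = 0; i < rows; i++)
            {
                ids[i] = i + 1;
                integers[i] = i * 10;
                strings[i] = "Row " + (i + 1);
            }
            command.Parameters["id"].Value = ids;
            command.Parameters["f_integer"].Value = integers;
            command.Parameters["f_varchar"].Value = strings;

            // Step 5: execute all parameter sets in a single batch
            // (assumed overload that takes the number of parameter sets).
            command.ExecuteArray(rows);
        }
    }
}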
The array binding feature of UniCommand doesn't support setting the batch size explicitly; the optimal size is calculated automatically based on the input data.
This approach is especially beneficial when working with large datasets, as it reduces the number of round trips between the application and the database, significantly improving performance.