Optimizing your SQL code
To illustrate the importance of optimizing your SQL code, imagine the following scenario: Your Synergy data processing application runs in an office with approximately 100 simultaneous users. Each user requires 50 to 100 open cursors at any given moment, and each cursor requires about 10K for memory allocation. For cursors alone, this could quickly absorb up to 100 MB in database resources. To handle this demand as efficiently as possible, SQL Connection enables you to optimize your SQL Connection application in several ways, such as reusing cursors.
This topic introduces some of the issues that affect performance. Refer to your database documentation for additional information.
For optimal performance, reuse a cursor when you will repeat the same operation later in your program. Cursor reuse can significantly improve database and network performance: it saves the time required to open the cursor, and it uses less memory. See Reusing cursors.
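SQL Connection manages cursor reuse through its own API (%SSC_OPEN logical cursors), but the principle is general. The sketch below illustrates it with Python's sqlite3 module: one cursor and one parameterized statement are prepared once and rerun with new bind values, rather than building a fresh cursor and statement for every lookup. The table and data are invented for the example.

```python
# Illustrative sketch of cursor reuse, using Python's sqlite3 as a stand-in
# for any SQL client API. The cursor and statement are created once; only
# the bind variable changes on each execution.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO accounts VALUES (?, ?)",
                [(1, "Ann"), (2, "Bob"), (3, "Cho")])

cur = con.cursor()                      # open the cursor once
query = "SELECT name FROM accounts WHERE id = ?"

names = []
for account_id in (3, 1, 2):           # reuse the same cursor for each lookup
    cur.execute(query, (account_id,))  # only the bind value changes
    names.append(cur.fetchone()[0])

print(names)   # ['Cho', 'Ann', 'Bob']
```

With a client/server database, reusing a prepared cursor this way also avoids repeating the statement parse and open on the server for every operation.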
One advantage of relational databases, especially in client/server situations, is that you can retrieve only those fields (or columns) that you actually need. In other words, you don't have to transfer all columns in a row, unlike with ISAM files, for which the entire record is retrieved. If you take advantage of this capability, you can significantly reduce network traffic.
To optimize data retrieval across a network, consider using subroutines to access data from the database, and provide multiple versions of these routines, each tailored to the task at hand. The goal is to transfer the least amount of data possible.
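The suggestion above can be sketched as a pair of task-specific access routines. This is an illustrative Python/sqlite3 example, not SQL Connection code; the table and column names are invented. Each routine selects only the columns its job needs, so a pick-list lookup transfers one column while a contact screen transfers three, and neither transfers the whole row.

```python
# Hedged sketch: task-specific data-access routines that each select only
# the columns their job requires, instead of one routine that fetches
# everything. Schema and data are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE customers
               (id INTEGER PRIMARY KEY, name TEXT, address TEXT,
                phone TEXT, notes TEXT)""")
con.execute("INSERT INTO customers VALUES "
            "(1, 'Ann', '1 Elm St', '555-0100', 'VIP')")

def lookup_name(con, cust_id):
    # Narrow query for a pick-list: transfers one column, not the whole row.
    cur = con.execute("SELECT name FROM customers WHERE id = ?", (cust_id,))
    return cur.fetchone()[0]

def lookup_contact(con, cust_id):
    # Wider query for a contact screen: still only the columns that job needs.
    cur = con.execute(
        "SELECT name, address, phone FROM customers WHERE id = ?", (cust_id,))
    return cur.fetchone()

print(lookup_name(con, 1))      # Ann
print(lookup_contact(con, 1))   # ('Ann', '1 Elm St', '555-0100')
```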
Table creation is one of the slowest operations for a relational database, so using a database table as a work file (or “scratch” space) leads to poor performance. Keep in mind that it is seldom necessary to maintain transaction integrity on work files.
Instead of creating a relational database table as scratch space, consider using a standard Synergy ISAM file for this purpose. ISAM files typically outperform a relational database in these situations; in fact, many database consultants recommend a mixture of ISAM files and relational tables for high-performance online transaction processing (OLTP). If you must use a database table, consider leaving work tables in place and simply removing their data, instead of dropping and recreating them.
Optimizing atomic operations with %SSC_SQLLINK
Applications often use two cursors for data access and updates: the first fetches the data from the database, and the second updates the data for each row fetched. (In this mode of operation, autocommit is usually used.) To reduce resource requirements and overhead for an atomic operation, SQL Connection provides a positioned update mode, which enables you to update a single row using the same cursor you fetched it with. In other words, the cursor that’s opened for the %SSC_MOVE function can also be used to commit data with the %SSC_SQLLINK / %SSC_EXECUTE combination. See %SSC_SQLLINK, and for an example, see exam_fetch_update.dbl, which is in the connect\synsqlx subdirectory of the main Synergy/DE installation directory.
Improving network performance with prefetch caching
To improve network performance and reduce database operations, SQL Connection uses prefetch caching for fetches made with %SSC_MOVE (but not %SSC_EXECIO) if the cursor is opened with SSQL_SELECT and SSQL_STANDARD and if none of the %SSC_OPEN options that disable caching are passed.
If SQL Connection is able to use prefetch caching (and the cache size is optimal), only one network packet transfer is necessary to retrieve the result set for an SQL statement. However, if prefetch caching is not used, network packets are sent in both directions for every operation needed for the SQL statement. SQL Connection uses prefetch caching for both direct and SQL OpenNet connections.
The default prefetch buffer size is 32,768 bytes, but you can change this by passing a different value for the bufsize argument for %SSC_INIT. See the %SSC_INIT Discussion.
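The effect of the prefetch buffer can be estimated with back-of-envelope arithmetic. The row size and row count below are invented for illustration, and real packet behavior depends on the driver and network, but the sketch shows why a well-sized buffer matters: fetches are served from the local cache, so round trips scale with result-set bytes divided by buffer size rather than with row count.

```python
# Rough, illustrative estimate of network round trips with and without
# prefetch caching. bufsize is the documented default; row_bytes and rows
# are assumed values for the example only.
import math

bufsize = 32768        # default prefetch buffer size in bytes (%SSC_INIT)
row_bytes = 100        # assumed average size of one fetched row
rows = 10000           # assumed result-set size

without_cache = rows                                  # one round trip per fetch
with_cache = math.ceil(rows * row_bytes / bufsize)    # buffer-sized batches

print(without_cache, with_cache)   # 10000 31
```

Under these assumptions, caching cuts roughly ten thousand round trips to about thirty, which is why matching the buffer size to typical result sets is worth the tuning effort.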
Note that with some fetch/open combinations, the driver may search at length for additional rows to fill the prefetch buffer even when no more data will be returned. If this is the case, consider using SSQL_ONECOL or SSQL_POSITION (%SSC_OPEN options) to prevent prefetch caching.
Selecting the optimal cursor type
If you use the VTX12_SQLNATIVE or VTX11 database driver, you may be able to improve performance by changing the type of cursor that your application uses. The %SSC_OPEN and %SSC_CMD functions have several options that enable you to do this. For example, if your application reads large amounts of data from a SQL Server database and you use VTX12_SQLNATIVE, you can probably improve performance by setting the cursor type to forward-only (SSQL_CURSOR_FORWARD_ONLY) and read-only (SSQL_RO_CURSOR). For more information, see SQL Server database cursor types.
Reducing sockets for SQL Server and using IPC for Oracle
When connecting to SQL Server, you can use the SQL Server shared memory protocol, which can greatly improve performance by reducing the number of TCP/IP sockets used for a connection. See Using the SQL Server shared memory protocol.
For improved performance for direct (local) connections to an Oracle database on Unix, use IPC instead of TCP/IP.
Reducing memory and enabling more concurrent users
There are two ways to reduce the memory used by an SQL Connection application, which may enable it to support more concurrent users.
The first method is to lower the maxcur and maxcol values passed to %SSC_INIT. This works on all platforms.
The second method is to use the -s option for vtxnetd running on a Windows server. Reducing the -s setting reduces the size of the thread stack allocated to vtxnetd, which lowers the amount of memory used for SQL OpenNet. For more information, see vtxnetd and vtxnet2 programs.
Optimizing queries
A query’s construction can greatly affect an SQL Connection program’s performance. Below are some suggestions, but you should also refer to a general SQL reference and your database documentation.
- Be sure to qualify columns with table names or aliases in a SELECT statement, especially when there is a join. For example:
SELECT a.name, b.address FROM accounts a, addresses b WHERE b.account_id = a.id
This spares the query optimizer from having to determine which table each column comes from and from checking for conflicts (i.e., columns with the same name in other tables). Additionally, if someone later adds a column with the same name as a column used in your query, it won’t create an ambiguity that stops your query from working.
- Use an ORDER BY clause if you expect a query to return more than one row. Otherwise, rows may be returned in seemingly random order, especially if there is no primary key constraint.
- Avoid applying a function to a column in a WHERE clause. This can cause very poor performance because it prevents the query optimizer from using an index, resulting in a full table scan. Generally, you can eliminate the need for functions in WHERE clauses by optimizing data storage. For example, if a field is used only in uppercase form, store it that way. Or, if you must allow mixed case, consider creating a second column that holds an uppercase copy of the data just for searching.
- Never compare a column to null in a WHERE clause. Such clauses (e.g., “WHERE column_name = null”) are often the result of columns that default to null, and the comparison matches no rows, because comparing anything to null evaluates to unknown. It’s better to give the column a different default value, one that’s meaningless in the context; for example, for a column that defines a customer credit limit, you could use -1 as the default.
- Review the query plan to make sure your query is optimized. See your database documentation for information.
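The advice above about functions in WHERE clauses, and about reviewing the query plan, can be demonstrated with SQLite's EXPLAIN QUERY PLAN (other databases expose similar plan output under different commands). The table and index names below are invented. Wrapping the indexed column in upper() forces a scan; comparing the bare column lets the optimizer use the index.

```python
# Demonstrating that a function applied to an indexed column in a WHERE
# clause defeats the index. Uses SQLite's EXPLAIN QUERY PLAN; the plan text
# shows a SCAN step for the function form and an indexed SEARCH otherwise.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (name TEXT)")
con.execute("CREATE INDEX idx_name ON customers (name)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row describes the step taken.
    rows = con.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)

scan = plan("SELECT * FROM customers WHERE upper(name) = 'ANN'")
search = plan("SELECT * FROM customers WHERE name = 'Ann'")

print(scan)     # a SCAN step: every row is examined
print(search)   # a SEARCH step using idx_name
```

Checking plans like this before deployment is a cheap way to catch queries that have quietly fallen back to full table scans.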
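The null-comparison advice above can also be verified directly. This sketch (invented schema, SQLite again standing in for any SQL database) shows that “= NULL” matches nothing, which is why columns that default to null quietly break such WHERE clauses, and why a sentinel default such as -1 is easier to query.

```python
# Demonstrating that "column = NULL" never matches: comparison with null
# evaluates to unknown for every row, so the result set is empty. IS NULL
# is the correct test when nulls must be found at all.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, credit_limit INTEGER)")
con.execute("INSERT INTO customers VALUES (1, NULL)")
con.execute("INSERT INTO customers VALUES (2, 500)")

eq_null = con.execute(
    "SELECT id FROM customers WHERE credit_limit = NULL").fetchall()
is_null = con.execute(
    "SELECT id FROM customers WHERE credit_limit IS NULL").fetchall()

print(eq_null)   # [] -- "= NULL" matches no rows
print(is_null)   # [(1,)]
```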