Configuring for Performance and Resilience

A Synergy application’s performance and resilience can often be significantly improved by careful system design and configuration. This topic doesn’t provide details for any particular system, but it does provide general guidelines for improving performance and achieving high availability (resilience). It contains the following sections:

General performance recommendations
Configuring for resilience
Configuring virtual machines
Design and configuration for client/server systems
Configuring Windows terminal servers
Optimizing network data access with xfServer
Configuring SAN for optimal data access
Food for thought: Results from our testing

Note

Issues with third-party products and their interactions are beyond the scope of this documentation and Synergy/DE Developer Support. A common example of this is operating system virtualization, where Synergy/DE is supported on the operating system, and the virtualization software acts as a hardware layer. We recommend that you maintain support contracts with third-party product vendors (e.g., operating system and virtualization software vendors) for assistance with these issues.

Systems for Synergy applications can be configured in a number of ways: stand-alone (monolithic) systems, virtualized systems, and client/server configurations with data and/or processing on a server. A correctly configured stand-alone system with sufficient resources provides the best performance for Synergy applications. Such systems are very fast, and adding a hot standby makes them resilient. (See Configuring for resilience.) We recommend that Synergy applications use a stand-alone system when possible. If you find that your applications are pushing the system’s limits (too many users, intensive end-of-year processing, etc.), improve the stand-alone system, if possible, before considering moving data or processing out to the network.

General performance recommendations

For a Synergy application, optimization involves carefully configuring the hardware and software that support the application, and carefully designing and coding the Synergy application itself.

Keep the following in mind when configuring your system for Synergy applications:

CPU performance and sizing

Application design

A Synergy application’s design and the way it is coded can make a big difference in the way it performs. Make sure you use established best practices for programming and note the following recommendations:

Data files

Optimally configuring and using data files can also affect performance. Note the following:

Configuring for resilience

The key to resilience is redundancy. Regardless of your company’s configuration model—stand-alone, virtual, client/server—your servers should have built-in redundancy, such as dual power supplies and RAID 1 disks. Data center monitoring software (such as OpenManage Server Administrator from Dell) can alert you to the failure of any hardware component in the system so that it can be replaced before the alternate fails.

The four figures below show configurations at various levels of redundancy and therefore resilience.

Level 0 shows a basic system that is backed up regularly using the synbackup utility. The system also has a UPS (uninterruptible power supply).

[Figure: Level 0 configuration]

Level 1 retains the UPS and adds dual power supplies and RAID 1 disks for use by synbackup.

[Figure: Level 1 configuration]

Level 2 includes a cold standby system. (A cold standby is a duplicate machine that can be ready to go once the disks are moved over and it’s powered on.) Both machines have dual power supplies. The main server is configured to make backups using synbackup to removable RAID 1 disks. If the main server fails, the disks can be removed and plugged into the cold standby machine.

[Figure: Level 2 configuration]

Level 3 below shows a system with a hot standby. (A hot standby is a duplicate machine with a shared disk that is ready to go should the primary fail.) The main server is attached via fibre channel to a SAN array with SSD drives. (15,000 RPM SAS drives could also be used.) There should be dual paths from the server to the SAN, and dual controllers, battery backup, and RAID 1 drives. The hot standby system is powered up and attached to the same SAN array so that if the main server crashes, the hot standby can take over and the Synergy application can be back up and running within minutes.

Note

When using a hot standby system, you should set system option #36 to ensure that write/store/delete operations are flushed to the SAN array. (Alternatively, you can selectively use the FLUSH statement.) Because the contents of the main server’s operating system cache are lost when the hot standby starts up, flushing (writing the cache to disk) prevents data corruption when the hot standby is brought into service. Flushing protects data integrity, but it can hurt write/store performance because operating system caching is effectively bypassed. If you use option #36 system-wide, consider disabling it with %OPTION for programs that write or update large amounts of data, such as day-end processing.

Backups are generally performed using snapshot software on the SAN array, coordinating with the Unix or Windows operating system, rather than through the operating system itself. Since thousands of users may be accessing the system, it is imperative that the synbackup utility be used to freeze I/O during backup operations to prevent corrupt ISAM files.

For another layer of redundancy, you may want to add a remote data center backup. The hot standby machine covers the case where the main server crashes. A remote data center comes into play when the main data center becomes unusable, such as after a natural disaster.

Configuring virtual machines

Virtual machines are inherently slower than equivalent physical systems because the virtualization software uses system resources and because cores on a virtual machine do not necessarily map directly to cores on the physical machine due to oversubscription. Consequently, on a virtualized system your Synergy application will not get as much CPU power as it would on a physical system.

Virtualization adds only a small overhead to CPU-bound work, but I/O operations (including Toolkit, low-level windows, and especially WPF-type graphics) slow down significantly compared to a physical system. If the latest virtual I/O hardware instructions are either not present or not enabled, the system may get even slower as more virtual CPU cores are added.

The following Xeon processors, introduced in 2010, and newer models provide the VT-x, VT-d, and EPT features required for acceptable I/O performance: Beckton (multiprocessor), Clarkdale (uniprocessor), and Gainestown (dual-processor). In addition, dedicating multiple server-class graphics cards to a virtual machine may improve .NET WPF application performance.

On a multi-socket machine, pin each virtual machine to a specific CPU socket; otherwise, cross-socket memory access can negatively impact performance.

Note the following:

Design and configuration for client/server systems

On a client/server system, Synergy data is located on a remote server and, optimally, some data processing takes place on that server as well. When you move a Synergy application from a stand-alone configuration to a client/server configuration, it is likely to need significant architectural changes to approach the performance it had when stand-alone. For example, data updates and random read statements can be 100 times slower in a client/server environment than in a stand-alone environment, especially with a WAN (see Food for thought: Results from our testing).

We recommend the following:

Configuring Windows terminal servers

Terminal servers are machines with the Remote Desktop Services (Terminal Services) role enabled. Do not activate this role unless it is absolutely necessary. File servers, for example, should not have this role (use a separate server for data).

We recommend the following:

Optimizing network data access with xfServer

xfServer provides reliable, high-performance access to remote Synergy data. It manages connections to files and file locks, preventing data corruption and loss, and it can shift the load for data access from the network to the server, easing network bottlenecks. Whenever possible, do not put user-specific files, such as temporary work files, files you may sort, and print files, on the server; it is better to redirect them to a local, per-user location.
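As a minimal sketch of this split on a Unix client, shared data can stay on the server (accessed through xfServer's @server: file-spec syntax) while per-user work and print files are redirected locally. The logical names DAT, WRK, and PRT and the server name are illustrative, not part of any standard configuration:

```shell
# Shared ISAM data stays on the server via xfServer's @server: syntax;
# per-user work and print files go to a local directory.
# Logical names and the server name below are hypothetical.
export DAT='@dataserver:/usr/app/dat/'        # shared data files (remote)
export WRK="/tmp/work.${USER:-guest}/"        # temp/sort/work files (local)
export PRT="/tmp/print.${USER:-guest}/"       # print files (local)
mkdir -p "/tmp/work.${USER:-guest}" "/tmp/print.${USER:-guest}"
echo "WRK=$WRK"
```

The point of the split is that only shared data pays the network round-trip cost; scratch and print traffic never leaves the client.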

There are two environment variables you can use with xfServer to improve performance, SCSPREFETCH and SCSCOMPR.
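As a sketch, both are typically set in the client's environment before the application starts. The on/off values shown here are assumptions; check the Synergy/DE environment variable documentation for the exact values your version supports:

```shell
# Illustrative settings only; consult the Synergy/DE environment
# variable reference for the values supported by your version.
export SCSPREFETCH=1   # prefetch records during sequential reads
export SCSCOMPR=1      # compress data sent between client and server
echo "SCSPREFETCH=$SCSPREFETCH SCSCOMPR=$SCSCOMPR"
```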

The Select classes not only make it easier to write code that accesses Synergy data (using SQL-like statements), they can also improve performance because reading the ISAM files and selecting the desired records take place on the server. Only the necessary records are then sent over the network to the client. If you want to transmit only the necessary fields within records, you can use the Select.SparseRecord() method or the Select.Sparse class. A corresponding method, SparseUpdate(), can be used when writing data. For sparse records to be truly effective in reducing network traffic, SCSCOMPR should be set as well. Use the DBG_SELECT environment variable to determine how well your Select queries are optimized. See System-Supplied Classes for more information about the Select classes.

On a wireless network or WAN, use xfServer connection recovery (SCSKEEPCONNECT) to improve resilience. This feature enables an xfServer client application to seamlessly reconnect to the server and recover its session context after an unexpected loss of connectivity. (See Using connection recovery (Windows).)
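A hedged sketch of enabling connection recovery for a client session follows; the value shown is an assumption, so confirm the supported settings in the connection recovery documentation:

```shell
# Request xfServer connection recovery for this session so a dropped
# WAN/wireless link can be reestablished without losing session context.
# The value is illustrative; see "Using connection recovery (Windows)".
export SCSKEEPCONNECT=1
echo "SCSKEEPCONNECT=$SCSKEEPCONNECT"
```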

For more information about xfServer in general, see What is xfServer?

Note

On Windows, do not set system environment variables such as TEMP and TMP to point to an xfServer location. Doing so can cause Visual Studio and other Windows applications to crash.

Configuring SAN for optimal data access

For best performance with Synergy ISAM data, we recommend using high-quality 15,000 RPM SAS disks or SSDs configured in RAID 1 mirror sets. (Other RAID configurations are not recommended.) SANs should use fibre channel, or drives should be directly attached. Note the following:

Food for thought: Results from our testing

To illustrate how different configurations affect performance, we did some testing with similar single-user Windows and Unix systems and an ISAM file with one million 200-byte records, one key, and data compression at 50%. (Note that although the systems were similar, they used different hardware and had different capabilities. The figures cited below are meant to show the differences between operations, not operating systems.) Here’s what we found for local (non-network) access:

Operation                                                        Records per second
                                                                 Windows      Linux
STORE to an ISAM file                                            88,000       68,000
READS from an ISAM file opened in input mode                     588,000      1,485,000
READS from an ISAM file opened in update mode                    175,000      410,000
READS from an ISAM file opened in update mode with /sequential   330,000      657,000

The following table shows the results for xfServer and network access with Gigabit Ethernet. The first four rows show local xfServer access; you can compare them with network xfServer access (in the next five rows) to see how a physical network affects performance. Results for mapped drive access are supplied for comparison purposes only; accessing Synergy data via mapped drives is not recommended.

Operation                                                                          Records per second
Local xfServer STORE                                                               37,000
Local xfServer buffered STORE                                                      69,000
Local xfServer READS without prefetch                                              46,000
Local xfServer READS with prefetch                                                 434,000
xfServer STORE from Linux client to Windows server                                 5,000
xfServer buffered STORE from Linux client to Windows server                        37,000
xfServer READS from Linux client to Windows server without prefetch                6,000
xfServer READS from Linux client to Windows server with prefetch                   105,000
xfServer READS from Linux client to Windows server with prefetch and compression   141,000
READS from a mapped drive with a single user                                       430,000
READS from a mapped drive when a file is open for update by another user           435