Transactions, Locking and DataFlex

A Data Access Worldwide White Paper
by Ben Weijers

December 2000
Updated: October 2001

Concurrency

Handling transactions and locking in the DataFlex language

Handling transactions and locking using the command language

Handling Transactions and Data-Dictionaries

The DataFlex database

The DataFlex Connectivity Kit for Pervasive.SQL

The DataFlex Connectivity Kit for DB2

The DataFlex Connectivity Kit for SQL

The DataFlex Connectivity Kit for ODBC

This white paper discusses the way transactions and locking are handled by DataFlex and the different Connectivity Kits provided by Data Access. The paper starts with a short introduction to concurrency issues, then discusses the DataFlex programming language aspects, and finally goes into the details of the supported database formats.

Concurrency

Concurrency is the ability of multiple users to access and modify data simultaneously and is of vital importance in any database environment. The database should handle multiple users accessing the database simultaneously in a correct way. The means to achieve this are transactions and locking.

Transactions

A transaction is a unit of work that is done as a single atomic operation. Transactions either succeed or fail as a whole. For example consider a transaction that transfers money from one bank account to another. This involves withdrawing money from one account and depositing the money in the other account. It is important that both actions occur, it is unacceptable for one action to succeed and the other to fail. A database that supports transactions is able to guarantee that either both steps succeed or both steps fail.

The transaction support of a database system must have the ACID properties: Atomicity, Consistency, Isolation and Durability.

Transactions are started by a program. The program then manipulates data in the database. Eventually the transaction is committed or rolled back. A commit will make all changes made by the transaction permanent. A rollback will remove all changes made by the transaction, as if the transaction had never been started.

Most databases support all of the transaction properties. The Isolation property can usually be set up to be less restrictive; so-called Isolation Levels have been defined for this. The lowest level makes intermediate results available to other transactions, the highest level does not. There is an inverse relation between Isolation Level and concurrency: the higher the Isolation Level, the lower the concurrency.

Locking

A lock guarantees exclusive access to the object on which it is applied. All databases that support multi-user access use some form of locking. Locking is used to enforce the Isolation transaction property. Although there are a number of variants at the implementation level, databases basically support the following lock granularities: row locking, page locking and table locking.

Usually a database supports one or more of the lock granularities or uses a mechanism that will automatically select the best locking granularity for the task at hand. There is an inverse relation between lock granularity and concurrency: the smaller the granularity, the greater the concurrency. If, for example, we have a table of 10 rows stored in 5 pages of 2 rows each, 10 users can simultaneously lock a row, 5 users can simultaneously lock a page, but only one user can lock the table.

The use of locks can introduce a side effect known as a deadlock. A deadlock is a situation where two, or more, transactions are waiting on each other's locks to be released: transaction A waits for B and B waits for A. In a deadlock situation transactions wait indefinitely. This is an unacceptable situation. The database system must either prevent deadlocks from happening or detect deadlocks when they happen and resolve the situation.

There are deadlock avoidance strategies; the native DataFlex database uses one of these. Most database systems, however, use a deadlock detection and resolution strategy. A deadlock can be resolved by choosing a victim transaction from the list of deadlocked transactions. The victim is then stopped by issuing a rollback. This frees the locks claimed by the victim transaction, allowing the other transactions to obtain those locks and continue.

It is important to understand that a deadlock is not a programming error; it is a situation that can occur in certain environments. It is the application programmer's responsibility to handle deadlocks in a proper way. Application programmers typically handle deadlock errors in two ways: one is to report an error to the user and let the user re-enter the information, the other is to automatically retry the transaction a designated number of times.
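The retry approach can be sketched in DataFlex as follows. This is a minimal sketch, not code from the runtime: SaveOrder stands for a hypothetical application procedure that runs one transaction, and the Err indicator is assumed to reflect whether the last attempt failed (for example with a deadlock error reported by the backend).

```dataflex
// Minimal retry sketch. SaveOrder is a hypothetical procedure that
// performs one transaction; Err is assumed to be set when it fails.
Integer iAttempt
Move 0 To iAttempt
Repeat
    Increment iAttempt
    Send SaveOrder                     // one attempt at the transaction
Until (not(Err) or iAttempt >= 3)      // stop on success or 3 attempts
If (Err) Showln "Transaction failed, please re-enter the information"
```

After the designated number of attempts the error is simply reported to the user, combining both handling strategies described above.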

Again, deadlock is not a programming error. It is a condition that can occur in a certain environment. How likely a deadlock is depends on the application and the database backend; usually the chance is very small. Nevertheless, application programmers should handle deadlocks no matter how small the chance of one ever occurring.

Handling transactions and locking in the DataFlex language

A DataFlex program will alter data in tables using one of two methods: through Data Dictionary Objects (DDOs), or their predecessors Data Set Objects (DSOs), or through the manual commands of the command language.

All table changes in DDOs are handled through two messages, Request_save and Request_delete. Using DDOs greatly simplifies transaction handling. When you use DDOs, transaction support is built in (in fact, it is required). There are no migration or conversion issues with DDOs or DSOs: an application written using data-sets prior to DataFlex 3.1 will work without any required changes.

Manual methods include using the traditional commands Save, Saverecord, Delete, Lock, Reread and Unlock. Three additional commands were added in DataFlex 3.1 and VDF4: Begin_Transaction, End_Transaction and Abort_Transaction. Depending on the application's coding logic, changes may be required to make an application work properly with transactions. Even if a legacy application works properly with transactions, you may want to make changes in your code to take full advantage of the transaction handling.

If you have an application that uses DDOs or DSOs, you will want to refer to the section titled Handling Transactions and Data-Dictionaries. If you have applications that do not use DDOs or DSOs, you should review the following section.

Handling transactions and locking using the command language

The DataFlex programming language supports a number of commands to define transactions and use locking. These commands are: Lock, Reread, Unlock, Begin_Transaction, End_Transaction and Abort_Transaction.

The Lock, Reread and Unlock commands have been part of the DataFlex language for a long time. Up until DataFlex revision 3.1 there was no proper support for transactions. In DataFlex 3.1 support for transactions was added in the form of the three transaction commands. VDF and WebApp have always supported transactions.

Command

Description

Begin_Transaction

Marks the beginning of an explicit transaction. Every database action between this command and its accompanying End_transaction command is "the transaction".

Defining a transaction within a transaction is not considered an error. The inner transaction will be absorbed by the outer one, making both transactions act as if they were a single transaction.

Begin_Transaction and End_Transaction must be in the same scope; it is not allowed to place the End_Transaction command in a different scope.

Begin_Transaction will lock the database, see Lock for a description of locking the database.

End_Transaction

Marks the end of an explicit transaction. Every database action between this command and its accompanying Begin_Transaction command is "the transaction".

End_Transaction will unlock the database. See Unlock for a description of unlocking the database.

Abort_Transaction

Rolls back the transaction. The Abort_Transaction command will not jump to the End_Transaction command automatically; the flow of the program continues on the next line. It is the programmer's responsibility to ensure control jumps out of the transaction after issuing this command. If the programmer does not take care of the control flow, errors like DFERR_EDIT_WITHOUT_REREAD (4155) can occur.

Abort_Transaction will unlock the database. See Unlock for a description of unlocking the database.

Usually there will be little need to use the Abort_Transaction command. Transactions are automatically aborted if errors occur (see below) and the flow jumps to the End_Transaction command in that case.
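Because Abort_Transaction does not transfer control by itself, one common pattern is to place the transaction in a procedure and return immediately after aborting. A sketch under that assumption follows; CancelOrder, the OrderHea table and its Status column are illustrative names, not part of DataFlex itself:

```dataflex
Procedure CancelOrder
    Begin_Transaction
        Reread OrderHea                       // re-find the active record
        If (OrderHea.Status = "SHIPPED") Begin
            Abort_Transaction                 // rolls back and unlocks
            Procedure_Return                  // without this, execution
        End                                   // falls into the lines below
        Move "CANCELLED" To OrderHea.Status
        Saverecord OrderHea
    End_Transaction
End_Procedure
```

Note that both transaction commands stay in the same scope, as required; Procedure_Return only runs after the transaction has already been aborted.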

Lock

Locks the database. Every table open in the application that is not read-only or an alias will be locked. The database API will call the driver of each individual table to lock that table. Tables are locked in the order of the numbers they are opened under; normally this is the same order as defined in the filelist.

If the lock attempt generates error DFERR_LOCK_TIMEOUT (4106), the API will automatically retry the lock. The number of retries can be set by using the Set_Transaction_Retry command. A database driver may or may not generate the DFERR_LOCK_TIMEOUT error.

Some database systems use row locking. Row locking differs from table locking not only in granularity but also in timing: a table can be locked at any time, while a row can only be locked after it has been found. Drivers that connect to row-locking databases therefore usually do not implement the lock function itself, but lock rows as they are found while in a locked state.

Lock and Unlock are legacy commands. If you want your application to take full advantage of transaction processing we recommend that you replace these commands with Begin_Transaction and End_Transaction.

Reread

A Reread actually performs two actions: it locks the database and then re-finds records. If the Reread command has no arguments, every active record is re-found; if it has arguments, only the active records of the passed buffers are re-found. There is no difference between Reread and Lock in how the lock itself is done.

See Lock for a description of locking the database.
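A typical use of Reread in an implicit transaction looks like the sketch below. The Customer table and its Balance column are illustrative names; the pattern itself (find, reread, change, save, unlock) is the one described above.

```dataflex
// Assumes a key value has been moved into the index column first.
Find EQ Customer By Index.1          // establish the active record
Reread Customer                      // lock, then re-find Customer only
    Move (Customer.Balance + 100) To Customer.Balance
    Saverecord Customer
Unlock                               // ends the implicit transaction
```

Passing the buffer name to Reread limits the re-find to that table; a bare Reread would re-find every active record.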

Unlock

Unlocks the database. Every table, page or row that has been locked will be unlocked. The database API will call the driver of each individual table to unlock that table. Tables are unlocked in the order of the numbers they are opened under; normally this is the same order as defined in the filelist.

Lock and Unlock are legacy commands. If you want your application to take full advantage of transaction processing we recommend that you replace these commands with Begin_Transaction and End_Transaction.

 

Transactions support the ACID properties, which makes programming simpler. For example, suppose our application has the restriction that we can only order items that are in stock. In a non-transaction system this condition would have to be checked before trying to save the order. In a transaction system, however, one can start the transaction and then start saving the rows that make up the order. If somewhere along the way the condition is not met, issue an error, which returns the database to the state it had before the transaction started. If the condition is met for every row, issue an End_Transaction, thus committing the transaction. Using the transaction commands to define transactions is referred to as doing explicit transactions.
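The order example can be sketched with the transaction commands as follows. The Item and OrderLine tables and their columns are illustrative names, not part of DataFlex itself; the error behaviour is the one described in the next sections.

```dataflex
Begin_Transaction
    Reread Item                                  // lock and re-find Item
    Move (Item.On_Hand - OrderLine.Quantity) To Item.On_Hand
    If (Item.On_Hand < 0) Error 300 "Item out of stock"
    // Declaring the error rolls back every change made so far
    Saverecord Item
    Saverecord OrderLine
End_Transaction
// After an error, control resumes here with all changes rolled back
```

Because an error inside an explicit transaction triggers the rollback and the jump past End_Transaction, the stock condition never has to be pre-checked before saving.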

Since the transaction commands were not supported in DataFlex prior to revision 3.1, programmers would have been forced to adjust all their programs in order to use the new revision's transaction logic. To avoid this and offer a smooth transition into version 3.1, DataFlex supports so-called implicit transactions: an implicit transaction is started by the Lock or Reread command and ends with the Unlock command.

Errors in a transaction

Because transactions must support Atomicity, errors that occur during a transaction will force a rollback of the transaction. When the error occurs during an explicit transaction, control jumps to the line following the End_Transaction command. This behaviour means the Abort_Transaction command is rarely needed: transactions are usually rolled back because some error occurred, and since the occurrence of an error already results in a rollback, there is no need to explicitly use Abort_Transaction in such a case.

The only error that is an exception to this rule is the DFERR_LOCK_TIMEOUT (4106) error. If this error occurs, the program will check the retry setting and, if a retry must be done, jump to the beginning of the (implicit or explicit) transaction. The number of retries can be set by using the Set_Transaction_Retry command. If the object oriented programming style is used, the message Verify_Retry will be sent to the object where the transaction is defined. This function can return zero, in which case the transaction will be retried; a non-zero return value will stop the retry. If the logic loops through all its retries and still gets a timeout error, the timeout error will be treated as any other error would be.
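In object-oriented code the retry behaviour can thus be controlled by implementing Verify_Retry in the object where the transaction is defined. The sketch below assumes the signature implied by the description above (an integer-returning function where zero means retry); check your revision's documentation for the exact form.

```dataflex
// Sketch of a Verify_Retry implementation (assumed signature).
Function Verify_Retry Returns Integer
    // Return 0 to let the runtime retry the transaction,
    // any non-zero value to stop retrying and report the error.
    Function_Return 0
End_Function
```

Returning non-zero under some application-specific condition gives the object a way to abandon the retry loop before the Set_Transaction_Retry count is exhausted.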

Implicit transactions will not be aborted when an error occurs. Instead, the flow of the program will continue on the next line. Since implicit transactions were designed for compatibility reasons, the behaviour is the same as it was before transactions were introduced to DataFlex. Some database drivers may not be able to support implicit transactions.

There is a difference in how some DataFlex revisions handle errors in implicit transactions. DataFlex 3.1d, DataFlex 3.2 and Visual DataFlex 6.0, 6.2 and 7.0 will abort the implicit transaction if an error occurs. The flow of the program, however, will not jump out of the implicit transaction but continue on the next line. This behaviour change was made to accommodate non-DataFlex databases, but it does affect the way existing applications using DataFlex data work when errors occur in a locked state, and it will usually not be desired. If, for example, an implicit transaction tries to save 3 records and an error occurs while saving the second record, the transaction will be rolled back, but control will not jump to the end of the implicit transaction: the program will still try to save the third record. Since the rollback has removed the lock, the DFERR_EDIT_WITHOUT_REREAD (4155) error occurs.

In DataFlex 3.2 and Visual DataFlex 7.0 this can be avoided by setting the DF_TRANABORT_ONERROR global attribute; in the other revisions mentioned above there is no way to change this behaviour. Setting this attribute to false means the implicit transaction will not be rolled back if an error occurs. To adjust the setting the programmer should add the line: Set_attribute DF_TRANABORT_ONERROR To (False). In upcoming revisions of DataFlex the DF_TRANABORT_ONERROR attribute will be supported and its default value will change from true to false, ensuring that new revisions are compatible with pre-transaction code.

In Visual DataFlex 7.0 the attribute DF_TRANABORT_ONERROR is not defined in FMAC. If you want to set it in a VDF7 application, you will have to define it yourself; the internal attribute number is 22. To define it, add #REPLACE DF_TRANABORT_ONERROR |CI22 to your program code. In DataFlex 3.2 the attribute is defined in FMAC, so you can use it without having to define it first.

Please note that explicit transactions will always be aborted when an error occurs, no matter how the DF_TRANABORT_ONERROR attribute is set. Data Sets and Data Dictionaries automatically use explicit transactions, so setting the attribute has no effect on Data Set / Data Dictionary based applications.

Due to the differences between runtimes, the behavior of existing applications may change when converted to a newer version. Let's summarize how different runtime versions behave compared to pre-DataFlex 3.1 runtimes. The table below notes which runtimes behave differently, and which of those can be configured to behave the same.

Important: These differences only apply to applications that are not using data-dictionaries or data-sets. If you are using DDOs or DSOs, all runtimes exhibit the same, correct behavior.

Runtime

Behaviour

DF3.1, DF3.1b, DF3.1c

Exactly the same as in pre-3.1 runtimes. If an error occurs during an implicit transaction the transaction will not be aborted. The program continues in a locked state.

DF3.1d

Different from pre-3.1 runtimes. If an error occurs during an implicit transaction the transaction is aborted. The program continues in an unlocked state.

DF3.2

Different from pre-3.1 runtimes. If an error occurs during an implicit transaction the transaction is aborted. The program continues in an unlocked state. In DF3.2 the programmer can set the runtime to behave compatibly with pre-3.1 runtimes in the following way:

Set_attribute DF_TRANABORT_ONERROR To (False)

VDF4, VDF5

Exactly the same as pre-3.1 runtimes. If an error occurs during an implicit transaction the transaction will not be aborted. The program continues in a locked state.

VDF6

Different from pre-3.1 runtimes. If an error occurs during an implicit transaction the transaction is aborted. The program continues in an unlocked state.

VDF7 Service Pak 3

Exactly the same as in pre-3.1 runtimes. If an error occurs during an implicit transaction the transaction will not be aborted. The program continues in a locked state.

In VDF7 the programmer can set the runtime to behave incompatibly with pre-3.1 runtimes (aborting implicit transactions on error) in the following way:

#REPLACE DF_TRANABORT_ONERROR |CI22

Set_attribute DF_TRANABORT_ONERROR To (True)

Lock granularity

DataFlex only recently started supporting database systems other than the native DataFlex database. The native DataFlex database uses a DataFlex-specific way of locking that is not supported by all other database systems. Issuing a lock can therefore result in different behaviour depending on the database system in use.

The native DataFlex database supports table level locking only: one and only one transaction can have a table locked at any given time. Most other database systems support row level locking only; in those systems multiple concurrent transactions can have rows in one table locked at the same time. A further difference between table and row locking is the moment the actual lock is taken. With table locking, the table is locked the moment the lock is issued by the program. With row or page locking, a row or page can only be locked after it, or a row on it, has been found, so row or page locking systems lock while rows are being found in a transaction. This is a conceptual difference.

Some DataFlex programs misuse the lock behaviour of DataFlex to guarantee exclusive access to certain resources. Let's assume we want to write to an ASCII disk file from a DataFlex program and (mis)use the DataFlex lock mechanism to ensure only one DataFlex program writes to the file at any given time. When the underlying database is changed to a row-locking database, this logic will no longer work.

Handling Transactions and Data-Dictionaries

This discussion applies to the Data-Dictionaries and their predecessor, Data-sets. The term Data-Dictionary or DDO will be used to refer to both technologies.

DDOs alter tables using two methods, Request_save and Request_delete. Both methods use transactions (in fact, they require that your tables support transactions) and fully support all of the ACID property requirements. These methods both operate as follows:

  1. Method starts (Request_save or Request_delete)
  2. A Begin_Transaction is executed (unless a transaction is already started)
  3. Table data is reread and locked as needed. If table locking is used, all tables that participate in the save or delete are locked. If row (record) locking is used, rows are locked as they are reread.
  4. Data is validated, processed and table columns are updated
  5. If any error occurs, at any time, execution stops, the transaction is rolled back and the request method is completed; the error itself is reported after all locks have been removed.

Note that this applies to all errors. If the runtime encounters any unexpected error, this process is triggered. If the developer generates an explicit error with the Error command (typically done within the events validate_save or validate_delete), the process is triggered.

  6. If no errors occur, an End_Transaction is executed, committing all changes and releasing all locks.

Errors in a DDO transaction

The error rollback is a very simple process. If an error occurs, execution stops, the transaction is rolled back, and the request method is completed. While this is simple, it is powerful and has the following implications.

  1. A request_save or a request_delete may change many rows in many tables. For example, a delete may cascade and delete rows in descendant tables, causing hundreds of rows to be deleted and hundreds of parent rows to be altered. If an error occurs, all of these changes are rolled back.
  2. If you wish to stop a transaction inside of a DDO, use the error command. It is not expected that you will ever use the Abort_Transaction command within a DDO.
  3. The event methods Validate_save and Validate_delete were created to provide a place for a developer to check for any errors and to generate errors as needed. There is nothing stopping you from declaring an error in any DDO event (e.g. Update, Backout, Save_main_file). Any error generated at any time within the transaction will properly abort the transaction.
  4. Once an error is encountered within a method, none of the other commands within that method (or the methods that called it) are executed. The rollback occurs, the error is generated and control is returned to the method that made the request. For example, in the sample below, the code in the function after the Error command can never be executed.

     Function Validate_save Returns Integer

         Error 300 "Sorry, no save"
         // If an error occurred, this code is never executed
         Showln "You will never see me"
         // We don't need to return a value, the error triggers the stop

     End_Function // Validate_save

  5. Error reporting is always deferred until the transaction is rolled back and all locks are removed. You can, therefore, execute an error within a DDO transaction without worrying that the error will be reported while tables are locked.

Grouping multiple DDO saves or deletes

If you wish to group several DDO Request_save or Request_delete operations within a single transaction you can group them using Begin_transaction and End_transaction as follows:

Begin_transaction

Send Request_save Of hMyDDO1
Send Request_save Of hMyDDO2

End_transaction

If an error occurs in either DDO transaction both transactions are rolled back and execution resumes at the line following the End_Transaction.

Note that you should never use a Lock, Reread or Unlock command to group DDO transactions.

The DataFlex database

Since the DataFlex database is the native database for a DataFlex environment, we will first discuss the physical structure of the database and then look at the transaction issues when using the DataFlex database.

Structure

DataFlex is a file-based database: tables are stored in disk files. The DataFlex database is not a client/server database; when a DataFlex application accesses DataFlex data, it accesses the disk files directly.

Extension

Contents

dat

The actual data of the table. The .dat file contains a fixed size header that describes the layout of the records in the table and the indexes defined on the data in the table. After the header, the actual data is stored.

In case a form of compression is used, the dat file does not contain actual data but pointers into a .vld file containing the compressed data. See also the description of .vld.

k1 .. k15

The index files. Indexes are ISAM-like structures. They can be defined on a DataFlex table. The number in the file extension corresponds to the index number in the table definition.

When using indexed finds, the DataFlex driver will only get the record corresponding to an index entry if the columns outside of the index segments are accessed or if the index has the DF_INDEX_SEGMENT_CASE set to DF_CASE_IGNORED. This will reduce network traffic.

The maximum number of records of a DataFlex table determines the number of levels of an index. Every index will be capable of holding at least the maximum number of records; depending on the index segments, the number of levels needed to accommodate that maximum will vary. As records are created in a DataFlex table, the indexes fill up. Of all indexes in a table, one has the least capacity. If all available index entries are used, a DFERR_INDEX_FILE_FULL (23) error will be generated on every attempt to create a new record; in that case, the maximum number of records must be increased. Note that it is possible to have more records in a DataFlex file than the specified maximum. This is not an error but intended behavior: a DataFlex table can hold as many records as the smallest index will accommodate.

The more levels in an index, the more disk accesses are needed to reach the data when finding by that index. It is therefore advised not to set the maximum number of records too high, since this increases the number of levels. The optimal setting for the maximum number of records depends on the growth factor of the individual table.

tag

The column name file. In the tag file, the column names are stored in ASCII format, one column name per line: the name of column 1 on the first line, the name of column 2 on the second line and so forth.

The tag file is not required. When it is missing the column names will be reported as field_1, field_2 and so forth.
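For example, a tag file for a table with three columns might simply contain the following (the column names are illustrative):

```text
NAME
ADDRESS
CITY
```

With this tag file present, the columns are reported under these names instead of field_1, field_2 and field_3.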

vld

Variable length data file. This file will only be used if a form of compression is used on the table. If a form of compression is used, the .vld file contains the compressed data. The .dat file will contain pointers to the .vld file.

Three forms of compression are supported.

  • Fast compression uses run length encoding. Run length encoding replaces a repeating character by the character and the number of times it is repeated; a string containing 300 As would be replaced by A300.
  • Standard compression uses run length encoding combined with Huffman encoding based on the English frequency table.
  • Custom compression uses run length encoding combined with Huffman encoding based on the actual data in the table. Custom compression on an empty table cannot be based on the data in the table; in that case the English frequency table is used. Recompressing a table with custom compression that has data in it will create a new frequency table.

hdr

Header integrity check file. If header integrity is switched on, the .hdr file contains the check copy of the table header. Every time the .dat file is saved, the .hdr file will be created.

If a table is opened that has header integrity on, the .hdr file will be compared to the header in the .dat file. If there is a conflict, this will be reported and the table needs to be repaired.

 

There are two more database related disk files for a table, with extensions fd and def. These files are not specific to the DataFlex database! The fd file is used by the compiler to import table definitions; it is used at compile time, never at runtime. The def file can be generated from the DataFlex database maintenance utilities and contains a readable form of a table definition. Def files are meant for documentation purposes. They can also be imported as the base of a new table definition in the table definition utilities (load def).

One DataFlex table corresponds to a number of physical disk files. Whenever records are modified, created or deleted in a table, every disk file involved in the operation must be modified. The information in all files needs to be synchronized; if one of the files is out of sync, problems will occur.

Transactions

The native DataFlex database supports the ACID properties for transactions. DataFlex only supports the so-called Read Uncommitted Isolation Level, which means that intermediate results of transactions are always visible to other transactions. Traditionally the DataFlex database has always allowed concurrent processes to see each other's intermediate results; with the introduction of transactions this has not changed.

The native DataFlex database supports two types of transactions: client atomic and server atomic. Client atomic transactions do not protect against power failures or system crashes; server atomic transactions do give this protection.

After starting a client atomic transaction, all database modification actions are written to the table and to a virtual transaction file. When the transaction is committed, the virtual transaction file is discarded. When the transaction is aborted, the virtual transaction file is replayed in reverse order, restoring the database to the state it was in before the transaction started. If the system aborts abnormally somewhere during a transaction, or even during the rollback of a transaction, the database will be left in an undetermined state. Client atomic transactions are the default transaction type for the DataFlex database.

Server atomic transactions use the Netware Transaction Tracking System (TTS). In order to use TTS a Netware client must be installed and TTS must be enabled on the Netware server. Make sure the transaction type of the tables is set to server atomic. Make sure the tables in question reside on the Netware server. Tables with transaction type server atomic not residing on a Netware server will generate an error DFERR_FILE_NOT_ON_TTS_VOLUME (4288) when opened. The same error will be generated when TTS is not enabled on the Netware server. You do not need to make special provisions to use server atomic transactions, other than those required for transactions in general.

Locking

The native DataFlex database supports table level locking only. The Lock command will lock every open table in an application that is not Read Only or Alias. Locks are acquired in the order of the file numbers of the open files. There are two attributes that can be used to tune the DataFlex lock behaviour: DF_LOCK_TIMEOUT and DF_LOCK_DELAY.

DF_LOCK_TIMEOUT is the time, in milliseconds, that DataFlex tries to acquire a lock, rounded up to the next full second: 1..1000 is a lock timeout of 1 second, 1001..2000 is a lock timeout of 2 seconds and so forth. DF_LOCK_DELAY is the delay, in milliseconds, between two consecutive lock attempts.
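For example, to ask for roughly a five-second lock timeout with a tenth of a second between attempts, a program could set both global attributes as below (the values are illustrative; as global attributes they take no object argument):

```dataflex
Set_attribute DF_LOCK_TIMEOUT To 5000   // 4001..5000 ms -> 5 seconds
Set_attribute DF_LOCK_DELAY   To 100    // 100 ms between lock attempts
```

A longer timeout reduces DFERR_LOCK_TIMEOUT errors under contention at the cost of making a blocked application wait longer.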

Deadlock

Deadlocks are impossible in a properly set up DataFlex environment. Therefore there is no deadlock detection feature for the native DataFlex database. The transaction logic will automatically retry a transaction if there is a lock timeout error.

If the environment is improperly set up, deadlocks are possible. This situation can be resolved by using a proper set up. The possible causes of deadlocks are:

Cause

Resolve by

Improper alias set up

Make sure the DF_FILE_ALIAS attribute is set properly: one table should be set to DF_FILE_IS_MASTER and all others to DF_FILE_IS_ALIAS. See the DataFlex documentation on alias files for more information.

In case of an improper alias set up, the application is waiting for itself, a situation that will never be resolved, not even by a retry of the lock. An improper alias set up will always deadlock as soon as it is used. It is always a programming problem that can be resolved in the code.

Non matching order of tables

If two applications access the same database but not open the tables under the same numbers. The relative order of the tables in both applications must be the same. Make sure it is the same.

The retry mechanism for lock timeouts has improved handling this particular situation. It is not supported to use non matching relative orders however.

Consider applications A and B, both opening tables Ta and Tb. Application A opens Ta under number 1 and Tb under number 2; application B opens Ta under number 2 and Tb under number 1. This can lead to a deadlock if both applications start a lock and each succeeds in acquiring the lock on its first table. In this particular example we choose one application (say B) and adjust its internal table numbers to get the same relative order, for example by opening Ta under number 200 and Tb under number 230.
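The underlying rule is the classic one: deadlock on locks cannot occur if every application acquires its locks in the same global order. A minimal sketch, with Python threads standing in for the two applications:

```python
import threading

lock_ta = threading.Lock()  # stands in for table Ta
lock_tb = threading.Lock()  # stands in for table Tb

def run_transaction(name, results):
    # Both "applications" acquire Ta before Tb (the same relative order),
    # so the circular wait needed for a deadlock cannot arise.
    with lock_ta:
        with lock_tb:
            results.append(name)

results = []
threads = [threading.Thread(target=run_transaction, args=(n, results))
           for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert sorted(results) == ["A", "B"]  # both transactions completed
```

Had one thread taken Tb first, each could end up holding one lock while waiting forever for the other, which is exactly the non-matching-order situation described above.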

The DataFlex Connectivity Kit for Pervasive.SQL

The first Connectivity Kit that was available was the DataFlex Connectivity Kit for Btrieve. Since then Btrieve has evolved into Pervasive.SQL and the Connectivity Kit has evolved along with it. Pervasive's MicroKernel Database Engine (MKDE) is a record management system that offers client server facilities to applications.

Transactions and Locking

Pervasive.SQL supports two types of transactions, exclusive and concurrent. Exclusive transactions use table level locking, concurrent transactions use row level locking. Pervasive.SQL fully supports the ACID transaction properties.

The moment at which a lock is applied is different from the moment the native DataFlex database acquires a lock. In native DataFlex a lock is acquired when the lock command is issued. In Pervasive.SQL locks are acquired as data is read. In order to simulate the DataFlex behaviour, the Connectivity Kit supports the EXPLICIT_LOCKING setting. If set to true the Connectivity Kit will lock active records immediately when the lock command is issued.

Explicit locking involves extra find operations on the open tables: because Pervasive.SQL acquires a lock when a find operation is done, a find must be done to force the lock. In case of concurrent transactions the Connectivity Kit will re-find all active records. In case of exclusive transactions, it will re-find all active records and, for the tables that do not have an active record, find the first (physical) record in the table.

Combining the two settings (EL = EXPLICIT_LOCKING, TT = transaction type), the Connectivity Kit supports four different modes:

Mode  EL / TT          Behavior
I     off / exclusive  Tables are locked on the first find in the table after the lock command is issued.
II    off / concurrent Records are locked as they are found in the table.
III   on / exclusive   Tables are locked the moment the lock command is issued. This is compatible with the native DataFlex database.
IV    on / concurrent  Active records are locked the moment the lock command is issued. Consecutive find operations lock records as they are found.

Which of the four modes should be used depends on the application and the number of users. Table locks (exclusive transactions) should be used in processes that involve many rows of a table; row locking (concurrent transactions) should be used in most other cases. Whether the application needs explicit locking depends on the assumptions made in the transactions. The most likely modes to be used are modes II and III.

The Pervasive.SQL API offers two ways to lock: wait locking and nowait locking. With wait locking, the lock attempt only returns after success, after a timeout, or after being chosen as deadlock victim. With nowait locking, the lock attempt returns immediately, whether it succeeds or not. The DataFlex Connectivity Kit for Pervasive.SQL uses nowait locking; a timeout/delay mechanism is implemented in the Connectivity Kit itself and can be set up through the attributes DF_LOCK_TIMEOUT and DF_LOCK_DELAY.
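The timeout/delay mechanism on top of nowait locking can be sketched as follows (Python for illustration; `try_lock` is a hypothetical stand-in for the nowait lock call, while DF_LOCK_TIMEOUT and DF_LOCK_DELAY correspond to the real attributes):

```python
import time

def acquire_with_retry(try_lock, df_lock_timeout_ms, df_lock_delay_ms,
                       clock=time.monotonic, sleep=time.sleep):
    """Retry a nowait lock until it succeeds or DF_LOCK_TIMEOUT expires,
    sleeping DF_LOCK_DELAY milliseconds between attempts."""
    deadline = clock() + df_lock_timeout_ms / 1000.0
    while True:
        if try_lock():            # nowait: returns immediately, success or not
            return True
        if clock() >= deadline:
            return False          # caller would raise DFERR_LOCK_TIMEOUT (4106)
        sleep(df_lock_delay_ms / 1000.0)

# A lock that succeeds on the third attempt:
attempts = iter([False, False, True])
assert acquire_with_retry(lambda: next(attempts), 100, 1) is True
```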

If the Connectivity Kit detects a timeout when trying to lock, it will generate the DFERR_LOCK_TIMEOUT (4106) error. Whenever this error is generated in a transaction, the automatic retry mechanism is started. In case mode III is used, this is no problem. In all other cases the transaction may already be "well underway" when the error occurs: variables and properties used in the transaction may have changed. Make sure you reset them to the values they had at transaction begin. A good place to trap the retry and reset variables and properties is the Verify_Retry message sent in an object oriented environment.

Deadlock

The DataFlex Connectivity Kit for Pervasive.SQL uses nowait locking, so deadlocks never occur: transactions never wait for locks to succeed. This is handled at Connectivity Kit level; the Pervasive.SQL mechanism for timeout and deadlock does not apply to transactions using nowait locking.

The DataFlex Connectivity Kit for DB2

The DataFlex Connectivity Kit for DB2 allows DataFlex programs to access data in IBM DB2 Universal Databases. DB2 is a client server relational database management system (RDBMS) that runs on a variety of operating systems. SQL is used to access data in a DB2 database.

Transactions and Locking

DB2 fully supports the ACID transaction properties. Next to the normal rollback facilities DB2 also supports so called roll forward functionality. DB2 supports all Isolation Levels. The Connectivity Kit will always use the Read Uncommitted isolation level. Transactions in DB2 are logged. The DB2 database system can recover from system failures.

The lock granularity used by DB2 is row (or record) locking. The database manager may replace multiple row locks with a single table lock; this is called lock escalation. Lock escalation is implemented in several database platforms: if many records on a page or table are locked, the overhead of administering the locks gets so big that it is cheaper and/or faster to escalate the lock to an entire page or table. DB2 supports several lock types; the ones relevant here are update and exclusive locks.

DB2 will place exclusive locks on rows that are inserted, updated or deleted. The DataFlex Connectivity Kit for DB2 uses the Update lock provided by DB2. This type of lock is applied by using a special form of the SQL Select statement. The statement uses a so-called update clause. Update locks are placed while finding rows in a table. Once the row is actually updated, the lock will transform into an exclusive lock.

The update clause is only supported when used in a select statement that does not have a so called order by clause. The only statement generated by the DB2 Connectivity Kit that does not use an order by clause is the one generated for a Find Eq by Recnum operation. The Connectivity Kit will therefore only place Update locks on rows that are found through a Find Eq by Recnum operation after the database has been locked. A Reread generates a Find Eq by Recnum after a lock so rows found by a Reread will also be locked. All rows that are found in some other way, will be locked when they are updated or deleted.

The moment at which a lock is applied is different from the moment the native DataFlex database acquires a lock. In native DataFlex a lock is acquired when the lock command is issued. In DB2 locks are acquired as data is read in a locked state or as it gets updated or deleted.

When accessing DB2, to make sure rows are locked, always use either Reread or a Find Eq by Recnum in a locked state. Data Sets and Data Dictionaries use Reread when updating data, so programs using DDO/DSO objects will correctly lock all data that is updated. Procedural programs may require code changes for all locks to be applied.

A further difference between DataFlex and DB2 is the amount of data that is written in an update operation. DataFlex reads and writes entire records; DB2, on the other hand, is capable of updating one or more columns in a row. If, for example, one wants to update the name of a customer, DataFlex will read the entire record, modify the name and write the entire record back to the data file, whereas DB2 will only write the modified customer name to the database.
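The difference shows in the statement that gets issued: only the changed columns appear in the UPDATE. A sketch of how such a column-limited statement is built (Python; the function, table and column names are invented for illustration):

```python
def build_update(table, changed, recnum):
    """Build a column-limited SQL UPDATE, as opposed to rewriting
    the whole record the way the native DataFlex database does.

    'changed' maps the modified column names to their new values."""
    assignments = ", ".join(f"{col} = ?" for col in changed)
    sql = f"UPDATE {table} SET {assignments} WHERE recnum = ?"
    return sql, list(changed.values()) + [recnum]

# Only the customer name was modified, so only that column is written:
sql, params = build_update("customer", {"name": "New Name"}, 42)
assert sql == "UPDATE customer SET name = ? WHERE recnum = ?"
assert params == ["New Name", 42]
```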

The ability to update specified columns reduces the amount of procedural code that needs adjusting. If the program overwrites columns with values not based on previous values of columns in the row, there is no need to make sure the row is locked. If the program uses previous values of columns in the row to calculate new values, you must make sure the row is locked. In those cases a Reread or Find Eq by Recnum must be added to the code if it is not already present. Below are examples of a procedure that does not need to be adjusted and one that must be adjusted.

Procedure NoNeedToAdjust
    Clear SomeTable
    Begin_Transaction
        Repeat
            Find Gt SomeTable By SomeIndex
            If (Found) Begin
                Move "SomeValue" To SomeTable.SomeColumn
                Saverecord SomeTable
            End
        Until (Not(Found))
    End_Transaction
End_Procedure // NoNeedToAdjust

Procedure MustBeAdjusted
    Clear SomeTable
    Begin_Transaction
        Repeat
            Find Gt SomeTable By SomeIndex
            If (Found) Begin
                Move (SomeTable.SomeColumn * 1.16) To SomeTable.SomeColumn
                Saverecord SomeTable
            End
        Until (Not(Found))
    End_Transaction
End_Procedure // MustBeAdjusted

The procedure that must be adjusted can be adjusted the following way:

Procedure MustBeAdjusted
    Clear SomeTable
    Begin_Transaction
        Repeat
            Find Gt SomeTable By SomeIndex
            If (Found) Begin
                Reread
                Move (SomeTable.SomeColumn * 1.16) To SomeTable.SomeColumn
                Saverecord SomeTable
                Unlock
            End
        Until (Not(Found))
    End_Transaction
End_Procedure // MustBeAdjusted

Data Access has been recommending the use of Reread when updating records for quite some time. It is expected that the number of programs that need adjusting in this area is limited. If you want both procedures to apply update locks on the records involved, you should adjust both procedures.

Deadlock

Because DB2 uses row locking, deadlock can occur. Deadlock is detected by the DB2 deadlock detector, which arbitrarily selects one transaction to be rolled back. The rollback releases that transaction's locks, allowing other transactions to proceed. The rolled back transaction (application) is notified of the event by receiving the deadlock notification error. The interval between two deadlock detection runs is configurable. Next to deadlock detection, DB2 also supports a configurable lock timeout; if a lock timeout occurs, the transaction trying to acquire the lock is rolled back and notified by the timeout notification error. The DataFlex Connectivity Kit for DB2 reports both errors in the same way, since they result in the same behaviour for the transaction involved. The error is passed to DataFlex as DB2ERR_DEADLOCK_OR_TIMEOUT (12303).

The DB2 Connectivity Kit does not pass the DFERR_LOCK_TIMEOUT (4106) error to DataFlex, so when a lock timeout occurs, the transaction is not automatically retried. Since DB2 uses record locking, a lock timeout error can occur at any time during a transaction. If the DFERR_LOCK_TIMEOUT error were passed, the runtime would retry the transaction, jumping to the Begin_Transaction command. If the transaction was "well underway" when the lock timeout occurred, variables and properties used in the transaction will have changed, and an automatic retry would not give the desired result: a transaction rollback does not restore variables and properties to their original values. This situation cannot occur when using DataFlex data, where Begin_Transaction does all locking that will ever be done in a transaction, so there it is safe to automatically retry.

It is the programmer's responsibility to handle deadlocks and lock timeouts. The DataFlex programmer can check for the error condition and act upon it. The code below shows a sample that retries the transaction up to 9 times.

Procedure Retry9TimesWhenDeadlockOrTimeout
    Local Integer bAborted
    Local Integer bRetry
    Local Integer iRetryCount
    Repeat
        Send InitializeForTransaction
        Begin_Transaction
            Send DoSomeDB2Updates
        End_Transaction
        Get_Attribute DF_TRANSACTION_ABORT To bAborted
        If (bAborted And (LastErr = DB2ERR_DEADLOCK_OR_TIMEOUT)) ;
            Move (True) To bRetry
        Else ;
            Move (False) To bRetry
        Increment iRetryCount
    Until (Not(bRetry) Or (iRetryCount >= 10))
    If (bAborted) ;
        Send InformUser LastErr
End_Procedure // Retry9TimesWhenDeadlockOrTimeout

The DataFlex Connectivity Kit for SQL Server

The DataFlex Connectivity Kit for SQL Server allows DataFlex programs to access data in Microsoft SQL Server 7.0 databases. SQL Server is a high performance, client server relational database management system (RDBMS). SQL Server runs on Microsoft Windows NT Server networks and can be installed as a desktop database system on Windows NT Workstation, Windows 95 and Windows 98 machines. SQL is used to access data in a SQL Server database.

Transactions and locking

SQL Server fully supports the ACID transaction properties. Next to the normal rollback facilities, SQL Server also supports so called roll forward functionality. SQL Server supports all isolation levels; the Connectivity Kit always uses the Read Uncommitted isolation level. Transactions in SQL Server are logged, so the SQL Server database system can recover from system failures. The lock granularity used by SQL Server is row (or record) locking. The database manager may replace multiple row locks with a single page or table lock; this is called lock escalation. Lock escalation is implemented in several database platforms: if many records on a page or table are locked, the overhead of administering the locks gets so big that it is cheaper and/or faster to escalate the lock to an entire page or table. SQL Server supports several lock types; the ones relevant here are update and exclusive locks.

SQL Server will place exclusive locks on rows that are inserted, updated or deleted. The DataFlex Connectivity Kit for SQL Server uses the Update lock provided by SQL Server. This type of lock is applied by using a special form of the SQL Select statement. The statement uses a so-called update clause. Update locks are placed while finding rows in a table. Once the row is actually updated, the lock will transform into an exclusive lock.

The moment at which a lock is applied is different from the moment the native DataFlex database acquires a lock. In native DataFlex a lock is acquired when the lock command is issued. In SQL Server locks are acquired as data is read in a locked state.

Deadlock

Since SQL Server uses row locking, deadlock can occur. SQL Server periodically scans sessions waiting on a lock. If such a scan detects a session that was already waiting in the previous scan, a deadlock detection search is started. If deadlock is detected, the least expensive transaction to undo is chosen as deadlock victim and rolled back. The rollback releases its locks, allowing other transactions to proceed. The rolled back transaction (application) is notified of the event by receiving the deadlock notification error. A transaction can "volunteer" as victim by setting the deadlock priority; the DataFlex Connectivity Kit for SQL Server does not use this attribute. Next to deadlock detection, SQL Server also supports a configurable lock timeout; if a lock timeout occurs, the transaction trying to acquire the lock is rolled back and notified by the timeout notification error. The DataFlex Connectivity Kit for SQL Server reports both errors in the same way, since they result in the same behaviour for the transaction involved. The error is passed to DataFlex as DFMSSQLERR_DEADLOCK_OR_TIMEOUT (12303).

The SQL Server Connectivity Kit does not pass the DFERR_LOCK_TIMEOUT (4106) error to DataFlex, so when a lock timeout occurs, the transaction is not automatically retried. Since SQL Server uses record locking, a lock timeout error can occur at any time during a transaction. If the DFERR_LOCK_TIMEOUT error were passed, the runtime would retry the transaction, jumping to the Begin_Transaction command. If the transaction was "well underway" when the lock timeout occurred, variables and properties used in the transaction will have changed, and an automatic retry would not give the desired result: a transaction rollback does not restore variables and properties to their original values. This situation cannot occur when using DataFlex data, where Begin_Transaction does all locking that will ever be done in a transaction, so there it is safe to automatically retry.

It is the programmer's responsibility to handle deadlocks and lock timeouts. The DataFlex programmer can check for the error condition and act upon it. The code below shows a sample that retries the transaction up to 9 times.

Procedure Retry9TimesWhenDeadlockOrTimeout
    Local Integer bAborted
    Local Integer bRetry
    Local Integer iRetryCount
    Repeat
        Send InitializeForTransaction
        Begin_Transaction
            Send DoSomeSQLServerUpdates
        End_Transaction
        Get_Attribute DF_TRANSACTION_ABORT To bAborted
        If (bAborted And (LastErr = DFMSSQLERR_DEADLOCK_OR_TIMEOUT)) ;
            Move (True) To bRetry
        Else ;
            Move (False) To bRetry
        Increment iRetryCount
    Until (Not(bRetry) Or (iRetryCount >= 10))
    If (bAborted) ;
        Send InformUser LastErr
End_Procedure // Retry9TimesWhenDeadlockOrTimeout


The DataFlex Connectivity Kit for ODBC

The DataFlex Connectivity Kit for ODBC allows DataFlex programs to access data through ODBC. Open Database Connectivity (ODBC) is a widely accepted application programming interface (API) for database access. ODBC is designed to enable an application to access different database management systems (DBMS's) with the same source code. An application calls ODBC functions which are implemented in database specific modules called drivers.

ODBC uses SQL to access data. ODBC is an API specification. The API is independent from any DBMS or operating system. It is important to understand that ODBC is designed to expose database capabilities, not supplement them. Thus, application writers should not expect that using ODBC will suddenly transform a simple database into a fully featured relational database engine. Nor are driver writers expected to implement functionality not found in the underlying database.

A variety of database management systems can be accessed through ODBC. These include enterprise database systems such as Oracle, Sybase, DB2 and SQL Server, flat file systems such as dBase and Paradox, and even non-database systems such as Excel, XML and ASCII.

ODBC supports transactions if the database connected to supports them; the same goes for locking. How a DataFlex application behaves with respect to transactions and locking when using the ODBC Connectivity Kit therefore depends on the database used and on the ODBC driver used to connect to that database.

Transactions and locking

There are ODBC drivers for non-database environments; these generally do not support transactions and locking at all. The discussion below is limited to databases.

If the database supports the Read Uncommitted isolation level, the DataFlex Connectivity Kit for ODBC will use that isolation level. If Read Uncommitted is not supported, the Connectivity Kit does not set up an isolation level and will use the default level for the data source.

If transactions are supported they are used automatically by the ODBC Connectivity Kit. To determine whether a database supports transactions, see its documentation. In general, enterprise databases support transactions with rollback and roll forward features; flat file databases provide the full range from no support to full rollback/roll forward support. If there is no support for transactions, the ODBC Connectivity Kit cannot use them.

Most databases support row locking. The ODBC Connectivity Kit assumes the environment is a row locking environment; if the granularity happens to be greater (page or table), this also works. Assuming the smallest lock granularity ensures that locking (if supported) works in all cases. In general, databases use one of two approaches to locking: optimistic and pessimistic.

When optimistic locking is used, records are not really locked; all transactions have access to the record at all times. When a transaction requests a lock, a copy of the contents of the record is made. When, later in the transaction, the record is updated, the current record in the table is compared to the copy made at the time of the lock. If the two versions do not match, an error is declared and the update is not executed. The idea is that the chance of two transactions trying to modify the same row simultaneously is very small. Applications connecting to a database that uses optimistic locking should handle the optimistic lock error condition.

When pessimistic locking is used, records are actually locked when the transaction requests a lock; other transactions that try to lock the record have to wait for the lock to be released. The DataFlex Connectivity Kit for ODBC does not support optimistic locking. Most databases that support optimistic locking offer a way to switch to pessimistic locking; this should be used when accessing the database through the ODBC Connectivity Kit.
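The compare-before-write idea behind optimistic locking can be sketched as follows (Python, with an in-memory dictionary standing in for the table):

```python
class OptimisticLockError(Exception):
    """Raised when the row changed after the snapshot was taken."""
    pass

def optimistic_update(table, key, snapshot, new_row):
    """Update only if the row still matches the copy taken at 'lock' time;
    otherwise report the conflict instead of overwriting the other change."""
    if table[key] != snapshot:
        raise OptimisticLockError("row changed since it was read")
    table[key] = new_row

table = {1: {"name": "old"}}
snapshot = dict(table[1])       # copy made when the lock was requested
table[1] = {"name": "changed"}  # another transaction updates the row meanwhile
try:
    optimistic_update(table, 1, snapshot, {"name": "mine"})
    conflict = False
except OptimisticLockError:
    conflict = True
assert conflict                 # the update was refused, not silently lost
```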

The way the ODBC Connectivity Kit handles locks depends on the database that is used. At login time a number of attributes of the database are queried that determine the locking method. The method used will eventually select a so-called cursor type to be used when accessing the database. The cursor choice logic essentially is the following:

If DBMS supports a dynamic cursor use it

Else If DBMS supports positioned updates in a forward only cursor use it

Else If DBMS supports positioned updates in a keyset driven cursor use it

Else If DBMS supports SQLSetPos updates in a forward only cursor use it

Else If DBMS supports SQLSetPos updates in a keyset driven cursor use it

Else use forward only cursor
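The same choice logic, written out as a small function (Python; the capability flags are simplified stand-ins for the ODBC attributes queried at login):

```python
def choose_cursor(caps):
    """Mirror the Connectivity Kit's cursor choice order,
    given a set of capability flags reported by the DBMS."""
    if "dynamic" in caps:
        return "dynamic"
    if "positioned_update_forward_only" in caps:
        return "forward_only_positioned"
    if "positioned_update_keyset" in caps:
        return "keyset_positioned"
    if "setpos_forward_only" in caps:
        return "forward_only_setpos"
    if "setpos_keyset" in caps:
        return "keyset_setpos"
    return "forward_only"   # fall-back: no update support

# A dynamic cursor always wins if the DBMS supports it:
assert choose_cursor({"dynamic", "setpos_keyset"}) == "dynamic"
assert choose_cursor(set()) == "forward_only"
```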

SQL is a set oriented language. A find (select) operation will return a set of records instead of just one like in record oriented environments. To traverse these sets a mechanism known as database cursors is used. A cursor allows a non set oriented environment to obtain information from a set one row at a time.

Dynamic cursors

Dynamic cursors are the most advanced type of cursor. Unfortunately very few databases support dynamic cursors. If a database supports dynamic cursors, the Connectivity Kit will use that type of cursor. Dynamic cursors support record locking in all find modes.

The moment at which a lock is applied is different from the moment the native DataFlex database acquires a lock. In native DataFlex a lock is acquired when the lock command is issued. In ODBC using dynamic cursors locks are acquired as data is read in a locked state.

Positioned updates

A positioned update is an update at the current position of an active cursor. In order to be able to use this form of updating a special form of the select statement must be used. The select statement must contain a so called update clause.

The update clause will instruct the database to lock the record in question. Most enterprise databases support positioned updates through one or more cursor types. The update clause will only be generated when a Find Eq by Recnum operation is done. The Connectivity Kit will therefore only place Update locks on rows that are found through a Find Eq by Recnum operation after the database has been locked. A Reread generates a Find Eq by Recnum after a lock so rows found by a Reread will also be locked. All rows that are found in some other way, will be locked when they are updated or deleted.

The moment at which a lock is applied is different from the moment the native DataFlex database acquires a lock. In native DataFlex a lock is acquired when the lock command is issued. In ODBC locks are acquired as data is read in a locked state or as it gets updated or deleted.

When accessing ODBC, to make sure rows are locked, always use either Reread or a Find Eq by Recnum in a locked state. Data Sets and Data Dictionaries use Reread when updating data, so programs using DDO/DSO objects will correctly lock all data that is updated. Procedural programs may require code changes for all locks to be applied.

A further difference between DataFlex and ODBC is the amount of data that is written in an update operation. DataFlex reads and writes entire records; ODBC (actually SQL), on the other hand, is capable of updating one or more columns in a row. If, for example, one wants to update the name of a customer, DataFlex will read the entire record, modify the name and write the entire record back to the data file, whereas ODBC should only write the modified customer name to the database.

The ability to update specified columns reduces the amount of procedural code that needs adjusting. If the program overwrites columns with values not based on previous values of columns in the row, there is no need to make sure the row is locked. If the program uses previous values of columns in the row to calculate new values, you must make sure the row is locked. In those cases a Reread or Find Eq by Recnum must be added to the code if it is not already present, as in the adjusted example below.

Procedure MustBeAdjusted
    Clear SomeTable
    Begin_Transaction
        Repeat
            Find Gt SomeTable By SomeIndex
            If (Found) Begin
                Reread
                Move (SomeTable.SomeColumn * 1.16) To SomeTable.SomeColumn
                Saverecord SomeTable
                Unlock
            End
        Until (Not(Found))
    End_Transaction
End_Procedure // MustBeAdjusted

Data Access has been recommending the use of Reread when updating records for quite some time, so the number of programs that need adjusting in this area is expected to be limited. If you want update locks to be applied on the records involved, you should adjust the procedure as shown.

SQLSetPos updates

ODBC provides an alternative way to update records from SQL through an API known as SQLSetPos. This API can be used to update and delete rows in a table. Even where SQLSetPos is supported, the details of that support vary. SQLSetPos can be used to lock a row exclusively, update a row, delete a row or refresh a row; not all of these possibilities have to be supported by a database. If the database supports updating rows through SQLSetPos, the Connectivity Kit will use SQLSetPos.

When SQLSetPos supports exclusive locks, it is called directly after each find in a locked state, so records found in a locked state are locked as they are found.

If the exclusive lock is not supported, the Connectivity Kit performs a so-called "dummy update" to ensure that the row is locked. Directly after a row has been found, the row is updated: the dummy update column is set to the value that has just been found. By default the dummy update column is the record identity column, the column least likely to change. If a different column should be used for the dummy update logic, this can be set up through the intermediate file keyword DUMMY_UPDATE_COLUMN. Another reason to use an alternative dummy update column is that the record identity column cannot be updated: some databases support system assigned columns, like auto-increment columns, and it is generally considered an error to try to update such a column. After the dummy update has been done, a re-find is done to ensure the latest version is in the record buffer.
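The dummy update sequence can be sketched as follows (Python; `find`, `update` and `refind` are hypothetical stand-ins for the Kit's internal ODBC calls, and `recnum` stands in for the record identity column):

```python
def lock_via_dummy_update(find, update, refind, dummy_column="recnum"):
    """Force a row lock on a database without SQLSetPos exclusive locks:
    rewrite one column with the value just read, then re-find the row."""
    row = find()                              # the row to be locked
    update(dummy_column, row[dummy_column])   # write the same value back -> row is locked
    return refind()                           # refresh the buffer with the latest version

# Trace the calls with simple stand-ins:
calls = []
row = {"recnum": 7, "name": "x"}
result = lock_via_dummy_update(
    find=lambda: row,
    update=lambda col, val: calls.append((col, val)),
    refind=lambda: row,
)
assert calls == [("recnum", 7)]   # the dummy update rewrote the identity value
assert result == row
```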

If SQLSetPos is supported, all records that are read in a locked state are locked as they are read. Depending on the level of support, this may be as simple as placing a lock or as complicated as doing a dummy update and re-finding the record.

No update support

If the database reports no support for dynamic cursors, positioned updates or SQLSetPos, the Connectivity Kit uses a forward only cursor. It is not defined how locking works in this case. In general these types of environments lock rows as they are updated, deleted or inserted: locks are applied at the moment of the DataFlex Save, Saverecord or Delete command, and no locks are applied when finding in a locked state.

Summary on locking

There are a lot of options and unknown factors for ODBC in general; a database may support a particular feature or it may not, and it would be quite confusing to keep track of all the different possibilities and implementations. Luckily, most programmers only connect to one or two databases through ODBC.

Data Access has tested a number of environments to see how they support locking and transactions.

The following table lists the tested environments with their locking type support (DC = dynamic cursor, PU = positioned updates, SPDU = SQLSetPos dummy update):

Environment Lock Type Comments
DB2 UDB 7.1 PU Make sure to set the maximum number of statements per connection to a value higher than 1; the suggested value for DB2 is 90. This can be set in the DB2_DRV.INT configuration file with the keyword Max_Active_Statements. If the Max_Active_Statements setting is not higher than 1, locking will not work correctly when accessing DB2 through the ODBC Connectivity Kit.
MS SQL 7.0 DC  
Sybase ASA 7.0 PU  
Oracle 8.0 PU  
Oracle 8i PU  
MS Access 2000 SPDU Make sure Access is set up to use pessimistic locking. The way to set this up can be found at http://support.microsoft.com/support/kb/articles/Q225/9/26.ASP.


Deadlock

In most databases that can be accessed through ODBC, deadlock can occur. Whether deadlock is detected depends on the database in use; see the database documentation to find out if deadlocks can occur and, if so, whether detection is supported. If deadlock is detected, the transaction is aborted and the deadlock detection error is sent to the victim transaction. If the database supports a lock timeout, the same happens when a lock timeout occurs. There is no way the DataFlex Connectivity Kit for ODBC can know up front what the database's error numbers for these two events are. Errors generated by databases are reported under number 12289; there is no way to know whether an error was a deadlock, a timeout, or some other error generated by the database in use.

Since the ODBC Connectivity Kit does not know whether the database generated a timeout error, it also does not pass the DFERR_LOCK_TIMEOUT (4106) error to DataFlex. Even if it could detect the lock timeout, the DFERR_LOCK_TIMEOUT error would not be generated: since most databases use record locking, a lock timeout error can occur at any time during a transaction. If the DFERR_LOCK_TIMEOUT error were passed, the runtime would retry the transaction, jumping to the Begin_Transaction command. If the transaction was "well underway" when the lock timeout occurred, variables and properties used in the transaction will have changed, and an automatic retry would not give the desired result: a transaction rollback does not restore variables and properties to their original values. This situation cannot occur when using DataFlex data, where Begin_Transaction does all locking that will ever be done in a transaction, so there it is safe to automatically retry.

The only way to detect whether deadlock occurred is by parsing the error text of the latest error. The ODBC Connectivity Kit formats its errors in a special way: <SQLState> (<Native error number>)-<Error text>. If the SQLState that is set when deadlock or timeout occurs is known, a DataFlex program can test for this state; SQLState is a five character string. Not all databases pass native error numbers through ODBC; if a database does, the DataFlex programmer can also check the native error number.
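A small parser for this error format can look as follows (Python for illustration; note that which SQLState value signals deadlock or timeout depends on the database in use, so the "40001" value and the native error number below are examples only, not something the Kit guarantees):

```python
import re

# The ODBC Connectivity Kit formats errors as:
#   <SQLState> (<Native error number>)-<Error text>
ERROR_FORMAT = re.compile(r"^(?P<state>.{5}) \((?P<native>-?\d+)\)-(?P<text>.*)$")

def parse_ck_error(message):
    """Split a Connectivity Kit error message into its three parts,
    or return None if the message does not match the format."""
    m = ERROR_FORMAT.match(message)
    if not m:
        return None
    return m.group("state"), int(m.group("native")), m.group("text")

# Example message; "40001" and 1205 are illustrative values only:
state, native, text = parse_ck_error("40001 (1205)-Transaction was deadlocked")
assert state == "40001"
assert native == 1205
assert text == "Transaction was deadlocked"
```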

Contacting Data Access Worldwide

Data Access Worldwide
14000 SW 119 Ave
Miami, FL 33186
305-238-0012
Domestic Sales: 800-451-3539
Fax: 305-238-0017
email: sales@dataaccess.com
Newsgroup Server: news.dataaccess.com
Internet: http://www.dataaccess.com

Data Access Worldwide - Asia Pacific
Suite 5, 333 Wantirna Road, Wantirna VIC 3152 Australia
Phone: +61 3 9800 4233 f: +61 3 9800 4255
Sales: asiapacific@DataAccess.com
Support: support.asiapacific@DataAccess.com
Internet: http://www.DataAccess.com/AsiaPacific

Data Access Worldwide - Brasil
Av. Paulista, 1776 - 21st Floor
São Paulo - SP - Brazil
CEP 01310-921
Phone: 5511-3262-2000
Fax 5511-3284-1579
Sales: info@dataaccess.com.br
Support: suporte@dataaccess.com.br
Internet: http://www.dataaccess.com.br

Data Access Worldwide - Europe
Lansinkesweg 4
7553 AE Hengelo
The Netherlands
Telephone: +31 (0)74 - 255 56 09
Fax: +31 (0)74 - 250 34 66
Sales: info@dataaccess.nl
Support: support@dataaccess.nl
Internet: http://www.dataaccess.nl

Data Access Technical Support
800-451-3539 / 305-232-3142
email: support@dataaccess.com
Visit our Support Home page to see all of our Support options: http://www.dataaccess.com/support

Data Access Technical Knowledge Base    http://www.dataaccess.com/kbase
Many answers to technical problems can be found online in the Data Access Technical Knowledge Base. Here, you can access the same live data that Data Access Worldwide technical support and development staff use to enter and track technical articles. 

Copyright Notice
This document is property of Data Access Corporation. With credit to Data Access Corporation for its authorship, you are encouraged to reproduce this information in any format either on paper or electronically, in whole or in part. You may publish this paper as a stand alone document within your own publications conditional on the maintenance of the intent, context, and integrity of the material as it is presented here.

DataFlex is a registered trademark of Data Access Corporation.

NO LIABILITY FOR CONSEQUENTIAL DAMAGES
To the maximum extent permitted by applicable law, in no event shall Data Access Corporation be liable for any special, incidental, indirect, or consequential damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or any other pecuniary loss) arising out of the use of or inability to use any information provided in this document, even if Data Access Corporation has been advised of the possibility of such damages. Because some states and jurisdictions do not allow the exclusion or limitation of liability for consequential or incidental damages, the above limitation may not apply to you.