Triggers - Some Do's and Don'ts
Some Do's
- Do experiment with small test cases so that you are comfortable with what triggers are and how they work before attempting to implement a complex application that uses them.
- Do remember that when you change the type or length of a data dictionary field that has associated triggers, you should recompile:
- All trigger functions associated with the field.
- All Object Access Modules of tables that contain the field as a real or virtual column.
- All functions that make *DBOPTIMIZE references to table(s) containing the field.
- The list of objects to recompile is easily obtained by producing a full listing of the definition of the field.
- Do remember that when you change the layout of a database table that has associated triggers, you should recompile:
- The Object Access Module of the table.
- All trigger functions associated with the table.
- Any functions that make *DBOPTIMIZE references to the table.
- The list of objects to recompile is easily obtained by producing a full listing of the definition of the table.
Some Don'ts
- Do not do any I/O to the table with which the trigger is linked. Attempting such I/O, directly or indirectly, may cause a recursive call to the table's Object Access Module. Do not attempt to use *DBOPTIMIZE to circumvent this rule. Such attempts will cause the table cursor of the active Object Access Module to become lost or corrupted.
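LANSA RDML examples are beyond the scope of this list, but the recursion hazard can be sketched in Python (all class and trigger names here are invented for illustration):

```python
# Hypothetical Python sketch (not LANSA RDML) of the recursion hazard:
# an after-read trigger that performs I/O on its own table re-enters
# that table's Object Access Module (OAM) on every read.

class ObjectAccessModule:
    """Stand-in for a table's Object Access Module."""
    def __init__(self, name, after_read=None):
        self.name = name
        self.after_read = after_read  # trigger fired after each read
        self.depth = 0                # re-entrancy counter (illustration only)

    def read(self):
        self.depth += 1
        if self.depth > 5:  # guard so the sketch terminates instead of looping
            raise RecursionError(f"OAM for {self.name} called recursively")
        try:
            row = {"id": 1}            # pretend this is a database access
            if self.after_read:
                self.after_read(self)  # fire the after-read trigger
            return row
        finally:
            self.depth -= 1

def bad_trigger(oam):
    # The trigger reads the table it is attached to, so every read
    # fires the trigger again: read -> trigger -> read -> trigger -> ...
    oam.read()

orders = ObjectAccessModule("ORDERS", after_read=bad_trigger)
```

Calling `orders.read()` here raises `RecursionError`; in the real product the failure mode is a lost or corrupted table cursor rather than a clean exception.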
- Do not use triggers on tables that have more than 799 real and virtual columns (the 800th column position is reserved for the standard @@UPID field).
- Do not make triggers too expensive to execute. For example, an unconditioned trigger that is executed after every read of a table, and that itself performs, say, 3 database accesses, will at least quadruple the time required to read the base table. Triggers are a very useful facility, but they are not magic: when you set up a trigger to do a lot of work, your throughput will be reduced accordingly. The use of triggers, and the estimation of the impact they exert on application throughput, are entirely your responsibility as an application designer.
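The arithmetic behind the "at least quadruple" claim, assuming each database access costs roughly the same:

```python
# Back-of-envelope cost of the example above (assumes uniform access cost).
base_accesses = 1        # one access to read the base table row
trigger_accesses = 3     # the unconditioned after-read trigger's own accesses
total = base_accesses + trigger_accesses
slowdown = total / base_accesses
print(slowdown)  # 4.0 -- i.e. at least a quadrupling of read time
```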
- Do not introduce dependencies between triggers. For instance, suppose trigger A (before update) sets a value in field X. Setting up trigger B (also before update) to run after trigger A, relying on the "knowledge" that trigger A has already executed (and thus set field X), is not a good idea. This is an example of "interdependence" between triggers and it is not a good way to use them. In this case the logic of trigger B should be inserted directly into trigger A, immediately after the point at which it sets the value into field X.
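A hypothetical Python sketch (not LANSA RDML; the field names and formulas are invented) of folding trigger B's logic into trigger A, so there is one trigger with no ordering dependency:

```python
# Hypothetical sketch (not LANSA RDML): instead of a separate trigger B
# that depends on trigger A having already set field X, trigger B's
# logic is inserted into trigger A directly after X is set.

def trigger_a_before_update(row):
    row["X"] = row["PRICE"] * row["QTY"]  # original trigger A logic sets X
    # Former trigger B logic, now placed immediately after X is set,
    # so the ordering dependency between two triggers disappears:
    row["TAX"] = row["X"] * 0.10
    return row

row = trigger_a_before_update({"PRICE": 50, "QTY": 2})
```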
- Do not use ABORT when a user exit is called from a trigger function. When ABORT is issued in the trigger function itself, the Object Access Module is able to intercept the ABORT and pass a trigger error status back to the function. However, when the ABORT is issued in the user exit function called by the trigger, it is interpreted in the standard way, because that function is not aware that the call originated from a trigger and makes no allowance for it. Using ABORT in these situations (e.g. validations) is not recommended.
- It is very strongly recommended that you do not design triggers in such a way that "normal" RDML functions doing I/O operations are "aware" of their existence and attempt to communicate with them directly in any way (e.g. via the *LDA, data areas, etc.).
Where trigger "requests" are to be supported, introduce a virtual (or real) column into the table definition and use it to "fire" the trigger in the normal way.
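A hypothetical sketch of this request-column approach (Python, not RDML; the @@REQUEST column name and the REPRICE request are invented for illustration):

```python
# Hypothetical sketch (not LANSA RDML): a "request" column fires the
# trigger through the normal update path, instead of a side channel
# such as the *LDA or a data area. Column/request names are invented.

def after_update_trigger(row):
    if row.get("@@REQUEST") == "REPRICE":  # hypothetical request column
        row["PRICE"] = round(row["COST"] * 1.25, 2)
        row["@@REQUEST"] = None            # request handled

def update(row, changes):
    row.update(changes)        # the "normal" I/O operation
    after_update_trigger(row)  # trigger fires as it would for any update
    return row

row = update({"COST": 8.0, "PRICE": 9.0}, {"@@REQUEST": "REPRICE"})
```

Because the request travels in a column of the table itself, the updating function needs no knowledge of the trigger beyond setting that column, and the trigger fires through the same path as any other update.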