17.4.6 Implementation, Performance and Throughput

The IBM i job queue emulation facilities were designed with these objectives in mind:


The ability to queue tasks when the "monitor" is not active. Submitted jobs persist beyond the duration of a session, even across a complete power-down and restart of the machine.

This persistence is not supported when encryption of the job details is switched on.

This is because the encryption key is valid only while the Job Queue Monitor is running. A new instance of the Job Queue Monitor generates a new key, so any existing encrypted jobs cannot be decrypted, and are automatically deleted if any exist.
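This key lifecycle can be modelled in a few lines. The following is a minimal Python sketch, not the actual implementation: the `JobQueueMonitor` class, the file-based store, and the toy XOR/HMAC scheme are all illustrative stand-ins. Each monitor instance generates a fresh session key, so on startup any stored job whose authentication tag no longer verifies under the new key is silently discarded, while unencrypted jobs survive the restart.

```python
import hashlib, hmac, json, os, secrets, tempfile

QUEUE_FILE = os.path.join(tempfile.gettempdir(), "jobq.json")  # hypothetical store

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher for illustration only -- not real cryptography.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

class JobQueueMonitor:
    def __init__(self, encrypt: bool):
        self.encrypt = encrypt
        # The session key exists only for this monitor instance.
        self.key = secrets.token_bytes(32) if encrypt else None
        self.jobs = self._load()

    def _load(self):
        if not os.path.exists(QUEUE_FILE):
            return []
        with open(QUEUE_FILE) as f:
            stored = json.load(f)
        kept = []
        for rec in stored:
            if rec["encrypted"]:
                # A new monitor instance has a new key, so the stored tag
                # can never verify: encrypted jobs are automatically deleted.
                tag = hmac.new(self.key or b"", bytes.fromhex(rec["body"]),
                               hashlib.sha256).hexdigest()
                if tag != rec["tag"]:
                    continue
            kept.append(rec)
        self._save(kept)
        return kept

    def submit(self, details: str):
        body = details.encode()
        tag = ""
        if self.encrypt:
            body = keystream_xor(self.key, body)
            tag = hmac.new(self.key, body, hashlib.sha256).hexdigest()
        self.jobs.append({"encrypted": self.encrypt, "body": body.hex(), "tag": tag})
        self._save(self.jobs)

    def _save(self, jobs):
        with open(QUEUE_FILE, "w") as f:
            json.dump(jobs, f)
```

With `encrypt=False`, a job submitted by one monitor instance is still on the queue when the next instance starts; with `encrypt=True`, the restarted instance finds it undecryptable and removes it.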


The method used to emulate IBM i job queues has no deep operating system dependencies and can be easily ported to other multi-tasking operating systems in the future.

Volume capabilities and throughput rates similar to those of the IBM i facilities.

The IBM i subsystem and job queue facilities are designed to handle relatively low batch job throughput rates of at most one job every 2 to 5 seconds.

Ultimately the throughput rate is governed by the amount of work each submitted job does (because jobs behind it on the queue must wait), but rates beyond one job every 2 to 3 seconds should not be expected, no matter how simple the submitted job and no matter how powerful the processor.
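To see why per-job work caps the rate, consider a minimal single-threaded monitor loop. This is a Python sketch of the general principle, not of the product's internals: with only one job in flight at a time, the best achievable throughput is the reciprocal of the per-job run time plus dispatch overhead.

```python
import queue
import threading
import time

def run_monitor(job_queue: "queue.Queue", done: list):
    # A job queue monitor runs jobs strictly one at a time, so the
    # queue's throughput can never exceed 1 / (per-job run time).
    while True:
        job = job_queue.get()
        if job is None:          # sentinel: shut the monitor down
            break
        job()                    # the submitted job's own work dominates
        done.append(time.monotonic())

PER_JOB_SECONDS = 0.05           # stand-in for even a "simple" submitted job

jobs = queue.Queue()
done = []
worker = threading.Thread(target=run_monitor, args=(jobs, done))

start = time.monotonic()
worker.start()
for _ in range(10):
    jobs.put(lambda: time.sleep(PER_JOB_SECONDS))
jobs.put(None)
worker.join()
elapsed = time.monotonic() - start

print(f"throughput ~ {len(done) / elapsed:.1f} jobs/second")
```

Here ten 50 ms jobs take at least half a second in total, so throughput cannot exceed 20 jobs per second regardless of how fast jobs are submitted; scale the per-job time up to realistic batch work and the rate falls accordingly.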

IBM i application designers do not use the IBM i job queue capabilities to process jobs where throughput rates of tens or hundreds of jobs per second are required. When high rates like this are required, more advanced facilities such as data queues, named pipes, etc., should be implemented via shipped or user-defined Built-In Functions.


The actual throughput rate achieved by a job queue monitor, and even the rate at which jobs can be submitted to a queue, depends upon many factors such as CPU power, disk usage, and LAN traffic. Where very high throughput rates are an essential element of a design, it is strongly recommended that a prototype be constructed and verified early in the design cycle.
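Such a prototype need not be elaborate. The sketch below is a hypothetical harness, not a tool shipped with the product: the durable-append `submit` function is only a stand-in for a real submission path, but timing repeated submissions and reporting latency percentiles is usually enough to confirm or refute a throughput assumption early.

```python
import json
import os
import statistics
import tempfile
import time

def timed(fn, n):
    # Return per-call latencies, in milliseconds, for n calls of fn.
    samples = []
    for i in range(n):
        t0 = time.perf_counter()
        fn(i)
        samples.append((time.perf_counter() - t0) * 1000.0)
    return samples

spool = os.path.join(tempfile.mkdtemp(), "jobs.spool")  # hypothetical spool file

def submit(i):
    # Stand-in for a real submission: one durable append per job.
    # Forcing the write to disk is usually the dominant cost.
    with open(spool, "a") as f:
        f.write(json.dumps({"job": i}) + "\n")
        f.flush()
        os.fsync(f.fileno())

lat = timed(submit, 200)
print(f"median submit latency: {statistics.median(lat):.2f} ms")
print(f"p95 submit latency:    {sorted(lat)[int(len(lat) * 0.95)]:.2f} ms")
```

Running a harness like this on the target hardware, with job payloads of realistic size, gives an early upper bound on the submission rate before any application code is committed to the design.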