Parameter: Server_Pool_Tasks

Short description: Sets the size of the NRPC worker thread pool per Notes port. Default: 40 threads per port. Scales the number of concurrent NRPC requests the server task can handle. Together with Server_Max_Concurrent_Trans it controls the thread-pool behavior.

Profile
Parameter | Server_Pool_Tasks
Category | Performance (NRPC threading)
Component | Server (server task / NRPC listener)
Available since | 8.5
Supported versions | 9.0.1, 10.0, 11.0, 12.0, 14.0, 14.5, 14.5.1
GUI equivalent | notes.ini only (no GUI)
Possible values | Integer ≥ 20. Default: 40 (per active Notes port). Typical increase: 60–100.
Description
Server_Pool_Tasks and Server_Max_Concurrent_Trans jointly control the thread-pool behavior of the NRPC layer — i.e. nserver.exe (Windows) or server (Unix). Other tasks (HTTP, IMAP, Router, LDAP) are not affected.
How the pool works:
- Domino creates a separate worker thread pool per Notes port. So if three Notes ports are operated (e.g. TCPIP, ClusterPort, Spare), three independent pools are created with Server_Pool_Tasks threads each.
- With the default of 40 and three ports, that means 120 worker threads plus overhead threads.
- Worker threads handle NRPC transactions (database reads, view queries, indexer requests from clients, replication responses to other servers).
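The per-port multiplication described above can be sketched as a minimal notes.ini fragment (the port names are illustrative assumptions, not required values):

```ini
; notes.ini excerpt – port names are examples only
Ports=TCPIP,ClusterPort,Spare
; Server_Pool_Tasks is not set, so the default of 40 applies per port:
; 3 ports x 40 threads = 120 NRPC worker threads in total
```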
When to increase?
- Very many simultaneous Notes clients (≥ several hundred active users).
- High RPC load through add-ons / backend apps making many small calls.
- Hub servers receiving many replication connections at the same time.
Symptoms of a bottleneck:
- show stat Server.Trans.Total and Server.Trans.PerMinute show high load while Server.Trans.Queue.Length and Server.Trans.Queue.MaxLen become unusually high.
- Users report “hourglass” pauses, mail databases open with delay.
- The console command show stat Server.Pools.* (where available) shows exhausted thread pools.
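The statistics named in the symptom list can be queried one by one at the Domino server console; note that Server.Pools.* is not available on every release:

```
show stat Server.Trans.Total
show stat Server.Trans.PerMinute
show stat Server.Trans.Queue.Length
show stat Server.Trans.Queue.MaxLen
show stat Server.Pools.*
```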
Mind the per-port limit: Domino does not allow different pool sizes per Notes port (as of 11/12/14). If you set Server_Pool_Tasks=80, you get 80 on every active port.
Relationship with Server_Max_Concurrent_Trans: the latter limits the number of parallel active transactions. Default: Server_Max_Concurrent_Trans=20. For heavily parallelized servers, raise both together, e.g. Server_Pool_Tasks=80 plus Server_Max_Concurrent_Trans=40.
Example configuration
Large multi-port server:
Server_Pool_Tasks=60
Server_Max_Concurrent_Trans=30
Very heavily used hub:
Server_Pool_Tasks=100
Server_Max_Concurrent_Trans=40
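As a notes.ini fragment, the hub example above would look like this (one setting per line, placed in the server's notes.ini):

```ini
; notes.ini excerpt – very heavily used hub
Server_Pool_Tasks=100
Server_Max_Concurrent_Trans=40
```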
Notes & pitfalls
- Server restart required: the thread pool is created at server start. set config Server_Pool_Tasks=… at runtime only changes the notes.ini, not the running pool.
- RAM requirement: every thread holds its own stack allocation (typically 1–2 MB). With Server_Pool_Tasks=100 and 3 ports, that means roughly 300–600 MB of additional working set.
- Find the sweet spot: too many threads cause context-switching overhead and can degrade performance. Increase gradually (40 → 60 → 80 → 100) and compare statistics.
- Do not confuse with Replicators: Replicators scales the number of separate replicator tasks, Server_Pool_Tasks scales NRPC worker threads.
- Mind per-port multiplication: on servers with many Notes ports (e.g. additional ClusterPort), the value multiplies with the number of ports.
- Monitoring: show stat Server.Trans.*, show stat Server.Pools.* (where available), show server, and platform tools (Performance Monitor / top / pidstat).
- Works on all supported platforms.
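A console sketch of the change procedure from the restart note above (assuming a target value of 80): first persist the setting, then restart the server so the pool is rebuilt at startup.

```
set config Server_Pool_Tasks=80
restart server
```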
Sources (HCL Product Documentation)
- HCL Support Knowledge Base KB0032882 – relationship between Server_Max_Concurrent_Trans and Server_Pool_Tasks
- HCL Support Knowledge Base KB0037705 – Server_Pool_Tasks default 40 per Notes port
- Note: there is no dedicated entry for this parameter in the official documentation at help.hcl-software.com/domino/doc; best-practice recommendations come exclusively from the HCL Support Knowledge Base.