<img src="./images/logo.png" width="340px" height="266px"/>
<h2 align="center">Node Thread Pool and Cluster Pool :arrow_double_up: :on:</h2>
<a href="https://ko-fi.com/Q5Q31D6QY">
<img alt="Ko-fi" src="https://ko-fi.com/img/githubbutton_sm.svg"></a>
<a href="https://www.npmjs.com/package/poolifier">
<img alt="Weekly Downloads" src="https://img.shields.io/npm/dw/poolifier"></a>
<a href="https://github.com/poolifier/poolifier/actions">
<img alt="Actions Status" src="https://github.com/poolifier/poolifier/workflows/NodeCI/badge.svg"></a>
<a href="https://sonarcloud.io/dashboard?id=pioardi_poolifier">
<img alt="Quality Gate Status" src="https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=alert_status"></a>
<a href="https://sonarcloud.io/component_measures/metric/coverage/list?id=pioardi_poolifier">
<img alt="Code coverage" src="https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=coverage"></a>
<a href="https://standardjs.com">
<img alt="Javascript Standard Style Guide" src="https://img.shields.io/badge/code_style-standard-brightgreen.svg"></a>
<a href="https://gitter.im/poolifier/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge">
<img alt="Gitter chat" src="https://badges.gitter.im/poolifier/community.svg"></a>
<a href="https://badgen.net/badge/Dependabot/enabled/green?icon=dependabot">
<img alt="Dependabot" src="https://badgen.net/badge/Dependabot/enabled/green?icon=dependabot"></a>
<a href="http://makeapullrequest.com">
<img alt="PR Welcome" src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square"></a>
<a href="https://img.shields.io/static/v1?label=dependencies&message=no%20dependencies&color=brightgreen">
<img alt="No dependencies" src="https://img.shields.io/static/v1?label=dependencies&message=no%20dependencies&color=brightgreen"></a>
Poolifier is used to perform CPU-intensive and I/O-intensive tasks on Node.js servers. It implements worker pools (yes, more worker pool implementations, so you can choose the one that fits you best) using the [worker-threads](https://nodejs.org/api/worker_threads.html#worker_threads_worker_threads) module and cluster pools using the [Node.js cluster](https://nodejs.org/api/cluster.html) module.
With poolifier you can improve your **performance** and resolve problems related to the event loop.
Moreover you can execute your tasks using an API designed to improve the **developer experience**.
Please consult our <a href="#general-guidance">general guidelines</a>.
- Performance :racehorse: [benchmarks](./benchmarks/README.md)
- Security :bank: :cop: [![Security Rating](https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=security_rating)](https://sonarcloud.io/dashboard?id=pioardi_poolifier) [![Vulnerabilities](https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=vulnerabilities)](https://sonarcloud.io/dashboard?id=pioardi_poolifier)
- Easy to use :couple:
- Easy to switch from one pool type to another, easy to tune :white_check_mark:
- Dynamic pool size :white_check_mark:
- No runtime dependencies :white_check_mark:
- Proper async integration with Node.js async hooks :white_check_mark:
- Support for the worker-threads and cluster Node.js modules :white_check_mark:
- Support for sync and async tasks :white_check_mark:
- General guidance on which pool to use :white_check_mark:
- Widely tested :white_check_mark:
- Error handling out of the box :white_check_mark:
- Active community :white_check_mark:
- Code quality :octocat: [![Bugs](https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=bugs)](https://sonarcloud.io/dashboard?id=pioardi_poolifier)
  [![Code Smells](https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=code_smells)](https://sonarcloud.io/dashboard?id=pioardi_poolifier)
  [![Duplicated Lines (%)](https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=duplicated_lines_density)](https://sonarcloud.io/dashboard?id=pioardi_poolifier)
  [![Maintainability Rating](https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=sqale_rating)](https://sonarcloud.io/dashboard?id=pioardi_poolifier)
  [![Reliability Rating](https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=reliability_rating)](https://sonarcloud.io/dashboard?id=pioardi_poolifier)
  [![Technical Debt](https://sonarcloud.io/api/project_badges/measure?project=pioardi_poolifier&metric=sqale_index)](https://sonarcloud.io/dashboard?id=pioardi_poolifier)
<a href="#overview">Overview</a>
<a href="#installation">Installation</a>
<a href="#usage">Usage</a>
<a href="#node-versions">Node versions</a>
<a href="#api">API</a>
<a href="#general-guidance">General guidance</a>
<a href="#contribute">Contribute</a>
<a href="#team">Team</a>
<a href="#license">License</a>
## Overview

Poolifier contains two [worker-threads](https://nodejs.org/api/worker_threads.html#worker_threads_worker_threads)/[cluster worker](https://nodejs.org/api/cluster.html#cluster_class_worker) pool implementations; you don't have to deal with worker-threads/cluster worker complexity.
The first implementation is a static worker pool, with a defined number of workers that are started at creation time and will be reused.
The second implementation is a dynamic worker pool, with a number of workers started at creation time (these workers will always be active and reused) and other workers created when the load increases (with an upper limit; these workers will be reused while active). The newly created workers are stopped after a configurable period of inactivity.
You have to implement your worker by extending the ThreadWorker or ClusterWorker class.
## Installation

```bash
npm install poolifier --save
```
## Usage

You can implement a worker-threads worker in a simple way by extending the class ThreadWorker:

```js
const { ThreadWorker } = require('poolifier')

function yourFunction (data) {
  // this will be executed in the worker thread,
  // the data will be received by using the execute method
  return { ok: 1 }
}

module.exports = new ThreadWorker(yourFunction, {
  maxInactiveTime: 60000,
  async: false
})
```
Instantiate your pool based on your needs:

```js
const { FixedThreadPool, DynamicThreadPool } = require('poolifier')

// a fixed worker-threads pool
const pool = new FixedThreadPool(15,
  './yourWorker.js',
  { errorHandler: (e) => console.error(e), onlineHandler: () => console.log('worker is online') })

// or a dynamic worker-threads pool
const pool = new DynamicThreadPool(10, 100,
  './yourWorker.js',
  { errorHandler: (e) => console.error(e), onlineHandler: () => console.log('worker is online') })

pool.emitter.on('busy', () => console.log('Pool is busy'))

// the execute method signature is the same for both implementations,
// so you can easily switch from one to another
pool.execute({}).then(res => {
  console.log(res)
}).catch(err => {
  console.error(err)
})
```
You can do the same with the classes ClusterWorker, FixedClusterPool and DynamicClusterPool; for instance:
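Below is a minimal cluster-based sketch (the worker file name `./yourWorker.js` and the pool size are just examples):

```js
// yourWorker.js
const { ClusterWorker } = require('poolifier')

function yourFunction (data) {
  // this will be executed in a forked child process
  return { ok: 1 }
}

module.exports = new ClusterWorker(yourFunction, { maxInactiveTime: 60000 })

// main.js
const { FixedClusterPool } = require('poolifier')

const pool = new FixedClusterPool(15, './yourWorker.js',
  { errorHandler: (e) => console.error(e) })

pool.execute({}).then(res => console.log(res)).catch(err => console.error(err))
```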
**See the examples folder for more details (in particular if you want to use a pool for [multiple functions](./examples/multiFunctionExample.js)).**
**TypeScript is also supported; see the examples folder to find out how to use it.**

Remember that workers can only send and receive serializable data.
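For example (a sketch; the data shapes are arbitrary and `pool` is assumed to be created as shown above):

```js
// plain data survives the copy between the main thread/process and the worker
pool.execute({ n: 42, items: [1, 2, 3] })
  .then(res => console.log(res))

// functions, sockets and other non-serializable values cannot cross the
// thread/process boundary, so don't put them in the task data or in the
// worker's return value:
// pool.execute({ callback: () => {} }) // this would fail to serialize
```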
## Node versions

Node versions >= 16.x are supported.
## API

### [Documentation](https://poolifier.github.io/poolifier/)

### `pool = new FixedThreadPool/FixedClusterPool(numberOfThreads/numberOfWorkers, filePath, opts)`

`numberOfThreads/numberOfWorkers` (mandatory) Number of workers for this pool
`filePath` (mandatory) Path to a file with a worker implementation
`opts` (optional) An object with these properties:

- `messageHandler` (optional) - A function that will listen for the message event on each worker
- `errorHandler` (optional) - A function that will listen for the error event on each worker
- `onlineHandler` (optional) - A function that will listen for the online event on each worker
- `exitHandler` (optional) - A function that will listen for the exit event on each worker
- `workerChoiceStrategy` (optional) - The worker choice strategy to use in this pool (see the sketch after this options list):

  - `WorkerChoiceStrategies.ROUND_ROBIN`: Submit tasks to the workers in a round robin fashion
  - `WorkerChoiceStrategies.LESS_RECENTLY_USED`: Submit tasks to the less recently used worker
  - `WorkerChoiceStrategies.WEIGHTED_ROUND_ROBIN`: Submit tasks to the workers using a weighted round robin scheduling algorithm based on tasks execution time
  - `WorkerChoiceStrategies.FAIR_SHARE`: Submit tasks to the workers using a fair share tasks scheduling algorithm based on tasks execution time

  The `WorkerChoiceStrategies.WEIGHTED_ROUND_ROBIN` and `WorkerChoiceStrategies.FAIR_SHARE` strategies are targeted at heavy and long-running tasks.
  Default: `WorkerChoiceStrategies.ROUND_ROBIN`

- `enableEvents` (optional) - Events emission enablement in this pool. Default: true
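A sketch of how these options can be passed (the pool size and worker file path are illustrative):

```js
const { FixedThreadPool, WorkerChoiceStrategies } = require('poolifier')

const pool = new FixedThreadPool(8, './yourWorker.js', {
  // schedule heavy, long-running tasks with the fair share algorithm
  workerChoiceStrategy: WorkerChoiceStrategies.FAIR_SHARE,
  // disable pool events emission (e.g. the 'busy' event)
  enableEvents: false
})
```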
### `pool = new DynamicThreadPool/DynamicClusterPool(min, max, filePath, opts)`

`min` (mandatory) Same as FixedThreadPool/FixedClusterPool numberOfThreads/numberOfWorkers; this number of workers will always be active
`max` (mandatory) Max number of workers that this pool can contain; the newly created workers will die after a threshold (default is 1 minute, you can override it in your worker implementation)
`filePath` (mandatory) Same as FixedThreadPool/FixedClusterPool
`opts` (optional) Same as FixedThreadPool/FixedClusterPool
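For example (a sketch; the numbers and the worker file path are illustrative):

```js
const { DynamicClusterPool } = require('poolifier')

// at least 5 workers always alive, up to 50 workers created under load
const pool = new DynamicClusterPool(5, 50, './yourWorker.js',
  { errorHandler: (e) => console.error(e) })
```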
### `pool.execute(data)`

The execute method is available on both pool implementations (return type: Promise):
`data` (mandatory) An object that you want to pass to your worker implementation
### `pool.destroy()`

The destroy method is available on both pool implementations.
This method will call the terminate method on each worker.
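A typical lifecycle sketch (assuming an async context, a `pool` created as shown in the usage section, and that `destroy()` returns a promise):

```js
async function run () {
  // submit a task and wait for its result
  const result = await pool.execute({ input: 'some data' })
  console.log(result)

  // gracefully terminate all the workers once the pool is no longer needed
  await pool.destroy()
}

run().catch(err => console.error(err))
```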
### `class YourWorker extends ThreadWorker/ClusterWorker`

`fn` (mandatory) The function that you want to execute on the worker
`opts` (optional) An object with these properties (see the sketch after this list):

- `maxInactiveTime` - Max time (in ms) to wait for tasks to work on; after this period the newly created worker will die.
  The last active time of your worker unit will be updated when a task is submitted to a worker or when a worker terminates a task.
  If `killBehavior` is set to `KillBehaviors.HARD` this value also represents the timeout for the tasks that you submit to the pool: when this timeout expires your task is interrupted and the worker is killed if it is not part of the minimum size of the pool.
  If `killBehavior` is set to `KillBehaviors.SOFT` your tasks have no timeout and your workers will not be terminated until your task is completed.
  Default: 60000 ms
- `async` - true/false; true if your function contains async code pieces, else false
- `killBehavior` - Dictates if your async unit (worker/process) will be deleted in case a task is active on it.
  **KillBehaviors.SOFT**: If `currentTime - lastActiveTime` is greater than `maxInactiveTime` but a task is still running, then the worker **won't** be deleted.
  **KillBehaviors.HARD**: If `currentTime - lastActiveTime` is greater than `maxInactiveTime` but a task is still running, then the worker will be deleted.
  This option only applies to the newly created workers.
  Default: `KillBehaviors.SOFT`
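A sketch of a worker using these options (the task function and the values are illustrative):

```js
const { ThreadWorker, KillBehaviors } = require('poolifier')

async function asyncTask (data) {
  // simulate some asynchronous work
  return new Promise(resolve => setTimeout(() => resolve({ done: true }), 100))
}

module.exports = new ThreadWorker(asyncTask, {
  maxInactiveTime: 30000, // an idle worker may be killed after 30 seconds
  async: true, // the task function returns a promise
  killBehavior: KillBehaviors.HARD // with HARD, maxInactiveTime also acts as a task timeout
})
```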
## General guidance

Performance is one of the main targets of these worker pool implementations; we want to have a strong focus on this.
We already have a [benchmarks](./benchmarks/README.md) folder where you can find some comparisons.
### Internal Node.js thread pool

Before jumping into each poolifier pool type, let's highlight that **Node.js comes with a thread pool already**: the libuv thread pool, where some particular tasks already run by default.
Please take a look at [which tasks run on the libuv thread pool](https://nodejs.org/en/docs/guides/dont-block-the-event-loop/#what-code-runs-on-the-worker-pool).

**If your task runs on the libuv thread pool**, you can try to:

- Tune the libuv thread pool size by setting [UV_THREADPOOL_SIZE](https://nodejs.org/api/cli.html#cli_uv_threadpool_size_size)
- Use the poolifier cluster pools: spawning child processes will also increase the number of libuv threads, since any new child process comes with a separate libuv thread pool. **More threads does not mean faster, so please tune your application.**
### Cluster vs Threads worker pools

**If your task does not run into the libuv thread pool** and is CPU intensive, then poolifier **thread pools** (FixedThreadPool and DynamicThreadPool) are suggested for running it. You can still run I/O intensive tasks into thread pools, but the performance enhancement is expected to be minimal.
Thread pools are built on top of the Node.js [worker-threads](https://nodejs.org/api/worker_threads.html#worker_threads_worker_threads) module.
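For instance, a CPU-bound task like the naive Fibonacci below is a good fit for a thread pool (a sketch; the file names are illustrative):

```js
// fibonacciWorker.js
const { ThreadWorker } = require('poolifier')

function fibonacci (n) {
  return n < 2 ? n : fibonacci(n - 1) + fibonacci(n - 2)
}

module.exports = new ThreadWorker(data => ({ fibonacci: fibonacci(data.n) }))

// main.js
const { FixedThreadPool } = require('poolifier')

const pool = new FixedThreadPool(4, './fibonacciWorker.js')

// the event loop stays free while the computation runs in a worker thread
pool.execute({ n: 40 }).then(res => console.log(res.fibonacci))
```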
**If your task does not run into the libuv thread pool** and is I/O intensive, then poolifier **cluster pools** (FixedClusterPool and DynamicClusterPool) are suggested for running it. Again, you can still run CPU intensive tasks into cluster pools, but the performance enhancement is expected to be minimal.
Consider that by default Node.js already has great performance for I/O tasks (asynchronous I/O).
Cluster pools are built on top of the Node.js [cluster](https://nodejs.org/api/cluster.html) module.

If your task contains code that runs on libuv plus code that is CPU intensive or I/O intensive, you can either split it or combine more strategies (i.e. tune the number of libuv threads and use cluster/thread pools).
But in general, **always profile your application**.
### Fixed vs Dynamic pools

To choose your pool, consider that with a FixedThreadPool/FixedClusterPool or a DynamicThreadPool/DynamicClusterPool (in this case the min parameter passed to the constructor is important) your application memory footprint will increase.
By increasing the memory footprint, your application will be ready to accept more tasks, but during idle time it will consume more memory.
One good approach, from my point of view, is to profile your application using a fixed or dynamic worker pool, and to watch your application metrics when you increase/decrease the number of workers.
For example you could keep the memory footprint low by choosing a DynamicThreadPool/DynamicClusterPool with 5 workers, and allow new workers to be created as needed up to 50/100 workers; this is the advantage of using a DynamicThreadPool/DynamicClusterPool.
But in general, **always profile your application**.
## Contribute

See guidelines [CONTRIBUTING](CONTRIBUTING.md).
Choose your task here [2.3.x](https://github.com/orgs/poolifier/projects/1), propose an idea, a fix, an improvement.
## Team

<!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->

- [**Alessandro Pio Ardizio**](https://github.com/pioardi)
- [**Shinigami92**](https://github.com/Shinigami92)
- [**Jérôme Benoit**](https://github.com/jerome-benoit)