X-Git-Url: https://git.piment-noir.org/?a=blobdiff_plain;ds=inline;f=README.MD;h=e77bc1cdf514678d63c24589287aaa969dd48bd7;hb=da1f2de4c9c3a375bbd50f1daf65d1b74afd0bf4;hp=c173361858778bdb9627f01fe1aac90bccc1eca4;hpb=13031992e5945fe2aef38696c984d8fe30d4db5a;p=poolifier.git
diff --git a/README.MD b/README.MD
index c1733618..e77bc1cd 100644
--- a/README.MD
+++ b/README.MD
@@ -1,6 +1,140 @@
-# Node Pool :arrow_double_up: :on:
-Node pool contains two worker-threads pool implementations.
-The first implementation is a static thread pool , with a defined number of threads that are started at creation time .
-The second implementation is a dynamic thread pool with a number of threads started at creation time and other threads created when the load will increase ( with an upper limit )
+# Node Thread Pool :arrow_double_up: :on:
+[![JavaScript Style Guide](https://img.shields.io/badge/code_style-standard-brightgreen.svg)](https://standardjs.com)
+[![Dependabot](https://badgen.net/dependabot/dependabot/dependabot-core/?icon=dependabot)](https://badgen.net/dependabot/dependabot/dependabot-core/?icon=dependabot)
+[![Actions Status](https://github.com/pioardi/node-pool/workflows/NodeCI/badge.svg)](https://github.com/pioardi/node-pool/actions)
+[![Coverage Status](https://coveralls.io/repos/github/pioardi/node-thread-pool/badge.svg?branch=master)](https://coveralls.io/github/pioardi/node-thread-pool?branch=master)
+[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com)
+[![NODEP](https://img.shields.io/static/v1?label=dependencies&message=no%20dependencies&color=brightgreen)](https://img.shields.io/static/v1?label=dependencies&message=no%20dependencies&color=brightgreen)
+
 Contents
+
+- [Overview](#overview)
+- [Installation](#installation)
+- [Usage](#usage)
+- [Node versions](#node-versions)
+- [API](#api)
+- [Choose your pool](#choose-your-pool)
+- [Contribute](#contribute)
+- [License](#license)
+
+## Overview
+Node pool contains two worker-threads pool implementations, so you don't have to deal with the worker-threads complexity yourself.
+The first implementation is a static thread pool, with a defined number of threads that are started at creation time and will be reused.
+The second implementation is a dynamic thread pool, with a number of threads started at creation time (these threads will always be active and reused) and other threads created when the load increases (up to an upper limit; these threads will also be reused while active); the newly created threads will be stopped after a configurable period of inactivity.
+You have to implement your worker by extending the ThreadWorker class.
+## Installation
+
+```
+npm install poolifier --save
+```
+## Usage
+
+You can implement a worker in a simple way by extending the ThreadWorker class:
+
+```js
+'use strict'
+const { ThreadWorker } = require('poolifier')
+
+function yourFunction (data) {
+ // this will be executed in the worker thread,
+ // the data will be received by using the execute method
+ return { ok: 1 }
+}
+
+class MyWorker extends ThreadWorker {
+ constructor () {
+    super(yourFunction, { maxInactiveTime: 1000 * 60 })
+ }
+}
+module.exports = new MyWorker()
+```
+
+Instantiate your pool based on your needs:
+
+```js
+'use strict'
+const { FixedThreadPool, DynamicThreadPool } = require('poolifier')
+
+// a fixed thread pool
+const pool = new FixedThreadPool(15,
+ './yourWorker.js',
+ { errorHandler: (e) => console.error(e), onlineHandler: () => console.log('worker is online') })
+
+// or a dynamic thread pool (pick one of the two; a second `const pool`
+// in the same scope would be a redeclaration error)
+// const pool = new DynamicThreadPool(10, 100,
+//   './yourWorker.js',
+//   { errorHandler: (e) => console.error(e), onlineHandler: () => console.log('worker is online') })
+
+pool.emitter.on('FullPool', () => console.log('Pool is full'))
+
+// the execute method signature is the same for both implementations,
+// so you can easily switch from one to the other
+pool.execute({}).then(res => {
+ console.log(res)
+}).catch(err => console.error(err))
+
+```
+
+See the examples folder for more details (in particular if you want to use a pool for [multiple functions](./examples/multiFunctionExample.js)).
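One common pattern for a multi-function pool is a single worker function that dispatches on a field of the task data. This is a sketch of that idea; the field name `fname` and the task functions are illustrative choices here, not something mandated by poolifier:

```javascript
'use strict'
// Dispatch table: the task data carries the name of the function to run.
// The field name 'fname' is an illustrative choice, not a poolifier API.
const functions = {
  double: data => ({ result: data.n * 2 }),
  square: data => ({ result: data.n * data.n })
}

// In a real worker file this function would be passed to ThreadWorker
function multiFunctionWorker (data) {
  return functions[data.fname](data)
}

console.log(multiFunctionWorker({ fname: 'double', n: 21 })) // { result: 42 }
```

The caller then picks the task function per `execute` call, e.g. `pool.execute({ fname: 'square', n: 3 })`.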
+
+## Node versions
+
+You can use Node.js versions 12.x and 13.x.
+
+## API
+
+### `pool = new FixedThreadPool(numThreads, filePath, opts)`
+`numThreads` (mandatory) Number of threads for this worker pool
+`filePath` (mandatory) Path to a file with a worker implementation
+`opts` (optional) An object with these properties:
+- `errorHandler` - A function that will listen for the error event on each worker thread
+- `onlineHandler` - A function that will listen for the online event on each worker thread
+- `exitHandler` - A function that will listen for the exit event on each worker thread
+- `maxTasks` - Used to set maxListeners on the event emitters (workers are event emitters); this only avoids spurious warning messages
+
+### `pool = new DynamicThreadPool(min, max, filePath, opts)`
+`min` (mandatory) Same as FixedThreadPool numThreads; this number of threads will always be active
+`max` (mandatory) Max number of workers that this pool can contain; newly created threads will die after a threshold of inactivity (default is 1 minute; you can override it in your worker implementation)
+`filePath` (mandatory) Same as FixedThreadPool
+`opts` (optional) Same as FixedThreadPool
+
+### `pool.execute(data)`
+The execute method is available on both pool implementations (return type: Promise):
+`data` (mandatory) An object that you want to pass to your worker implementation
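Because execute returns a Promise per task, tasks can be submitted concurrently and collected with Promise.all. The sketch below uses a stand-in pool object (`fakePool` is hypothetical, standing in for a real FixedThreadPool) so it runs without a worker file; a real pool exposes the same promise-based `execute(data)` signature:

```javascript
'use strict'
// fakePool stands in for a real poolifier pool so this snippet
// runs on its own; a real pool has the same execute(data) shape.
const fakePool = {
  execute (data) {
    return Promise.resolve({ ok: 1, input: data })
  }
}

// submit several tasks concurrently and collect all results
Promise.all([1, 2, 3].map(n => fakePool.execute({ n })))
  .then(results => console.log(results.length)) // 3
```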
+
+### `pool.destroy()`
+The destroy method is available on both pool implementations.
+This method will call the terminate method on each worker.
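To illustrate the terminate-each-worker behaviour without spawning real threads, here is a sketch with a hypothetical stand-in class (`StubPool` is not the real implementation, only a model of what destroy does):

```javascript
'use strict'
// StubPool is a hypothetical stand-in, not poolifier's implementation:
// it only illustrates that destroy() terminates every worker it owns.
class StubPool {
  constructor (numWorkers) {
    this.workers = Array.from({ length: numWorkers }, () => ({
      terminated: false,
      terminate () {
        this.terminated = true
        return Promise.resolve()
      }
    }))
  }

  destroy () {
    return Promise.all(this.workers.map(w => w.terminate()))
  }
}

const stub = new StubPool(3)
stub.destroy().then(() => {
  console.log(stub.workers.every(w => w.terminated)) // true
})
```

In an application you would typically await `pool.destroy()` during shutdown, after all pending execute promises have settled.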
+
+
+### `class YourWorker extends ThreadWorker`
+`fn` (mandatory) The function that you want to execute on the worker thread
+`opts` (optional) An object with these properties:
+- `maxInactiveTime` - Max time (in ms) that a worker thread will wait for new tasks; after this period of inactivity, newly created (dynamic) worker threads will die
+
+## Choose your pool
+
+Performance is one of the main targets of these thread pool implementations; we want to keep a strong focus on this.
+We already have a bench folder where you can find some comparisons.
+When choosing your pool, consider that with a FixedThreadPool, or a DynamicThreadPool (in this case the min parameter passed to the constructor matters), your application memory footprint will increase.
+With an increased memory footprint, your application will be ready to accept more CPU-bound tasks, but during idle time it will consume more memory.
+One good approach, from my point of view, is to profile your application using a fixed or dynamic thread pool and to look at your application metrics as you increase or decrease the number of threads.
+For example, you could keep the memory footprint low by choosing a DynamicThreadPool with 5 threads and allowing new threads to be created as needed, up to 50/100; this is the advantage of the DynamicThreadPool.
+But in general, always profile your application.
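A lightweight way to collect such metrics is to wrap execute and record per-task latency. This is a sketch; `instrument` is an illustrative helper (not part of poolifier), and the demo pool stands in for a real one so the snippet runs on its own:

```javascript
'use strict'
// instrument() is an illustrative helper, not part of poolifier:
// it wraps a pool's promise-based execute(data) to record per-task
// latency in milliseconds.
function instrument (pool) {
  const latencies = []
  const originalExecute = pool.execute.bind(pool)
  pool.execute = data => {
    const start = Date.now()
    return originalExecute(data).then(res => {
      latencies.push(Date.now() - start)
      return res
    })
  }
  return latencies
}

// demo with a stand-in pool so the snippet runs without worker files
const demoPool = { execute: data => Promise.resolve(data) }
const latencies = instrument(demoPool)
demoPool.execute({ n: 1 }).then(() => console.log(latencies.length)) // 1
```

Comparing the recorded latencies and your process memory usage across different thread counts gives you the data the paragraph above suggests profiling for.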
+
+## Contribute
+
+See the [CONTRIBUTING](./.github/CONTRIBUTING.md) guidelines.
+
+
+## License
+
+[MIT](./LICENSE)